Coinbase shares soared after JPMorgan Chase upgraded the company’s stock rating to “overweight,” citing the monetization potential of the Base network. The bank also highlighted changes to Coinbase’s USDC rewards program and new Base-integrated DEX features as levers for profit growth and risk management.

JPMorgan Upgrades Coinbase Stock

Coinbase shares (COIN) rallied on Friday after JPMorgan Chase upgraded the exchange, highlighting new monetization opportunities associated with its Base network and USDC payout strategy. Analysts lifted their rating on Coinbase stock from “neutral” to “overweight” and raised their price target to $404, a 15% upside from current levels. According to JPMorgan, Coinbase is leaning into its Base Layer-2 technology and exploring ways to capture value from the nascent platform’s growth. The bank also predicted that the launch of a Base token could present a $12 billion to $34 billion opportunity, putting Coinbase’s retained share between $4 billion and $12 billion. JPMorgan analysts noted that the Base token’s distribution would likely favor developers, validators, and the broader Base community. Analysts also highlighted Coinbase’s integration of a DEX aggregator within the Base app as a hedge against the growth of decentralized exchanges.

USDC Rewards

JPMorgan also highlighted margin expansion potential from changes to Coinbase’s USDC rewards program. According to the bank’s analysts, Coinbase may reduce interest rewards for most users, offering them primarily to Coinbase One subscribers. The bank believes such a move could add around $374 million in annual earnings at current USDC interest rates and yields. COIN shares rallied over 9% following the news, reaching $353. The stock is up about 42% year-to-date, taking the company’s market capitalization past $90 billion.

Attention Turns To Coinbase Earnings

Coinbase will report its third-quarter earnings on October 30.
According to a report by Zacks Investment Research, analysts expect the company to post earnings of $1.06 per share, a 71% increase year-over-year, and revenue of $1.74 billion, up 44.1% from the same quarter last year. Coinbase reported a mixed second quarter, missing earnings expectations but achieving several operational milestones, including higher stablecoin revenue and rising stablecoin balances. The company has also been focusing on its subscriptions and services segment, which analysts expect to contribute between $665 million and $745 million in the third quarter. The exchange also highlighted several key developments during the quarter, including the approval of the GENIUS Act, which established a clear regulatory framework for stablecoin adoption in the US.

Disclaimer: This article is provided for informational purposes only. It is not offered or intended to be used as legal, tax, investment, financial, or other advice.
Source: Crypto Daily
What’s Behind the Record-Breaking 270K BTC Movement This Year?
2025 is shaping up as a record-breaking year for the movement of long-dormant Bitcoin. New data shows significant activity among BTC that had been inactive for seven or more years: so far this year, 270,000 such BTC have been moved, a new all-time high. With two months still remaining, the figure has already surpassed 2024’s 255,000 BTC and far exceeds 2023’s 59,000 BTC.

2025 Becomes Year of the Awakening

CryptoQuant explained that the surge in long-dormant coin movements may stem from several factors: old miners relocating long-held reserves, funds being transferred to fresh cold wallets for enhanced security, and partial liquidations as elevated prices present lucrative opportunities. At the current pace, 2025 could see more than 300,000 BTC with 7+ years of dormancy being moved. Adding to the trend, on-chain analytics platform Lookonchain highlighted a miner wallet, 18eY9o, that recently became active after 14 years of dormancy. The wallet holds 4,000 BTC mined in 2009 and consolidated in 2011, and its owner transferred 150 BTC, worth roughly $16.59 million. The move fits the broader pattern of early-era coins resurfacing, suggesting both strategic repositioning by miners and renewed liquidity from historically inactive addresses. Bitcoin posted a modest 2.1% gain in the past day and trades at $111,178. With more long-held coins potentially entering circulation, it will be worth watching how these dormant Bitcoin awakenings influence price trends and investor behavior in the final months of the year.

Dormant Bitcoins Awakening

In September, a 12-year-old miner-era wallet transferred 400.08 BTC, valued at roughly $44 million, to multiple new addresses. The coins were originally mined 15 years ago. A humorous X post even noted the generational wealth unlocked by awakening a decade-old wallet.
Earlier, in July, a 14-year-dormant wallet containing over 80,000 BTC moved 20,000 BTC worth $2.4 billion, with billions more sent to institutional custodian Galaxy Digital. The reactivation of multiple wallets, some funneling funds to exchanges like Binance and Bybit, drew immediate comparisons to the Mt. Gox trustee sell-offs of 2024 and raised fears of a market correction.

The post What’s Behind the Record-Breaking 270K BTC Movement This Year? appeared first on CryptoPotato.
AI Security System’s Alarming Blunder: Doritos Bag Mistaken for Firearm
In an era increasingly defined by digital innovation, the cryptocurrency community understands the critical balance between technological advancement and individual liberties. From blockchain’s promise of decentralization to the ever-present debate over data ownership, the reliability and ethics of advanced systems are paramount. This vigilance extends beyond finance to everyday applications, particularly when an AI security system misfires with alarming consequences, challenging our trust in the very technology meant to protect us. Imagine a scenario where a simple snack could trigger a full-blown security alert, leading to a student being handcuffed. This isn’t a dystopian novel; it’s a real-world incident that unfolded at Kenwood High School in Baltimore County, Maryland, highlighting the complex and sometimes unsettling implications of AI deployment in sensitive environments. The event is a stark reminder that while AI promises efficiency, its flaws can have profound human impacts, echoing the scrutiny applied to any centralized system in the crypto world.

The Alarming Reality of AI Security Systems in Schools

The incident involved student Taki Allen, who found himself in a distressing situation after an AI security system flagged his bag of Doritos as a potential firearm. Allen recounted to CNN affiliate WBAL, “I was just holding a Doritos bag — it was two hands and one finger out, and they said it looked like a gun.” The immediate consequence was severe: Allen was made to kneel with his hands behind his back and was handcuffed by authorities. Principal Katie Smith confirmed that the school’s security department had reviewed and canceled the gun detection alert. However, before the cancellation was fully communicated, the situation escalated, with the school resource officer involving local police.
Omnilert, the company behind the AI gun detection system, acknowledged the incident, stating, “We regret that this incident occurred and wish to convey our concern to the student and the wider community affected by the events that followed.” Despite this regret, Omnilert maintained that “the process functioned as intended.” That statement itself raises critical questions about what ‘intended function’ means when it results in a false accusation and physical restraint.

Understanding the Peril of AI False Positives

The incident at Kenwood High School is a stark reminder of the challenges posed by false positive alerts generated by AI systems. A false positive occurs when an AI system incorrectly identifies a non-threat as a threat. In this case, a common snack item was mistaken for a weapon, leading to an unwarranted security response. The ramifications extend beyond mere inconvenience, impacting individuals directly and eroding public trust in technology designed for safety.

Why do these errors happen? AI systems, especially those designed for visual detection, rely heavily on vast datasets for training. If these datasets lack diversity, are poorly annotated, or fail to adequately represent environmental factors like lighting, angles, or object occlusion, the system can misinterpret benign objects. A Doritos bag, under certain conditions, might possess visual characteristics that, to a machine learning algorithm, superficially resemble the outline of a firearm.

The consequences of such errors in high-stakes environments like schools are significant:

- Student Trauma: Being falsely accused and subjected to security protocols can be a deeply traumatic experience for a student.
- Resource Misallocation: Law enforcement and school personnel resources are diverted to address non-existent threats.
- Erosion of Trust: Repeated incidents can breed skepticism toward the very systems meant to ensure safety, potentially hindering their effectiveness when real threats emerge.

Unpacking Algorithmic Bias in AI Surveillance

Beyond simple misidentification, the incident raises uncomfortable questions about algorithmic bias, a persistent challenge in AI development. Algorithmic bias refers to systematic and repeatable errors in a computer system’s output that create unfair outcomes, such as favoring or disfavoring particular groups of people. While a direct link to racial bias wasn’t explicitly stated in Taki Allen’s case, such incidents often spark broader discussions about how AI systems, trained on potentially biased data, might disproportionately affect certain demographics.

Consider these points regarding algorithmic bias in AI security:

- Training Data: If the datasets used to train AI models are not diverse or representative of the population, the AI may perform poorly on individuals or objects outside its ‘learned’ parameters, producing higher error rates for certain groups or in specific contexts.
- Contextual Understanding: AI currently struggles with nuanced context. It sees patterns but often lacks the common sense to interpret situations beyond its programmed parameters, making it prone to errors when objects are presented in unusual ways or are not well matched to its threat library.
- Ethical Implications: Relying on AI for critical judgments, especially in environments involving minors, demands rigorous ethical review. The potential for an algorithm to make life-altering decisions based on imperfect data is a significant concern.

Addressing algorithmic bias requires continuous auditing of AI systems, diversifying training data, and involving diverse perspectives in the development and deployment phases to ensure fairness and accuracy.
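The false-positive trade-off described above can be made concrete with a minimal sketch. This is purely illustrative: the detection scores and object labels below are hypothetical, not data from Omnilert or any real system. It shows how a detector that fires whenever its confidence crosses a threshold can flag a benign object, and how raising the threshold changes the balance between false alarms and missed detections.

```python
# Illustrative sketch with hypothetical (label, confidence) pairs.
# "label" is the ground truth; "confidence" is what a detector might report.
detections = [
    ("firearm", 0.97),
    ("snack_bag", 0.62),   # benign object scoring near the threshold
    ("umbrella", 0.41),
    ("firearm", 0.88),
    ("backpack", 0.15),
]

def alarm_counts(threshold):
    """Return (true alarms, false alarms) at a given confidence threshold."""
    true_alarms = sum(1 for label, conf in detections
                      if conf >= threshold and label == "firearm")
    false_alarms = sum(1 for label, conf in detections
                       if conf >= threshold and label != "firearm")
    return true_alarms, false_alarms

# A low threshold catches both firearms but also flags the snack bag.
print(alarm_counts(0.5))   # -> (2, 1)
# A higher threshold drops the false alarm without missing a firearm here.
print(alarm_counts(0.7))   # -> (2, 0)
```

In a real deployment the threshold cannot simply be raised without cost: a stricter cutoff also risks missing genuine threats, which is why human review of alerts, rather than threshold tuning alone, is the usual safeguard.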
Navigating Privacy Concerns in an AI-Driven World

For those attuned to the decentralized ethos of cryptocurrency, the proliferation of AI surveillance systems raises significant privacy concerns. The incident at Kenwood High School is not just about mistaken identity; it’s about the pervasive nature of AI monitoring in public and semi-public spaces, and the implications for individual autonomy and data rights. The very presence of an AI system constantly scanning for threats means constant collection, processing, and analysis of data on individuals’ movements and belongings.

Key privacy considerations include:

- Constant Surveillance: Students and staff are under continuous digital scrutiny, potentially creating an environment of mistrust and reducing feelings of personal freedom.
- Data Handling: Who owns the data collected by these systems? How is it stored, secured, and used? The lack of transparency around data governance is a major red flag for privacy advocates.
- Mission Creep: What starts as a gun detection system could expand to monitor other behaviors, raising questions about the scope of surveillance and the potential for misuse.
- False Accusations and Digital Footprints: Even if an alert is canceled, the initial flagging creates a digital record. In an increasingly data-driven world, such records, however erroneous, could have unforeseen long-term consequences.

The cryptocurrency community, deeply familiar with the fight for digital self-sovereignty, understands that such systems, while ostensibly for security, can easily become tools for pervasive monitoring, chipping away at the fundamental right to privacy. The debate around AI surveillance parallels ongoing discussions about central bank digital currencies (CBDCs) and their potential for governmental oversight of personal finances, a fear that drives many toward decentralized alternatives.
Balancing Technology and Student Safety: A Critical Equation

While the goal of enhancing student safety is paramount, the methods employed must not inadvertently cause harm or infringe upon fundamental rights. AI security systems are introduced with the best intentions: to prevent tragedies and create secure learning environments. However, the incident at Kenwood High School demonstrates that implementing such technology requires careful consideration of its broader impact. The core challenge lies in striking a balance:

- Security vs. Freedom: How much surveillance is acceptable in exchange for perceived safety? Where do we draw the line to protect students’ civil liberties and psychological well-being?
- Psychological Impact: For a student like Taki Allen, being handcuffed and searched because of an AI error can be deeply unsettling and potentially traumatizing, undermining their sense of security and trust in authority figures.
- Human Element: AI is a tool, not a replacement for human judgment. The role of trained personnel in verifying alerts, de-escalating situations, and providing a human touch remains indispensable.

Mitigating Risks and Ensuring Accountability

To prevent similar incidents and foster trust in AI security systems, several measures are essential:

- Enhanced Human Oversight: AI alerts should be treated as preliminary information, requiring human verification and contextual understanding before any action is taken. School resource officers and administrators need clear protocols for verifying alerts and de-escalating situations.
- Transparency and Accountability: Companies developing and deploying AI systems must be transparent about their systems’ capabilities, limitations, and error rates. Clear lines of accountability must be established when errors occur.
- Rigorous Testing and Training: AI models need continuous, diverse, real-world testing to reduce false positives and address algorithmic biases. Training data should reflect a wide range of scenarios and demographics.
- Community Engagement: Schools and authorities should engage with students, parents, and the wider community to discuss the deployment of AI systems, address concerns, and build consensus.
- Policy Development: Clear, ethical guidelines and policies are needed for the responsible deployment of AI in sensitive environments like schools, balancing security needs with privacy rights and civil liberties.

The incident at Kenwood High School is a potent reminder that technology, no matter how advanced, is only as good as its design, its implementation, and the human oversight it receives. While AI offers powerful tools for security, its deployment must be tempered with a deep understanding of its limitations and a steadfast commitment to human dignity and rights.

Conclusion

The case of the Doritos bag mistaken for a firearm by an AI security system at Kenwood High School underscores a critical dilemma in our increasingly tech-driven world. While the promise of AI for enhancing student safety is compelling, the realities of false positive alerts, potential algorithmic bias, and escalating privacy concerns demand urgent attention. The incident vividly illustrates how even well-intentioned technology can have unintended and harmful consequences if not implemented with caution, transparency, and robust human oversight. As we continue to integrate AI into every facet of our lives, from financial systems to public safety, we must prioritize ethical development, rigorous testing, and the protection of individual freedoms, ensuring that the pursuit of security does not compromise the very liberties we aim to safeguard.

Frequently Asked Questions (FAQs)

Q1: What exactly happened at Kenwood High School?
A1: A student, Taki Allen, was handcuffed and searched after an AI security system at Kenwood High School misidentified his bag of Doritos as a possible firearm.
Q2: Which company operates the AI security system involved?
A2: The AI gun detection system is operated by Omnilert.

Q3: What were the immediate consequences for the student?
A3: Taki Allen was made to get on his knees, put his hands behind his back, and was handcuffed by school authorities and local police, despite the alert later being canceled.

Q4: What are the main concerns raised by this incident?
A4: The incident highlights significant concerns about AI false positives, potential algorithmic bias, broad privacy concerns related to pervasive surveillance, and the overall impact on student safety and well-being.

Q5: How did the school and company respond?
A5: Principal Katie Smith reported the situation to the school resource officer, who called local police, although the alert was eventually canceled. Omnilert expressed regret but stated that its “process functioned as intended.” News coverage was provided by outlets including CNN and WBAL.

This post AI Security System’s Alarming Blunder: Doritos Bag Mistaken for Firearm first appeared on BitcoinWorld.

