CYBERSECURITY NEWS
National Cyber Director Urges Private Sector Collaboration to Counter Nation-State Cyber Threat
National Cyber Director Harry Coker recently reiterated warnings that hackers linked to the People's Republic of China are actively working to gain access to U.S. critical infrastructure, potentially to launch destructive attacks. He emphasized that most critical infrastructure in the U.S. is owned by the private sector, so the government needs industry collaboration to ensure these systems are protected.
Coker said that his office is working on several key initiatives under the Biden administration's national cybersecurity strategy, including:
- Consulting with academic and legal experts on tactics to hold manufacturers accountable when they rush insecure products to market;
- Working with interagency partners to harmonize wide-ranging cyber rules and regulations so companies are not overwhelmed by compliance burdens; and
- Building a more diverse and robust cybersecurity workforce - the industry still has roughly half a million vacant positions and urgently needs qualified workers.
Coker also highlighted an upcoming white paper on efforts to promote the use of memory-safe languages and improve software measurability. (https://www.ciodive.com/news/national-cyber-director-private-sector-collaboration/707439/)
MEANWHILE, State Department Puts $10M Bounty on AlphV Ransomware Group: The State Department is offering a reward of up to $10 million for information on the identity or location of leaders of the AlphV ransomware group, also known as BlackCat. A separate reward of up to $5 million is offered for information leading to the arrest or conviction of anyone participating in a ransomware attack using the AlphV variant. The FBI and international law enforcement agencies disrupted the prolific ransomware group's infrastructure in December, but the group regenerated itself mere hours later and continues naming new victims on its data leak site. It had compromised more than 1,000 entities and received nearly $300 million in ransom payments as of September, according to the FBI and the Cybersecurity and Infrastructure Security Agency. (https://www.cybersecuritydive.com/news/alphv-ransomware-bounty/707660/)
TRANSPORTATION TECH NEWS
Flying Taxis to Get Chargers at John Wayne & Van Nuys Airports
Santa Ana-based flying taxi company Overair said it is partnering with Clay Lacy Aviation to introduce electric charging facilities at John Wayne and Van Nuys airports, while exploring the establishment of "vertiports" in Southern California to handle the new aircraft. Overair's Butterfly, an electric-powered vertical takeoff and landing craft, is scheduled to begin testing early this year. Clay Lacy, a jet management company and infrastructure developer based in Van Nuys, says the airport facilities will also be available to other flying taxi companies, including Joby Aviation of Santa Cruz. (https://www.ocbj.com/newsletter-feed/overair-flying-taxis-to-get-chargers-at-john-wayne/)
TECH M&A
What CIOs Stand to Gain in a Tech M&A Revival
Tech M&A volume and value jumped during the second half of 2023, teeing up an expected surge in 2024, according to EY's market analysis. EY forecasts private equity deal volume to rise 13% year over year and corporate M&A to increase 12%. A rekindling of M&A activity among tech providers gives CIOs a chance to weed out complexity in the tech stack as critical services move under a single provider. Companies can also join the fray themselves, acquiring a vendor to bring skills and technology in-house, while vendors can integrate complementary capabilities through strategic acquisitions. (https://www.ciodive.com/news/tech-mergers-acquisitions-ey-ibm-sap-cisco/707043/)
AI NEWS & TRENDS
"AI Native" Gen Zers are Comfortable on the Cutting Edge |
While some workers are fearful or ambivalent about how ChatGPT, DALL-E and their ilk will affect their jobs, many college students and newly minted grads think it can give them a career edge. So-called "AI natives" who are studying technology in school may have a leg up on older "digital natives" - the same way that "digital native" millennials bested their "digital immigrant" elders. College students are piling into generative AI (GAI) courses - the better to give them an advantage in the growing number of jobs requiring such skills. A third of this year's seniors - and more than half of tech majors - say they plan to use GAI in their careers, per a class of 2024 trends report from Handshake, a job-search platform for college students. "We're not seeing a nervousness" among Gen Zers, says Valerie Capers Workman, chief legal officer at Handshake. "They see it as an opportunity to be on the cutting edge of a transformational technology." (https://www.axios.com/2024/02/12/ai-gen-z-students-workers-school-jobs)
California Lawmaker Unveils Landmark AI Bill, With Nationwide Implications |
California state Sen. Scott Wiener, a Democrat who represents San Francisco, has introduced a bill to force companies to test the most powerful artificial intelligence models before releasing them - a landmark proposal that could inspire regulation around the country as state legislatures increasingly tackle the swiftly evolving technology. The new bill would require companies training new AI models to test their tools for "unsafe" behavior, institute hacking protections and develop the tech in such a way that it can be shut down completely, according to a copy of the bill.
AI companies would have to disclose their testing protocols and the guardrails they put in place to the California Department of Technology. If the tech causes "critical harm," the state's attorney general could sue the company. Wiener's bill comes amid an explosion of state bills addressing AI, as policymakers across the country grow wary that years of inaction in Congress have created a regulatory vacuum that benefits the tech industry. But California, home to many of the world's largest technology companies, plays a singular role in setting precedent for tech industry guardrails. There are 407 AI-related bills active across 44 U.S. states, according to an analysis by BSA, The Software Alliance, an industry group whose members include Microsoft and IBM. (https://www.washingtonpost.com/technology/2024/02/08/california-legislation-artificial-intelligence-regulation/)
Microsoft, Google, Amazon & Tech Peers Sign Pact to Combat Election-Related Misinformation |
A group of 20 leading tech companies last week announced a joint commitment to combat AI-generated misinformation in this year's elections. The industry is specifically targeting deepfakes, which can use deceptive audio, video and images to mimic key stakeholders in democratic elections or to provide false voting information. Microsoft, Meta, Google, Amazon, IBM, Adobe and chip designer Arm all signed the accord. AI startups OpenAI, Anthropic and Stability AI also joined the group, alongside social media companies such as Snap, TikTok and X. (https://www.cnbc.com/2024/02/16/tech-and-ai-companies-sign-accord-to-combat-election-related-deepfakes.html)
Google Joins Effort to Help Spot Content Made With AI - Plan is Similar to Meta's |
Google recently announced that it is joining an effort to develop credentials for digital content, a sort of "nutrition label" that identifies when and how a photograph, a video, an audio clip or another file was produced or altered - including with AI. The company will collaborate with companies like Adobe, the BBC, Microsoft and Sony to fine-tune the technical standards. The announcement follows a similar promise from Meta, which like Google has enabled the easy creation and distribution of artificially generated content. Meta said it would promote standardized labels that identified such material. Configuring digital files to include a verified record of their history could make the digital ecosystem more trustworthy, according to those who back a universal certification standard. Google is joining the steering committee for one such group, the Coalition for Content Provenance and Authenticity, or C2PA. OpenAI said recently that its AI image-generation tools would soon add watermarks to images according to the C2PA standards. (https://www.nytimes.com/2024/02/08/business/media/google-ai.html)
Biden Administration Announces U.S. AI Safety Institute Consortium |
U.S. Secretary of Commerce Gina Raimondo recently announced the creation of the U.S. AI Safety Institute Consortium, which will unite AI creators and users, academics, government and industry researchers, and civil society organizations in support of the development and deployment of safe and trustworthy artificial intelligence. The consortium will be housed under the U.S. AI Safety Institute and will contribute to priority actions outlined in President Biden's Executive Order, including developing guidelines for red-teaming, capability evaluations, risk management, safety and security, and watermarking synthetic content.
The consortium includes 200+ member companies and organizations on the frontlines of creating and using the most advanced AI systems and hardware, the nation's largest companies and most innovative startups, civil society and academic teams building the foundational understanding of how AI can and will transform society, and representatives of professions with deep engagement in AI's use today. The consortium will focus on establishing the foundations for a new measurement science in AI safety. It also includes state and local governments as well as nonprofits, and will work with organizations from like-minded nations that have a key role to play in developing interoperable and effective safety tools around the world. The full list of consortium participants is available in the Commerce Department's announcement. (https://www.commerce.gov/news/press-releases/2024/02/biden-harris-administration-announces-first-ever-consortium-dedicated)
MEANWHILE, US Takes Next Steps to Understand the Pros & Cons of AI Foundation Models: On February 21, the Department of Commerce's National Telecommunications and Information Administration launched a request for comment on the benefits and risks of widely available AI foundation models to inform potential policy recommendations. NTIA does not have regulatory authority, but its report could help lawmakers decide where to draw the line. The agency plans to investigate the availability of model weights and compare the risks and benefits of open-weight models, which allow developers to build upon and adapt previous work, with those of closed models. The request for comment, directed by the White House's AI-focused executive order, will close in mid-to-late March. (https://www.ciodive.com/news/NTIA-inquiry-open-foundation-models-WH-AI-executive-order/708015/)
Sam Altman Seeks Trillions of Dollars to Reshape Business of Chips and AI |
OpenAI CEO Sam Altman is in talks with investors, including the United Arab Emirates government, to raise funds for a wildly ambitious tech initiative that would boost the world's chip-building capacity and expand its ability to power AI, among other things - and could cost several trillion dollars, according to people familiar with the matter. The project could require raising as much as $5 trillion to $7 trillion, one of the people said. The fundraising plans, which face significant obstacles, are aimed at solving constraints on OpenAI's growth, including the scarcity of the pricey AI chips required to train the large language models behind AI systems such as ChatGPT. Altman has often complained that there aren't enough of these chips - known as graphics processing units, or GPUs - to power OpenAI's quest for artificial general intelligence, which it defines as systems that are broadly smarter than humans. (https://www.wsj.com/tech/ai/sam-altman-seeks-trillions-of-dollars-to-reshape-business-of-chips-and-ai-89ab3db0)