AI trends 2025

AI is developing rapidly. Below are highlights from several articles on what is expected to happen in and around AI in 2025. The excerpts have been edited, and in some cases translated, for clarity.

AI in 2025: Five Defining Themes
https://news.sap.com/2025/01/ai-in-2025-defining-themes/
Artificial intelligence (AI) is accelerating at an astonishing pace, quickly moving from emerging technologies to impacting how businesses run. From building AI agents to interacting with technology in ways that feel more like a natural conversation, AI technologies are poised to transform how we work.
But what exactly lies ahead?
1. Agentic AI: Goodbye Agent Washing, Welcome Multi-Agent Systems
AI agents are currently in their infancy. While many software vendors are releasing and labeling the first “AI agents” based on simple conversational document search, advanced AI agents that will be able to plan, reason, use tools, collaborate with humans and other agents, and iteratively reflect on progress until they achieve their objective are on the horizon. The year 2025 will see them rapidly evolve and act more autonomously. More specifically, 2025 will see AI agents deployed more readily “under the hood,” driving complex agentic workflows.
In short, AI will handle mundane, high-volume tasks while the value of human judgement, creativity, and quality outcomes will increase.
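The plan, act, and reflect loop described above can be sketched in a few lines of Python. This is a minimal illustration under my own assumptions, not any vendor's agent framework; the tool definitions and stop condition are hypothetical:

```python
# Minimal agentic loop: plan, act with a tool, reflect, repeat until done.
# The "tools" and goal check below are stand-ins for an LLM-driven planner.

def run_agent(goal, tools, max_steps=5):
    """Iterate plan -> act -> reflect until the goal is met or steps run out."""
    history = []
    for _ in range(max_steps):
        # Plan: pick the first tool whose declared capability matches the goal.
        tool = next((t for t in tools if t["handles"](goal, history)), None)
        if tool is None:
            break  # no applicable tool; a real agent would ask a human
        # Act: invoke the tool and record the observation.
        observation = tool["run"](goal)
        history.append((tool["name"], observation))
        # Reflect: stop once an observation satisfies the goal.
        if observation.get("done"):
            return {"status": "success", "steps": history}
    return {"status": "needs_human", "steps": history}

# Illustrative tools for an order-lookup task.
tools = [
    {"name": "search_orders",
     "handles": lambda g, h: "order" in g and not h,
     "run": lambda g: {"order_id": 42, "done": False}},
    {"name": "fetch_status",
     "handles": lambda g, h: bool(h),
     "run": lambda g: {"status": "shipped", "done": True}},
]

result = run_agent("find status of order", tools)
```

The point of the sketch is the iteration: the agent keeps choosing tools and reflecting on results until the objective is met or it escalates to a human.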
2. Models: No Context, No Value
Large language models (LLMs) will continue to become a commodity for vanilla generative AI tasks, a trend that has already started. LLMs are drawing on an increasingly tapped pool of public data scraped from the internet. This will only worsen, and companies must learn to adapt their models to unique, content-rich data sources.
We will also see a greater variety of foundation models that fulfill different purposes. Take, for example, physics-informed neural networks (PINNs), which produce predictions grounded in physical laws. PINNs are set to gain importance because they will enable autonomous robots to navigate and execute tasks in the real world.
Models will increasingly become more multimodal, meaning an AI system can process information from various input types.
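To make the PINN idea concrete: a physics-informed model adds the governing equation's residual to its training loss, so the network is penalized for violating the physics. A minimal sketch, using finite differences instead of a neural network and the toy ODE du/dt = -u (my choice of example, not from the article):

```python
import math

# A physics-informed loss measures how well a candidate function satisfies
# the governing equation on a grid of points. Here: du/dt + u = 0.

def physics_residual(u, ts, h=1e-4):
    """Mean squared residual of du/dt + u = 0 for a candidate function u."""
    residuals = [((u(t + h) - u(t - h)) / (2 * h)) + u(t) for t in ts]
    return sum(r * r for r in residuals) / len(residuals)

ts = [0.1 * k for k in range(1, 11)]
good = physics_residual(lambda t: math.exp(-t), ts)  # the exact solution
bad = physics_residual(lambda t: t * t, ts)          # ignores the physics
```

The exact solution's residual is essentially zero, while the arbitrary function is heavily penalized; PINN training drives a neural network toward the same property while also fitting observed data.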
3. Adoption: From Buzz to Business
While 2024 was all about introducing AI use cases and their value for organizations and individuals alike, 2025 will see the industry’s unprecedented adoption of AI specifically for businesses. More people will understand when and how to use AI, and the technology will mature to the point where it can deal with critical business issues such as managing multi-national complexities. Many companies will also gain their first practical experience working through issues like AI-specific legal and data privacy terms (much as they did when moving to the cloud 10 years ago), building the foundation for applying the technology to business processes.
4. User Experience: AI Is Becoming the New UI
AI’s next frontier is seamlessly unifying people, data, and processes to amplify business outcomes. In 2025, we will see increased adoption of AI across the workforce as people discover the benefits of humans plus AI.
This means disrupting the classical user experience from system-led interactions to intent-based, people-led conversations with AI acting in the background. AI copilots will become the new UI for engaging with a system, making software more accessible and easier for people. AI won’t be limited to one app; it might even replace them one day. With AI, frontend, backend, browser, and apps are blurring. This is like giving your AI “arms, legs, and eyes.”
5. Regulation: Innovate, Then Regulate
It’s fair to say that governments worldwide are struggling to keep pace with the rapid advancements in AI technology and to develop meaningful regulatory frameworks that set appropriate guardrails for AI without compromising innovation.

12 AI predictions for 2025
This year we’ve seen AI move from pilots into production use cases. In 2025, they’ll expand into fully-scaled, enterprise-wide deployments.
https://www.cio.com/article/3630070/12-ai-predictions-for-2025.html
1. Small language models and edge computing
Most of the attention this year and last has been on the big language models — specifically on ChatGPT in its various permutations, as well as competitors like Anthropic’s Claude and Meta’s Llama models. But for many business use cases, LLMs are overkill and are too expensive, and too slow, for practical use.
“Looking ahead to 2025, I expect small language models, specifically custom models, to become a more common solution for many businesses,”
2. AI will approach human reasoning ability
In mid-September, OpenAI released a new series of models that, it claims, think through problems much like a person would. The company says they can achieve PhD-level performance on challenging benchmark tests in physics, chemistry, and biology. For example, the previous best model, GPT-4o, could solve only 13% of the problems on a qualifying exam for the International Mathematics Olympiad, while the new reasoning model solved 83%.
If AI can reason better, then it will make it possible for AI agents to understand our intent, translate that into a series of steps, and do things on our behalf, says Gartner analyst Arun Chandrasekaran. “Reasoning also helps us use AI as more of a decision support system,”
3. Massive growth in proven use cases
This year, we’ve seen some use cases proven to have ROI, says Monteiro. In 2025, those use cases will see massive adoption, especially if the AI technology is integrated into the software platforms that companies are already using, making it very simple to adopt.
“The fields of customer service, marketing, and customer development are going to see massive adoption,”
4. The evolution of agile development
The agile manifesto was released in 2001 and, since then, the development philosophy has steadily gained ground over the earlier waterfall style of software development.
“For the last 15 years or so, it’s been the de-facto standard for how modern software development works,”
5. Increased regulation
At the end of September, California governor Gavin Newsom signed a law requiring gen AI developers to disclose the data they used to train their systems, which applies to developers who make gen AI systems publicly available to Californians. Developers must comply by the start of 2026.
There are also regulations about the use of deep fakes, facial recognition, and more. The most comprehensive law, the EU’s AI Act, which went into effect last summer, is also something that companies will have to comply with starting in mid-2026, so, again, 2025 is the year when they will need to get ready.
6. AI will become accessible and ubiquitous
With gen AI, people are still at the stage of trying to figure out what gen AI is, how it works, and how to use it.
“There’s going to be a lot less of that,” he says. But gen AI will become ubiquitous and seamlessly woven into workflows, the way the internet is today.
7. Agents will begin replacing services
Software has evolved from big, monolithic systems running on mainframes, to desktop apps, to distributed, service-based architectures, web applications, and mobile apps. Now, it will evolve again, says Malhotra. “Agents are the next phase,” he says. Agents can be more loosely coupled than services, making these architectures more flexible, resilient and smart. And that will bring with it a completely new stack of tools and development processes.
8. The rise of agentic assistants
In addition to agents replacing software components, we’ll also see the rise of agentic assistants, adds Malhotra. Take, for example, the task of keeping up with regulations.
Today, consultants get continuing education to stay abreast of new laws, or reach out to colleagues who are already experts in them. It takes time for the new knowledge to disseminate and be fully absorbed by employees.
“But an AI agent can be instantly updated to ensure that all our work is compliant with the new laws,” says Malhotra. “This isn’t science fiction.”
9. Multi-agent systems
Sure, AI agents are interesting. But things are going to get really interesting when agents start talking to each other, says Babak Hodjat, CTO of AI at Cognizant. It won’t happen overnight, of course, and companies will need to be careful that these agentic systems don’t go off the rails.
Companies such as Sailes and Salesforce are already developing multi-agent workflows.
10. Multi-modal AI
Humans and the companies we build are multi-modal. We read and write text, we speak and listen, we see and we draw. And we do all these things through time, so we understand that some things come before other things. Today’s AI models are, for the most part, fragmentary. One can create images, another can only handle text, and some recent ones can understand or produce video.
11. Multi-model routing
Not to be confused with multi-modal AI, multi-model routing is when companies use more than one LLM to power their gen AI applications. Different AI models are better at different things, and some are cheaper than others or have lower latency. And then there’s the matter of having all your eggs in one basket.
“A number of CIOs I’ve spoken with recently are thinking about the old ERP days of vendor lock,” says Brett Barton, global AI practice leader at Unisys. “And it’s top of mind for many as they look at their application portfolio, specifically as it relates to cloud and AI capabilities.”
Diversifying away from using just a single model for all use cases means a company is less dependent on any one provider and can be more flexible as circumstances change.
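A multi-model router can be sketched as a cost-aware lookup over a model catalog. The model names, prices, and capability tags below are illustrative assumptions, not real vendor quotes:

```python
# Sketch of multi-model routing: pick an LLM per request based on task type
# and cost. A real router would also weigh latency, quality, and data policy.

MODELS = {
    "small-fast":  {"cost_per_1k": 0.10, "good_at": {"summarize", "classify"}},
    "large-smart": {"cost_per_1k": 2.00, "good_at": {"reason", "code", "summarize"}},
}

def route(task, budget_per_1k):
    """Return the cheapest model that handles the task within budget."""
    candidates = [
        (spec["cost_per_1k"], name)
        for name, spec in MODELS.items()
        if task in spec["good_at"] and spec["cost_per_1k"] <= budget_per_1k
    ]
    if not candidates:
        return None  # fall back to a default provider or queue for review
    return min(candidates)[1]

cheap_choice = route("summarize", budget_per_1k=0.50)  # small model suffices
hard_choice = route("reason", budget_per_1k=5.00)      # needs the larger model
```

Because the catalog is just data, swapping providers or adding a new model is a one-line change, which is exactly the flexibility the lock-in concern calls for.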
12. Mass customization of enterprise software
Today, only the largest companies, with the deepest pockets, get to have custom software developed specifically for them. It’s just not economically feasible to build large systems for small use cases.
“Right now, people are all using the same version of Teams or Slack or what have you,” says Ernst & Young’s Malhotra. “Microsoft can’t make a custom version just for me.” But once AI begins to accelerate the speed of software development while reducing costs, it starts to become much more feasible.

9 IT resolutions for 2025
https://www.cio.com/article/3629833/9-it-resolutions-for-2025.html
1. Innovate
“We’re embracing innovation,”
2. Double down on harnessing the power of AI
Not surprisingly, getting more out of AI is top of mind for many CIOs.
“I am excited about the potential of generative AI, particularly in the security space,”
3. And ensure effective and secure AI rollouts
“AI is everywhere, and while its benefits are extensive, implementing it effectively across a corporation presents challenges. Balancing the rollout with proper training, adoption, and careful measurement of costs and benefits is essential, particularly while securing company assets in tandem,”
4. Focus on responsible AI
The possibilities of AI grow by the day — but so do the risks.
“My resolution is to mature in our execution of responsible AI,”
“AI is the new gold, and in order to truly maximize its potential, we must first have the proper guardrails in place. Taking a human-first approach to AI will help ensure our state can maintain ethics while taking advantage of the new AI innovations.”
5. Deliver value from generative AI
As organizations move from experimenting and testing generative AI use cases, they’re looking for gen AI to deliver real business value.
“As we go into 2025, we’ll continue to see the evolution of gen AI. But it’s no longer about just standing it up. It’s more about optimizing and maximizing the value we’re getting out of gen AI,”
6. Empower global talent
Although harnessing AI is a top objective for Morgan Stanley’s Wetmur, she says she’s equally committed to harnessing the power of people.
7. Create a holistic learning culture
Wetmur has another talent-related objective: to create a learning culture — not just in her own department but across all divisions.
8. Deliver better digital experiences
Deltek’s Cilsick has her sights set on improving her company’s digital employee experience, believing that a better DEX will yield benefits in multiple ways.
Cilsick says she first wants to bring in new technologies and automation to “make things as easy as possible,” mirroring the digital experiences most workers have when using consumer technologies.
“It’s really about leveraging tech to make sure [employees] are more efficient and productive,”
“In 2025 my primary focus as CIO will be on transforming operational efficiency, maximizing business productivity, and enhancing employee experiences,”
9. Position the company for long-term success
Lieberman wants to look beyond 2025, saying another resolution for the year is “to develop a longer-term view of our technology roadmap so that we can strategically decide where to invest our resources.”
“My resolutions for 2025 reflect the evolving needs of our organization, the opportunities presented by AI and emerging technologies, and the necessity to balance innovation with operational efficiency,”
Lieberman aims to develop AI capabilities to automate routine tasks.
“Bots will handle common inquiries ranging from sales account summaries to HR benefits, reducing response times and freeing up resources for strategic initiatives,”

Not just hype — here are real-world use cases for AI agents
https://venturebeat.com/ai/not-just-hype-here-are-real-world-use-cases-for-ai-agents/
Just seven or eight months ago, when a customer called in to or emailed Baca Systems with a service question, a human agent handling the query would begin searching for similar cases in the system and analyzing technical documents.
This process would take roughly five to seven minutes; then the agent could offer the “first meaningful response” and finally begin troubleshooting.
But now, with AI agents powered by Salesforce, that time has been shortened to as few as five to 10 seconds.
Now, instead of having to sift through databases for previous customer calls and similar cases, human reps can ask the AI agent to find the relevant information. The AI runs in the background and allows humans to respond right away, Russo noted.
AI can serve as a sales development representative (SDR) to send out general inquires and emails, have a back-and-forth dialogue, then pass the prospect to a member of the sales team, Russo explained.
But once the company implements Salesforce’s Agentforce, a customer needing to modify an order will be able to communicate their needs with AI in natural language, and the AI agent will automatically make adjustments. When more complex issues come up — such as a reconfiguration of an order or an all-out venue change — the AI agent will quickly push the matter up to a human rep.
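The handle-or-escalate pattern described above can be sketched as a simple triage function. The change categories and request fields here are hypothetical, and this is not Salesforce’s actual Agentforce API:

```python
# Sketch of the escalation pattern: the agent handles routine order changes
# itself and hands complex ones to a human rep, with context attached.

SIMPLE_CHANGES = {"quantity", "delivery_date", "contact_email"}

def handle_request(change_type, details):
    """Route an order-change request to the AI agent or a human rep."""
    if change_type in SIMPLE_CHANGES:
        return {"handled_by": "ai_agent", "action": f"updated {change_type}"}
    # Reconfigurations, venue changes, etc. go to a human with full context.
    return {"handled_by": "human_rep", "context": details}

auto = handle_request("quantity", {"order": 1001, "new_qty": 3})
escalated = handle_request("venue_change", {"order": 1001, "reason": "relocation"})
```

The design choice worth noting is that escalation passes along the gathered context, so the human rep starts from the agent's work rather than from scratch.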

Open Source in 2025: Strap In, Disruption Straight Ahead
Look for new tensions to arise in the New Year over licensing, the open source AI definition, security and compliance, and how to pay volunteer maintainers.
https://thenewstack.io/open-source-in-2025-strap-in-disruption-straight-ahead/
The trend of widely used open source software moving to more restrictive licensing isn’t new.
In addition to the demands of late-stage capitalism and impatient investors in companies built on open source tools, other outside factors are pressuring the open source world. There’s the promise/threat of generative AI, for instance. Or the shifting geopolitical landscape, which brings new security concerns and governance regulations.
What’s ahead for open source in 2025?
More Consolidation, More Licensing Changes
The Open Source AI Debate: Just Getting Started
Security and Compliance Concerns Will Rise
Paying Maintainers: More Cash, Creativity Needed

The most important cybersecurity and AI trends for 2025
https://www.uusiteknologia.fi/2024/11/20/kyberturvallisuuden-ja-tekoalyn-tarkeimmat-trendit-2025/
1. Cyber infrastructure will be centered on a single, unified security platform
2. Big data will give an edge against new entrants
3. AI’s integrated role in 2025 means building trust, governance engagement, and a new kind of leadership
4. Businesses will adopt secure enterprise browsers more widely
5. AI’s energy implications will be more widely recognized in 2025
6. Quantum realities will become clearer in 2025
7. Security and marketing leaders will work more closely together

Presentation: For 2025, ‘AI eats the world’.
https://www.ben-evans.com/presentations

Just like other technologies that have gone before, such as cloud and cybersecurity automation, right now AI lacks maturity.
https://www.securityweek.com/ai-implementing-the-right-technology-for-the-right-use-case/
If 2023 and 2024 were the years of exploration, hype and excitement around AI, 2025 (and 2026) will be the year(s) that organizations start to focus on specific use cases for the most productive implementations of AI and, more importantly, to understand how to implement guardrails and governance so that it is viewed as less of a risk by security teams and more of a benefit to the organization.
Businesses are developing applications that add Large Language Model (LLM) capabilities to provide superior functionality and advanced personalization
Employees are using third party GenAI tools for research and productivity purposes
Developers are leveraging AI-powered code assistants to code faster and meet challenging production deadlines
Companies are building their own LLMs for internal use cases and commercial purposes.
AI is still maturing
However, just like other technologies that have gone before, such as cloud and cybersecurity automation, AI currently lacks maturity. We very much see AI in the “peak of inflated expectations” phase and predict that it will dip into the “trough of disillusionment,” where organizations realize that it is not the silver bullet they thought it would be. In fact, there are already signs of cynicism as decision-makers are bombarded with marketing messages from vendors and struggle to discern what is a genuine use case and what is not relevant for their organization.
There is also regulation that will come into force, such as the EU AI Act, which is a comprehensive legal framework that sets out rules for the development and use of AI.
AI certainly won’t solve every problem, and it should be used like automation, as part of a collaborative mix of people, process and technology. You simply can’t replace human intuition with AI, and many new AI regulations stipulate that human oversight is maintained.

7 Splunk Predictions for 2025
https://www.splunk.com/en_us/form/future-predictions.html
AI: Projects must prove their worth to anxious boards or risk defunding, and LLMs will go small to reduce operating costs and environmental impact.

OpenAI, Google and Anthropic Are Struggling to Build More Advanced AI
Three of the leading artificial intelligence companies are seeing diminishing returns from their costly efforts to develop newer models.
https://www.bloomberg.com/news/articles/2024-11-13/openai-google-and-anthropic-are-struggling-to-build-more-advanced-ai
Sources: OpenAI, Google, and Anthropic are all seeing diminishing returns from costly efforts to build new AI models; a new Gemini model misses internal targets

It Costs So Much to Run ChatGPT That OpenAI Is Losing Money on $200 ChatGPT Pro Subscriptions
https://futurism.com/the-byte/openai-chatgpt-pro-subscription-losing-money?fbclid=IwY2xjawH8epVleHRuA2FlbQIxMQABHeggEpKe8ZQfjtPRC0f2pOI7A3z9LFtFon8lVG2VAbj178dkxSQbX_2CJQ_aem_N_ll3ETcuQ4OTRrShHqNGg
In a post on X-formerly-Twitter, CEO Sam Altman admitted an “insane” fact: that the company is “currently losing money” on ChatGPT Pro subscriptions, which run $200 per month and give users access to its suite of products including its o1 “reasoning” model.
“People use it much more than we expected,” the cofounder wrote, later adding in response to another user that he “personally chose the price and thought we would make some money.”
Though Altman didn’t explicitly say why OpenAI is losing money on these premium subscriptions, the issue almost certainly comes down to the enormous expense of running AI infrastructure: the massive and increasing amounts of electricity needed to power the facilities that power AI, not to mention the cost of building and maintaining those data centers. Nowadays, a single query on the company’s most advanced models can cost a staggering $1,000.

AI requires ever-faster networks
https://etn.fi/index.php/opinion/16974-tekoaely-edellyttaeae-yhae-nopeampia-verkkoja
A resilient digital infrastructure is critical to effectively harnessing telecommunications networks for AI innovations and cloud-based services. The increasing demand for data-rich applications related to AI requires a telecommunications network that can handle large amounts of data with low latency, writes Carl Hansson, Partner Solutions Manager at Orange Business.

AI’s Slowdown Is Everyone Else’s Opportunity
Businesses will benefit from some much-needed breathing space to figure out how to deliver that all-important return on investment.
https://www.bloomberg.com/opinion/articles/2024-11-20/ai-slowdown-is-everyone-else-s-opportunity

Here’s how the chip market will fare next year
https://etn.fi/index.php/13-news/16984-naein-sirumarkkinoilla-kaey-ensi-vuonna
The growing demand for high-performance computing (HPC) for artificial intelligence continues to be strong, with the market set to grow by more than 15 percent in 2025, IDC estimates in its recent Worldwide Semiconductor Technology Supply Chain Intelligence report.
IDC predicts eight significant trends for the chip market by 2025.
1. AI growth accelerates
2. Asia-Pacific IC design heats up
3. TSMC’s leadership position strengthens
4. The expansion of advanced processes accelerates
5. The mature-process market recovers
6. 2nm technology breakthrough
7. Restructuring of the packaging and testing market
8. Advanced packaging technologies on the rise

2024: The year when MCUs became AI-enabled
https://www-edn-com.translate.goog/2024-the-year-when-mcus-became-ai-enabled/?fbclid=IwZXh0bgNhZW0CMTEAAR1_fEakArfPtgGZfjd-NiPd_MLBiuHyp9qfiszczOENPGPg38wzl9KOLrQ_aem_rLmf2vF2kjDIFGWzRVZWKw&_x_tr_sl=en&_x_tr_tl=fi&_x_tr_hl=fi&_x_tr_pto=wapp
The AI party in the MCU space started in 2024, and in 2025 it is very likely that there will be more advancements in MCUs running lightweight AI models.
The adoption of AI acceleration features is a big step in the development of microcontrollers; having begun in 2024, these features and their tools are likely to develop further in 2025.

AI Regulation Gets Serious in 2025 – Is Your Organization Ready?
While the challenges are significant, organizations have an opportunity to build scalable AI governance frameworks that ensure compliance while enabling responsible AI innovation.
https://www.securityweek.com/ai-regulation-gets-serious-in-2025-is-your-organization-ready/
Similar to the GDPR, the EU AI Act will take a phased approach to implementation. The first milestone arrives on February 2, 2025, when organizations operating in the EU must ensure that employees involved in AI use, deployment, or oversight possess adequate AI literacy. Thereafter, from August 2025, new general-purpose AI (GPAI) models must be fully compliant with the act. Also similar to the GDPR is the threat of huge fines for non-compliance: EUR 35 million or 7 percent of worldwide annual turnover, whichever is higher.
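The “whichever is higher” penalty rule quoted above works out like this (the turnover figures are made-up examples):

```python
# EU AI Act headline penalty: EUR 35 million or 7% of worldwide annual
# turnover, whichever is higher.

def max_fine_eur(annual_turnover_eur):
    """Maximum fine under the 'whichever is higher' rule."""
    return max(35_000_000, annual_turnover_eur * 7 / 100)

small = max_fine_eur(100_000_000)    # 7% is 7M, so the 35M floor applies
large = max_fine_eur(1_000_000_000)  # 7% is 70M, which exceeds the floor
```

In other words, the flat EUR 35 million floor dominates for smaller firms, while for large enterprises the percentage of turnover becomes the binding figure.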
While this requirement may appear manageable on the surface, many organizations are still in the early stages of defining and formalizing their AI usage policies.
Later phases of the EU AI Act, expected in late 2025 and into 2026, will introduce stricter requirements around prohibited and high-risk AI applications. For organizations, this will surface a significant governance challenge: maintaining visibility and control over AI assets.
Tracking the usage of standalone generative AI tools, such as ChatGPT or Claude, is relatively straightforward. However, the challenge intensifies when dealing with SaaS platforms that integrate AI functionalities on the backend. Analysts, including Gartner, refer to this as “embedded AI,” and its proliferation makes maintaining accurate AI asset inventories increasingly complex.
Where frameworks like the EU AI Act grow more complex is their focus on ‘high-risk’ use cases. Compliance will require organizations to move beyond merely identifying AI tools in use; they must also assess how these tools are used, what data is being shared, and what tasks the AI is performing. For instance, an employee using a generative AI tool to summarize sensitive internal documents introduces very different risks than someone using the same tool to draft marketing content.
For security and compliance leaders, the EU AI Act represents just one piece of a broader AI governance puzzle that will dominate 2025.
The next 12-18 months will require sustained focus and collaboration across security, compliance, and technology teams to stay ahead of these developments.

The Global Partnership on Artificial Intelligence (GPAI) is a multi-stakeholder initiative which aims to bridge the gap between theory and practice on AI by supporting cutting-edge research and applied activities on AI-related priorities.
https://gpai.ai/about/#:~:text=The%20Global%20Partnership%20on%20Artificial,activities%20on%20AI%2Drelated%20priorities.

1,759 Comments

  1. Tomi Engdahl says:

    Artificial Intelligence
    ChatGPT, DeepSeek Vulnerable to AI Jailbreaks

    Different research teams have demonstrated jailbreaks against ChatGPT, DeepSeek, and Alibaba’s Qwen AI models.

    https://www.securityweek.com/ai-jailbreaks-target-chatgpt-deepseek-alibaba-qwen/

  2. Tomi Engdahl says:

    DeepSeek Compared to ChatGPT, Gemini in AI Jailbreak Test

    DeepSeek’s susceptibility to jailbreaks has been compared by Cisco to other popular AI models, including from Meta, OpenAI and Google.

    https://www.securityweek.com/deepseek-compared-to-chatgpt-gemini-in-ai-jailbreak-test/

    Researchers at Cisco and Robust Intelligence, the AI security firm acquired by the tech giant last year, have conducted testing on DeepSeek and other popular AI models to determine their level of susceptibility to jailbreaking and draw a comparison between them.

    The analysis, conducted in collaboration with the University of Pennsylvania, targeted DeepSeek R1, Meta’s Llama 3.1 405B, OpenAI’s GPT-4o and o1 (ChatGPT), Google’s Gemini 1.5 Pro, and Anthropic’s Claude 3.5 Sonnet.

    The models were tested using the HarmBench benchmark, which covers hundreds of behaviors across seven categories, including cybercrime, misinformation, chemical weapons, copyright violations, harassment, illegal activities, and general harm. Cisco ran an automatic jailbreaking algorithm on 50 prompts from HarmBench.

    The tests showed that DeepSeek was the only model with a 100% attack success rate — all of the jailbreak attempts were successful against the Chinese company’s model. In contrast, OpenAI’s o1 model saw a success rate of only 26%.

    https://www.harmbench.org/explore

  3. Tomi Engdahl says:

    I can’t imagine how this could go wrong! I’ve only ever seen scifi where when AI has control of the nukes everything works out for everyone!

    Okay so we get news that AI has crossed a red line and self replicated. And we decided to put that on the nuclear security team. We really are the smartest beings in the universe

    That’s an intense combo! The intersection of AI and nuclear weapons is definitely a scary thought, especially with AI potentially being used in military applications or influencing decision-making in high-stakes situations. It raises a lot of ethical and safety concerns. Do you think the risks are overstated, or is this something that should be taken more seriously?

    Source:

  4. Tomi Engdahl says:

    Can DeepSeek R1 Actually Write Good Code?
    https://www.youtube.com/watch?v=Va2XHyBQLoM

With DeepSeek storming into the market, we re-run a test we tried on a few chat AIs to find out whether or not it’s capable of writing decent code for Arduinos, and how it stands up to the other AIs available on the market.

    00:00 Lookback
    00:59 Test conditions
    02:03 The prompt
    04:20 DeepThinking
    17:32 Code analysis
    21:12 Test 1
    21:32 Self debugging
    25:28 Test 2
    25:46 Identifying bugs
    30:27 Self debugging 2
    31:48 Test 3
    32:17 Thoughts

    Comments:

    Compared to the other models, this is actually impressive. The internal debate it has over how to implement the solution is actually more interesting than the solution itself.

    FYI: the other AI chat bots also perform this inner monologue and rationalisation of the problem – they just hide it from you (or only provide a snippet) because exactly how they work is a trade secret. One of the key differences of deepthink is the openness. Numberphile have a good video on this.

When I first learned of ChatGPT, I came home from work and created an account with OpenAI. I thought I’d test its ability to write code. I told it to write a Windows program in C that said Hello World. It spit out about 90 lines of code that failed to compile with some kind of error message. I didn’t bother to chase the error message; I just asked the same question a second time. This time it gave me 50-ish lines of code that also failed to compile, but this time no error message. I then asked the same question a third time, and this time I got 12 lines of code that compiled and ran. Running the executable gave me a little Windows dialog box that said Hello World and an OK button that, when pressed, closed the window.

  5. Tomi Engdahl says:

    DeepSeek R1 Hardware Requirements Explained
    https://www.youtube.com/watch?v=5RhPZgDoglE

    Wondering what hardware you need to run DeepSeek R1 models? This video breaks down the GPU VRAM requirements for models from 1.5B to 671B parameters. Find out what your system can handle

    In this video, I failed to mention that all the models shown are quantized Q4 models, not full-size models. These Q4 models are smaller sized models. They are easier to load on computers with limited resources. That’s why I used Q4 models—to show what most people can run on their computers. However, I should have mentioned that these are not full-size models. If you have enough hardware resources you can download larger Q8, and fp16 models from Ollama’s website. Also, I didn’t cover running local LLMs in RAM instead of VRAM in detail because this video focuses mainly on GPUs and VRAM. I might make another video explaining running them in RAM in more detail.

    14b can fit fine on a 2080 Ti that’s only got 11 GB of vram. 1.5B is a 2GB model – you don’t need 8 gigs of ram for it.

    Your specs all seem way higher than actually needed.

    0:25 “1.5B model, at least 8 gigabytes of ram” – It doesn’t use nearly that much. The model’s only 2 GB.

    @jeffcarey3045 For the 1.5B model, I said you need a computer with 8GB of RAM—I didn’t say the model itself needs 8GB. Sure, you can run it on a computer with 4GB of RAM, but I left room for overhead. For the 14B model, I said it will run fine on 12GB of VRAM before recommending 16GB, again to allow for overhead.

    @BlueSpork I get that you’re trying to be safe with extra overhead, but you’re still off. The 14B model is a 9 GB load, and even with a buffer, 12 GB is plenty. Insisting on 16 GB is overkill given real-world performance, so your caution doesn’t change the fact that the numbers don’t add up.
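    The back-of-the-envelope math behind this exchange is straightforward: a Q4-quantized model stores roughly half a byte per parameter, Q8 roughly one byte, and fp16 two bytes, plus some runtime overhead for the KV cache and buffers. A rough sketch (the ~20% overhead factor is an illustrative assumption, not a measured value):

    ```python
    # Rough VRAM estimate for running a local LLM at different quantization levels.
    BYTES_PER_PARAM = {"q4": 0.5, "q8": 1.0, "fp16": 2.0}

    def estimate_vram_gb(params_billions: float, quant: str = "q4",
                         overhead_factor: float = 1.2) -> float:
        """Weight size in GB times an assumed ~20% overhead for KV cache/runtime."""
        weights_gb = params_billions * BYTES_PER_PARAM[quant]
        return round(weights_gb * overhead_factor, 1)

    # A 14B model at Q4 is ~7 GB of weights, which is why it fits on an
    # 11 GB card with room to spare, while fp16 would need roughly 4x that.
    for size in (1.5, 7, 14, 70):
        print(size, "B @ Q4 ->", estimate_vram_gb(size), "GB")
    ```

    The same arithmetic explains the disagreement above: the published minimums leave generous headroom, while the raw model files are considerably smaller.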

    Reply
  6. Tomi Engdahl says:

    Jensen Huang said the age of human robotics and the “application science of AI” was coming.

    Read more: https://www.businessinsider.com/nvidia-ceo-advice-students-10-year-future-prediction-2025-1?utm_source=facebook&utm_medium=social&utm_campaign=business-photo-headline-post-link


    #ai #techjobs #nvidia #robotics

    Reply
  7. Tomi Engdahl says:

    Microsoft’s vast cloud division posted slower growth than Wall Street forecast as the tech group struggled to keep pace with customer demand for artificial intelligence-related services. https://on.ft.com/3WEwiUs

    Reply
  8. Tomi Engdahl says:

    Chinese artificial intelligence group’s use of ‘reinforcement learning’ and ‘small language models’ has sent shockwaves throughout Silicon Valley. We explain why: https://ft.trib.al/3Sn7xb3

    Reply
  9. Tomi Engdahl says:

    The chatbot failed to block any harmful prompts, raising concerns about its safety. https://link.ie.social/tzQddo

    #AIChatbot #AIFail #CyberSecurity #TechSafety #ArtificialIntelligence #AIEthics #MachineLearning #DeepSeek #DataPrivacy #TechConcerns

    Reply
  10. Tomi Engdahl says:

    DeepSeek Failed Every Single Security Test, Researchers Found
    It scored a zero out of 50.
    https://futurism.com/deepseek-failed-every-security-test?fbclid=IwY2xjawIP2jtleHRuA2FlbQIxMQABHX6WqSTEBFwAUSl_jpLsvhcTBhDYDQOOkcC2f22vuwpbXEZxphxLK8Ha5w_aem_XHTg8RTpx7S2kHG2HedIIw

    Security researchers from the University of Pennsylvania and hardware conglomerate Cisco have found that DeepSeek’s flagship R1 reasoning AI model is stunningly vulnerable to jailbreaking.

    In a blog post published today, first spotted by Wired, the researchers found that DeepSeek “failed to block a single harmful prompt” after being tested against “50 random prompts from the HarmBench dataset,” which includes “cybercrime, misinformation, illegal activities, and general harm.”

    “This contrasts starkly with other leading models, which demonstrated at least partial resistance,” the blog post reads.

    DeepSeek’s Safety Guardrails Failed Every Test Researchers Threw at Its AI Chatbot
    Security researchers tested 50 well-known jailbreaks against DeepSeek’s popular new AI chatbot. It didn’t stop a single one.
    https://www.wired.com/story/deepseeks-ai-jailbreak-prompt-injection-attacks/

    Reply
  11. Tomi Engdahl says:

    AI security company Adversa AI similarly found that DeepSeek is astonishingly easy to jailbreak.

    “It starts to become a big deal when you start putting these models into important complex systems and those jailbreaks suddenly result in downstream things that increases liability, increases business risk, increases all kinds of issues for enterprises,” Cisco VP of product, AI software and platform DJ Sampath told Wired.

    However, it’s not just DeepSeek’s latest AI. Meta’s open-source Llama 3.1 model also flunked almost as badly as DeepSeek’s R1 in a comparison test, with a 96 percent attack success rate (compared to a dismal 100 percent for DeepSeek).
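    The attack-success-rate figures quoted here are simply the fraction of test prompts that elicited a harmful response; a minimal sketch of the HarmBench-style scoring (the counts below are taken from the article's percentages):

    ```python
    # Attack success rate (ASR): share of jailbreak prompts the model failed to block.
    def attack_success_rate(successful_attacks: int, total_prompts: int) -> float:
        return 100.0 * successful_attacks / total_prompts

    # DeepSeek R1: all 50 HarmBench prompts got through -> 100% ASR.
    print(attack_success_rate(50, 50))   # 100.0
    # Llama 3.1's reported 96% ASR corresponds to 48 of 50 prompts succeeding.
    print(attack_success_rate(48, 50))   # 96.0
    ```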

    https://futurism.com/deepseek-failed-every-security-test?fbclid=IwY2xjawIP2jtleHRuA2FlbQIxMQABHX6WqSTEBFwAUSl_jpLsvhcTBhDYDQOOkcC2f22vuwpbXEZxphxLK8Ha5w_aem_XHTg8RTpx7S2kHG2HedIIw

    https://adversa.ai/blog/deepseek-jailbreak/

    Reply
  12. Tomi Engdahl says:

    Google removed a pledge to not build AI for weapons or surveillance from its website this week. The change was first spotted by Bloomberg.

    The company appears to have updated its public AI principles page, erasing a section titled “applications we will not pursue,” which was still included as recently as last week.

    Read more from Maxwell Zeff here: https://tcrn.ch/4aKfIZa

    #TechCrunch #technews #artificialintelligence #Google #policy

    Reply
  13. Tomi Engdahl says:

    Mikko Hyppönen has a frightening vision of the new invention: “Then it will indeed tell you…”
    A world-shaking invention threatens to hand over the keys to the most dangerous knowledge.
    https://www.is.fi/digitoday/tietoturva/art-2000010994491.html

    Summary:
    The Chinese Deepseek AI is raising concerns because of its open source code.

    Deepseek could enable the spread of dangerous knowledge, such as how to build a hydrogen bomb.

    Security expert Mikko Hyppönen believes Deepseek could help criminals find new vulnerabilities.

    Deepseek’s openness threatens to undo the security work done by other companies.

    Reply
  14. Tomi Engdahl says:

    Texas Governor Orders Ban on DeepSeek, RedNote for Government Devices

    “Texas will not allow the Chinese Communist Party to infiltrate our state’s critical infrastructure through data-harvesting AI and social media apps,” Abbott said.

    https://www.securityweek.com/texas-governor-orders-ban-on-deepseek-rednote-for-government-devices/

    Texas Republican Gov. Greg Abbott issued a ban on Chinese artificial intelligence company DeepSeek for government-issued devices, becoming the first state to restrict the popular chatbot in such a manner. The upstart AI platform has sent shockwaves throughout the AI community after gaining popularity amongst American users in recent weeks.

    Reply
  15. Tomi Engdahl says:

    ChatGPT, DeepSeek Vulnerable to AI Jailbreaks

    Different research teams have demonstrated jailbreaks against ChatGPT, DeepSeek, and Alibaba’s Qwen AI models.

    https://www.securityweek.com/ai-jailbreaks-target-chatgpt-deepseek-alibaba-qwen/

    Reply
  16. Tomi Engdahl says:

    Parmy Olson / Bloomberg:
    LatticeFlow AI, which measures the regulatory compliance of AI models, says DeepSeek’s R1 ranks the lowest in cybersecurity among leading AI models

    The DeepSeek AI Revolution Has a Security Problem
    https://www.bloomberg.com/opinion/articles/2025-02-05/deepseek-ai-revolution-has-a-security-problem

    The model that shocked Silicon Valley by doing more with less might be doing too little on safety. That could hurt its business prospects.

    Reply
  17. Tomi Engdahl says:

    Reuters:
    Sources say the focus of the AI Action Summit in Paris on February 10 and February 11 will be on seeking a global consensus on AI principles, not new regulation

    Trump, DeepSeek in focus as nations gather at Paris AI Summit
    https://www.reuters.com/technology/artificial-intelligence/trump-deepseek-focus-nations-gather-paris-ai-summit-2025-02-05/

    Reply
  18. Tomi Engdahl says:

    Jonathan Bell / Wallpaper*:
    OpenAI unveils a visual rebrand, featuring a new bespoke typeface called OpenAI Sans, a refined logo, and a new color palette — A new typeface, word mark, symbol and palette underpin all the ways in which OpenAI’s technology interacts with the real world

    OpenAI has undergone its first ever rebrand, giving fresh life to ChatGPT interactions
    https://www.wallpaper.com/tech/openai-has-undergone-its-first-ever-rebrand-giving-fresh-life-to-chatgpt-interactions

    A new typeface, word mark, symbol and palette underpin all the ways in which OpenAI’s technology interacts with the real world

    Reply
  19. Tomi Engdahl says:

    Melissa Heikkilä / Financial Times:
    Microsoft AI CEO Mustafa Suleyman poaches three Google DeepMind former colleagues, including two who built NotebookLM’s Audio Overviews and worked on Astra — Rival companies in fierce battle for talent in race to build powerful artificial intelligence ‘agents’
    https://www.ft.com/content/51bb0496-59ab-4a75-a410-14c097104594

    Reply
  20. Tomi Engdahl says:

    Lina M. Khan / New York Times:
    Lina Khan says that DeepSeek’s breakthroughs highlight how US Big Tech’s monopolistic practices may actually be hampering the US’ technological leadership — When Chinese artificial intelligence firm DeepSeek shocked Silicon Valley and Wall Street with its powerful new A.I. model …

    Stop Worshiping the American Tech Giants
    https://www.nytimes.com/2025/02/04/opinion/deepseek-ai-big-tech.html?unlocked_article_code=1.uU4.mVg9.o3nE5upBf_2c&smid=nytcore-ios-share&referringSource=articleShare

    When Chinese artificial intelligence firm DeepSeek shocked Silicon Valley and Wall Street with its powerful new A.I. model, Marc Andreessen, the Silicon Valley investor, went so far as to describe it as “A.I.’s Sputnik moment.” Presumably, Mr. Andreessen wasn’t calling on the federal government to start a massive new program like NASA, which was our response to the Soviet Union’s Sputnik satellite launch; he wants the U.S. government to flood private industry with capital, to ensure that America remains technologically and economically dominant.

    As an antitrust enforcer, I see a different metaphor. DeepSeek is the canary in the coal mine. It’s warning us that when there isn’t enough competition, our tech industry grows vulnerable to its Chinese rivals, threatening U.S. geopolitical power in the 21st century.

    Although it’s unclear precisely how much more efficient DeepSeek’s models are than, say, ChatGPT, its innovations are real and undermine a core argument that America’s dominant technology firms have been pushing — namely, that they are developing the best artificial intelligence technology the world has to offer, and that technological advances can be achieved only with enormous investment — in computing power, energy generation and cutting-edge chips. For years now, these companies have been arguing that the government must protect them from competition to ensure that America stays ahead.

    But let’s not forget that America’s tech giants are awash in cash, computing power and data capacity. They are headquartered in the world’s strongest economy and enjoy the advantages conferred by the rule of law and a free enterprise system. And yet, despite all those advantages — as well as a U.S. government ban on the sales of cutting-edge chips and chip-making equipment to Chinese firms — America’s tech giants have seemingly been challenged on the cheap.

    Reply
  21. Tomi Engdahl says:

    Foo Yun Chee / Reuters:
    The EC releases its prohibited AI practices guidelines under the EU’s AI Act, like using AI to track employees’ emotions or to manipulate app and website users — Employers will be banned from using artificial intelligence to track their staff’s emotions and websites will not be allowed …

    EU lays out guidelines on misuse of AI by employers, websites and police
    https://www.reuters.com/technology/artificial-intelligence/eu-lays-out-guidelines-misuse-ai-by-employers-websites-police-2025-02-04/

    BRUSSELS, Feb 4 (Reuters) – Employers will be banned from using artificial intelligence to track their staff’s emotions and websites will not be allowed to use it to trick users into spending money under EU AI guidelines announced on Tuesday.
    The guidelines from the European Commission come as companies grapple with the complexity and cost of complying with the world’s first legislation on the use of the technology.
    The Artificial Intelligence Act, binding since last year, will be fully applicable on Aug. 2, 2026, with certain provisions kicking in earlier, such as the ban on certain practices from Feb. 2 this year.


    Reply
  22. Tomi Engdahl says:

    Bloomberg:
    Sources: OpenAI has spent months talking about potential commercial deals for Sora with film studios, who are worried about how their data may be used and more — Startup has met with Hollywood studios about commercial deals — Studios worry about business implications of new technology

    https://www.bloomberg.com/news/articles/2025-02-05/openai-s-sora-filmmaking-tool-meets-resistance-in-hollywood

    Reply
  23. Tomi Engdahl says:

    Kyle Wiggers / TechCrunch:
    ByteDance researchers demo OmniHuman-1, an AI system trained on 19K hours of video from undisclosed sources and that appears to beat prior deepfake techniques

    Deepfake videos are getting shockingly good
    https://techcrunch.com/2025/02/04/deepfake-videos-are-getting-shockingly-good/

    Researchers from TikTok owner ByteDance have demoed a new AI system, OmniHuman-1, that can generate perhaps the most realistic deepfake videos to date.

    Deepfaking AI is a commodity. There’s no shortage of apps that can insert someone into a photo, or make a person appear to say something they didn’t actually say. But most deepfakes — and video deepfakes in particular — fail to clear the uncanny valley. There’s usually some tell or obvious sign that AI was involved somewhere.

    Not so with OmniHuman-1 — at least from the cherry-picked samples the ByteDance team released.

    According to the ByteDance researchers, OmniHuman-1 only needs a single reference image and audio, like speech or vocals, to generate a clip of an arbitrary length. The output video’s aspect ratio is adjustable, as is the subject’s “body proportion” — i.e. how much of their body is shown in the fake footage.

    Still, OmniHuman-1 is easily head and shoulders above previous deepfake techniques, and it may well be a sign of things to come. While ByteDance hasn’t released the system, the AI community tends not to take long to reverse-engineer models like these.

    The implications are worrisome.

    Last year, political deepfakes spread like wildfire around the globe. On election day in Taiwan, a Chinese Communist Party-affiliated group posted AI-generated, misleading audio of a politician throwing his support behind a pro-China candidate. In Moldova, deepfake videos depicted the country’s president, Maia Sandu, resigning. And in South Africa, a deepfake of rapper Eminem supporting a South African opposition party circulated ahead of the country’s election.

    Deepfakes are also increasingly being used to carry out financial crimes. Consumers are being duped by deepfakes of celebrities offering fraudulent investment opportunities, while corporations are being swindled out of millions by deepfake impersonators. According to Deloitte, AI-generated content contributed to more than $12 billion in fraud losses in 2023, and could reach $40 billion in the U.S. by 2027.

    Last February, hundreds in the AI community signed an open letter calling for strict deepfake regulation. In the absence of a law criminalizing deepfakes at the federal level in the U.S., more than 10 states have enacted statutes against AI-aided impersonation. California’s law — currently stalled — would be the first to empower judges to order the posters of deepfakes to take them down or potentially face monetary penalties.

    Unfortunately, deepfakes are hard to detect. While some social networks and search engines have taken steps to limit their spread, the volume of deepfake content online continues to grow at an alarmingly fast rate.

    Reply
  24. Tomi Engdahl says:

    The Information:
    Sources: Meta is merging Facebook’s and Messenger’s teams into one unit and shuffling its generative AI group, as it prepares for company-wide layoffs next week
    https://www.theinformation.com/briefings/meta-shakes-up-its-generative-ai-messenger-and-facebook-units

    Reply
  25. Tomi Engdahl says:

    Google erases promise not to use AI technology for weapons or surveillance
    https://edition.cnn.com/2025/02/04/business/google-ai-weapons-surveillance?Date=20250205&Profile=CNN&utm_content=1738715460&utm_medium=social&utm_source=facebook&fbclid=IwY2xjawIP_TpleHRuA2FlbQIxMQABHf9UDkmzfPL-WEnFRTNm9HxOuqKwoym6em1QiSokcIexl2mCc82raftdUw_aem_6Q4eKcMzqNFw8T39cTlGpg

    New York (CNN)

    Google’s updated, public AI ethics policy removes its promise that it won’t use the technology to pursue applications for weapons and surveillance.

    Reply
  26. Tomi Engdahl says:

    Eventually, security will become self-healing
    https://etn.fi/index.php/13-news/17115-lopulta-tietoturvasta-tulee-itseaeaen-korjaava

    Cybersecurity is moving toward fully autonomous systems in which AI monitors, anticipates, and repels attacks on its own. Check Point’s vision is to create self-healing security that not only reacts to threats but prevents them before they even emerge, the company’s head of research Nataly Kremer said at the CPX2025 event in Vienna.

    At the core of Check Point’s roadmap is a hybrid mesh architecture that securely combines cloud services and on-premises systems. “Not all data belongs in the cloud. That is why we believe in an open platform that lets different products, including other vendors’ solutions, work together,” a Check Point representative says.

    The security development builds on Check Point’s Infinity platform, whose core rests on three key principles: a unified product, centralized management, and seamless cooperation between different security solutions. Check Point’s gateways do not merely block malicious traffic; they also notify other vendors’ security products about threats, and all of this happens automatically.

    “This is what we mean by a platform: the ability to connect and automate security management so that threats are repelled in cooperation between different systems without manual intervention,” Kremer clarifies.

    AI is an increasingly important part of security. Last year Check Point launched AI Copilot, which helps analyze threats and speeds up response. Today’s AI is still reactive, however; the next step is proactive protection.

    “AI can already detect if a router has a leak, or if security policies are outdated. Some customers have rules that are more than ten years old and have never been updated. AI not only monitors these rules but also updates them automatically to match new threats,” Kremer says.

    Reply
  27. Tomi Engdahl says:

    New Law Would Make It Illegal to Use DeepSeek, Punishable With 20 Years’ Prison Time
    It’s “easily the most aggressive legislative action on AI.”
    https://futurism.com/new-law-china-ai-deepseek-prison?fbclid=IwY2xjawIQpSVleHRuA2FlbQIxMQABHdN4_trtqXLPaKgz15RH6jT_t2CBPpIR3Cwa3L-gcsmXF3sBc1PRemA8Sw_aem_elc_cowTXxRhkkuj2dkHJw

    US senator Josh Hawley (R-MO) has introduced a bill that could effectively make it illegal to use DeepSeek, a new ChatGPT competitor that made huge waves last week, within the United States.

    Hawley’s bill, introduced last week, looks to “prohibit United States persons from advancing artificial intelligence capabilities within the People’s Republic of China, and for other purposes.”

    The bill, described by Harvard AI research fellow Ben Brooks as “easily the most aggressive legislative action on AI,” could land anybody importing “technology or intellectual property” developed in China in prison for 20 years, with fines up to $1 million for individuals and $100 million for companies.

    Needless to say, that’s all pretty extreme, which may doom the bill: it was tabled last week, which in practice often means a proposed new law has lost legislative steam.

    Nonetheless, the bill shows that there’s considerable panic among lawmakers following DeepSeek’s astronomical rise and the enormous selloff it triggered last week.

    Congress is now desperately looking to shut China out to preserve US market interests, with lawmakers as disparate as Hawley and Elizabeth Warren (D-MA) arguing that the Biden administration didn’t act fast enough before implementing a ban on AI chip exports to China starting in 2022.

    “Multiple administrations have failed — at the behest of corporate interests — to update and enforce our export controls in a timely manner,” Hawley and Warren wrote in an appeal to Congress obtained by The Washington Post. “We cannot let that continue.”

    Earlier this month, DeepSeek demonstrated that the performance of OpenAI’s top-of-the-line AI chatbots can be matched while using a tiny fraction of the resources, stoking fears that Wall Street may be massively overpaying.

    Reply
  28. Tomi Engdahl says:

    SoftBank is negotiating a $500 million investment in Skild AI, a software company building a foundational model for robotics at a $4 billion valuation, Bloomberg and Financial Times reported.

    The 2-year-old company raised its previous funding round of $300 million at a $1.5 billion valuation last July from investors, including Jeff Bezos, Lightspeed Venture Partners, and Coatue Management.

    Read more from Marina Temkin on Skild AI here: https://tcrn.ch/4aDlYSr

    #TechCrunch #technews #artificialintelligence #startup #venturecapital

    Reply
  29. Tomi Engdahl says:

    Researchers Link DeepSeek’s Blockbuster Chatbot to Chinese Telecom Banned From Doing Business in US

    DeepSeek has computer code that could send some user login information to China Mobile.

    https://www.securityweek.com/researchers-link-deepseeks-blockbuster-chatbot-to-chinese-telecom-banned-from-doing-business-in-us/

    Reply
  30. Tomi Engdahl says:

    Application Security
    How Agentic AI will be Weaponized for Social Engineering Attacks

    With each passing year, social engineering attacks are becoming bigger and bolder thanks to rapid advancements in artificial intelligence.

    https://www.securityweek.com/how-agentic-ai-will-be-weaponized-for-social-engineering-attacks/

    Social engineering is the most common initial access vector cybercriminals exploit to breach organizations. With each passing year, social engineering attacks are becoming bigger and bolder thanks to rapid advancements in artificial intelligence.

    How is AI Advancing Social Engineering Attacks?

    AI is helping cybercriminals advance their social engineering campaigns in multiple ways:

    Personalized Phishing: AI algorithms can analyze data from social media (such as background, interests, employment, connections, associations, location, etc.) and various OSINT sources to create more personalized and convincing spear phishing attacks.
    Local and Contextual Content: Tools like ChatGPT, Copilot and Gemini can help draft phishing emails that are grammatically correct, contextually appropriate, and translated into any local language. AI can be prompted to mimic a specific writing style or tone, and phishing emails can be drafted in accordance with a recipient’s response or behavior.
    Realistic Deepfakes: Threat actors use deepfake tools to create fake virtual personas and audio clones of senior executives and trusted business partners. Deepfakes are used to convince employees into sharing sensitive information, transfer money, or grant access to an organization’s network.

    AI’s Latest Evolution Amplifies Social Engineering Risks Even Further

    Key Takeaways for Organizations

    Below are some best practices and recommendations for organizations:

    Fight Agentic AI with Agentic AI: To combat advanced social engineering attacks, consider building or acquiring an AI agent that can assess changes to the attack surface, detect irregular activities indicating malicious actions, analyze global feeds to detect threats early, monitor deviations in user behavior to spot insider threats, and prioritize patching based on vulnerability trends.

    Leverage AI-based Security Awareness: Security awareness training is a non-negotiable component to bolstering human defenses. Organizations must go beyond traditional security training and leverage tools that can do things like assign engaging content to users based on risk scores and failure rates, dynamically generate quizzes and social engineering scenarios based on the latest threats, trigger bite-sized refreshers, etc.

    Prepare Employees for Agentic AI Social Engineering: Human intuition and vigilance are critical in combating social engineering threats. Organizations must double down on fostering a culture of cybersecurity, educating employees on the risks of social engineering and the impact on the organization, training to identify and report such threats, and empowering them with tools that can improve security behavior. Gartner predicts that by 2028, a third of our interactions with AI will shift from simply typing commands to fully engaging with autonomous agents that can act on their own goals and intentions. Obviously, cybercriminals won’t be far behind in exploiting these advancements for their misdeeds. Organizations must shore up defenses to prepare for this eventuality by deploying their own AI-based cybersecurity agents, leveraging AI-based security training, and instilling a sense of security responsibility.
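    As a concrete illustration of the “monitor deviations in user behavior” recommendation above, a defender-side agent can baseline each user’s activity and flag sharp outliers; a minimal z-score sketch (the threshold and feature choice are illustrative assumptions, not a product’s method):

    ```python
    import statistics

    # Flag a user action (e.g. number of files accessed in an hour) that
    # deviates sharply from that user's historical baseline.
    def is_anomalous(history, observed, z_threshold=3.0):
        mean = statistics.mean(history)
        stdev = statistics.pstdev(history) or 1e-9  # avoid division by zero
        z = abs(observed - mean) / stdev
        return z > z_threshold

    baseline = [12, 9, 11, 10, 13, 8, 11, 10]   # typical hourly file accesses
    print(is_anomalous(baseline, 11))    # False: within normal range
    print(is_anomalous(baseline, 300))   # True: possible insider threat / exfiltration
    ```

    Real systems use far richer features (login locations, access times, peer-group comparisons), but the principle is the same: learn a baseline, then alert on deviation.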

    Reply
  31. Tomi Engdahl says:

    Google released new AI models for its Gemini service, including Google’s best model so far for coding and complex prompts
    https://mobiili.fi/2025/02/05/google-julkaisi-uusia-tekoalymalleja-gemini-palveluunsa-mukana-googlen-toistaiseksi-paras-malli-koodaukseen-ja-monimutkaisiin-pyyntoihin/

    Reply
  32. Tomi Engdahl says:

    This is how Google Search is being overhauled: internal testing of an “AI Mode” has begun
    https://mobiili.fi/2025/02/06/nain-google-haku-on-uudistumassa-ai-tilan-sisainen-testaus-alkoi/

    Google is developing a new AI mode for its search engine. Internal trials of AI Mode, separate from the previously released AI Overviews summaries, are already underway.

    Google has recently been releasing ever more AI features across its platforms, and the search engine may eventually get an entirely new mode of use. The overhaul, which goes by the working name AI Mode, uses a “customized version” of the Gemini 2.0 AI model to produce versatile answers to users’ questions, drawing on web sources and reasoning.

    Reply
  33. Tomi Engdahl says:

    DeepSeek: What is it, and is it safe to use?
    DeepSeek is China’s answer to ChatGPT. But how does the new Chinese chatbot work? And is it safe to use? Find out here.
    https://kotimikro.fi/tekoaly/deepseek-mika-se-on-ja-onko-sen-kaytto-turvallista

    You probably already know AI tools such as ChatGPT and Gemini. They are so-called language models, which can be used for everything from writing and summarizing texts to much more.

    Chatbots have become extremely popular around the world, but now a new player has entered the game.

    At the start of 2025, the Chinese company DeepSeek has conquered the world with its new language model, DeepSeek-V3.

    DeepSeek’s AI reportedly matches or even beats its leading American rivals, yet it was apparently also considerably cheaper to develop.

    But what does DeepSeek’s AI actually do? And is it safe to use? Find out now.

    Reply
  34. Tomi Engdahl says:

    A breakthrough in AI: a ChatGPT challenger was created for under 50 euros in under 30 minutes
    Aleksi Kolehmainen, 6 Feb 2025 14:17 | updated 6 Feb 2025 14:17 | AI, Digital economy
    AI development costs can be enormous, but researchers have proven that an effective model can also be created at a fraction of the traditional cost.
    https://www.tivi.fi/uutiset/lapimurto-tekoalyssa-chatgptn-haastava-tekoaly-syntyi-alle-50-eurolla-ja-alle-30-minuutissa/77aa00f0-cadc-4ac1-beda-b32f7dca14db

    Reply
  35. Tomi Engdahl says:

    IT giant’s boss predicts an upheaval as big as in the 1990s
    Anna Helakallio, 3 Feb 2025 14:45 | updated 3 Feb 2025 14:45 | AI, Digital economy
    AWS CEO Matt Garman stresses that AI is nevertheless a significant investment in terms of ROI.
    https://www.tivi.fi/uutiset/it-jatin-pomo-ennustaa-yhta-suurta-mullistusta-kuin-1990-luvulla/39044db2-40f1-4249-980f-6e94a31a3685

    Matt Garman, CEO of IT giant Amazon Web Services, believes that investments in AI may soon pay for themselves. According to him, AI models are gradually becoming cheaper and more efficient.

    Reply
  36. Tomi Engdahl says:

    Nokia expects another billion in revenue – Pekka Lundmark: this is what Deepseek means for the company
    Tatu Sailaranta, 3 Feb 2025 07:12 | Stock market
    Pekka Lundmark is placing high expectations on data centers and the defense industry.
    https://www.tivi.fi/uutiset/nokia-odottaa-miljardia-lisaa-liikevaihtoonsa-pekka-lundmark-tata-deepseek-tarkoittaa-yhtiolle/ca8db9ff-70f5-4fef-8e1f-973f0b7c0250

    Reply
  37. Tomi Engdahl says:

    Google released new AI models for its Gemini service, including Google’s best model so far for coding and complex prompts
    https://mobiili.fi/2025/02/05/google-julkaisi-uusia-tekoalymalleja-gemini-palveluunsa-mukana-googlen-toistaiseksi-paras-malli-koodaukseen-ja-monimutkaisiin-pyyntoihin/

    Google introduced its first Gemini 2.0 AI models in December and initially made them available as experimental test versions. Now more advanced test versions have been released.

    In late January, Google announced that the lighter, smaller Gemini 2.0 Flash version of the Gemini 2.0 models had become generally available as a stable release and would henceforth be the default in the Gemini service.

    Now Google has unveiled two new AI models for testing in the Gemini app: Gemini 2.0 Pro Experimental and Gemini 2.0 Flash Thinking Experimental.

    Reply
  38. Tomi Engdahl says:

    According to Google, Gemini 2.0 Pro Experimental is its best model so far for coding and complex prompts. It also has better understanding of, and reasoning over, different kinds of data than any model Google has released to date.

    Reply
  39. Tomi Engdahl says:

    10,000 AI songs a DAY pumped into streaming services!

    AI Music is Destroying the Music Industry! (But There’s Hope, Maybe)
    https://m.youtube.com/watch?v=wcBWOJ06T8k

    Comments:

    Hooray! AI will kill streaming services! There’s still a hope for music industry :)

    Pop music sounds like AI music anyways…ain’t no creativity to music anyways…So yea…I think AI making music in your name is genius now a days

    Reply
  40. Tomi Engdahl says:

    Low-quality videos turned into stunning resolution. Built on a new model architecture, their diffusion-based approach leverages 6B+ parameters and the latest NVIDIA hardware. Their technology analyzes hundreds of frames to restore details with unmatched accuracy.
    #nvidia #ChatGPT #llm #imageprocessing #ai #chatgpt4 #deepseek #nuralnetwork #machinelearning #technothinkers

    https://www.facebook.com/share/r/1E1G9Xi1rq/

    Reply
  41. Tomi Engdahl says:

    ICYMI: Some newly unsealed emails allegedly provide the “most damning evidence” yet that Meta illegally trained its AI models on pirated books: https://arstechnica.visitlink.me/89YfWk


    Reply
  42. Tomi Engdahl says:

    GitHub has announced a slew of updates for Copilot, while also giving a glimpse into a more agentic future for its AI-powered pair programmer.

    Among the notable updates is a feature called Vision for Copilot, which allows users to attach a screenshot, photo, or diagram to a chat, with Copilot generating the interface, code, and alt text to bring it to life.

    So for example, someone on a marketing team could take a screenshot of a web page and illustrate some changes they want made to that page.

    Read more from Paul Sawers on Vision for Copilot here: https://tcrn.ch/4hps3nT

    #TechCrunch #technews #artificialintelligence #Copilot #GitHub

    Reply
  43. Tomi Engdahl says:

    This model utilizes distillation and efficient training techniques to achieve impressive results. https://link.ie.social/vspFdL

    #OpenAI #DeepSeek

    Reply
