Understanding the AI Act: What Every Company Needs to Know Today
Europe has launched the world’s first full regulatory framework for artificial intelligence. The AI Act is bold, ambitious, and unavoidable. Whether you’re experimenting with ChatGPT, integrating third-party AI tools, or building your own AI products, the regulation applies to you.

What many companies don’t realise is that the AI Act has been in force since 1 August 2024. That doesn’t mean every obligation applies overnight: there is a phased timeline, but the countdown has started. Some prohibited practices will be banned within months, and stricter rules for high-risk systems will follow.

So what are the consequences today? Do you need to take action already, even if you “only” use ChatGPT internally? This guide is designed to answer exactly that: what matters now, what can wait, and how to avoid waking up in two years with a compliance problem you could have solved in days.

What is the AI Act?

The AI Act is a European Union regulation that governs the use of artificial intelligence. The law doesn’t really care whether you’re using ChatGPT, a small in-house model, or a third-party API. What truly matters is your role and what you’re doing with the technology. The regulation draws a sharp line between two players:

- The provider: the one who builds, trains, or packages the AI system. If you touch the engine, customise the model, or distribute a solution, you’re playing in the big league of compliance.
- The deployer: the one who uses AI inside the organisation. If you rely on tools created by others, you’re a deployer. Fewer obligations, yes, but still clear responsibilities.

And here is the part most companies overlook: the risk doesn’t come from the model, it comes from the use case. A chatbot that explains your holiday policy? Minimal risk. A classification tool for documents? Innocent. A classification tool that triages patients or candidates? Strictly regulated. The message is simple: the AI Act forces companies to examine not just the technology, but the intention and the impact behind it.

So before asking “what do we need to comply with?”, ask yourself two questions that determine everything that comes next:

- What role do we play: provider or deployer?
- What exactly are we using AI for, and who is affected by those decisions?

From here, the entire regulatory logic unfolds: obligations, transparency, documentation, oversight, and, yes, risk.

FAQ about the AI Act

“I have no idea what the AI Act is. What does it regulate?”
The AI Act regulates how AI systems must be designed, deployed, and supervised in the EU. It defines what counts as an AI system (Art. 3), sets rules for both developers and users, and ensures transparency, safety, and human oversight. It applies to any organisation that uses or provides AI within the EU. Reference: AI Act Art. 3.

“What happens if a company does not comply with the AI Act?”
The AI Act is not a theoretical document: it comes with a very real sanctioning regime. Fines can be extremely high, especially for medium and large companies:

- Prohibited practices: up to €35 million or 7% of global annual turnover, whichever is higher.
- Serious infringements: up to €15 million or 3% of turnover.
- Providing misleading information to authorities: up to €7.5 million or 1% of turnover.

For SMEs and startups, the lower amount applies, but the consequences can still be severe: project shutdowns, loss of clients, and barriers to entering public or corporate procurement processes.

“We only use ChatGPT internally. Does the AI Act affect us?”
Yes, but lightly. You are considered a user, not a developer. Your main obligations are to:

- use the tool responsibly,
- avoid uploading sensitive or personal data,
- apply basic supervision and verification,
- create an internal policy for generative AI use.

You do not need certifications or audits at this stage.

“We use several AI tools from third-party vendors. What does the AI Act require from us?”
You become a responsible deployer, meaning you must:

- ensure the vendor provides clear usage instructions and limitations,
- train your staff,
- monitor the system for errors,
- ensure the data you feed the system is appropriate,
- report incidents if something goes seriously wrong.

Using AI is not passive: the company remains accountable.

“What practical changes should my company expect?”
Expect to:

- document how and where you use AI,
- define internal AI policies,
- increase transparency in AI-driven processes,
- implement human oversight where relevant,
- verify your vendors,
- prepare basic evidence of compliance.

Organisations that develop their own AI must additionally implement governance, quality controls, and documentation structures.

What Early-Stage AI Users Really Need to Know

If you are experimenting with generative AI, your main responsibilities are:

- protecting data,
- supervising outputs,
- guaranteeing ethical usage,
- ensuring transparency with users or employees affected.

No heavy compliance is required yet.

How Somia Helps Companies Comply with the AI Act

Somia’s core mission is simple: give you a platform where AI Act compliance is already built in, so you can focus on creating value, not on navigating regulation. You use the platform knowing that transparency, traceability, and oversight are pre-configured.

✔ Automatic logs and traceability
Every interaction, action, and decision is recorded. You always have a full audit trail ready.

✔ Human validation whenever you need it
Whether it’s a workflow, a document, a message, or a decision, you can enable human approval on demand.

✔ We guide you through defining your use cases
Somia helps you understand, refine, and structure your use cases, and implements them safely inside our platform, aligned with the AI Act’s risk logic.

✔ Works with any AI model
OpenAI, DeepSeek, open-source models… Somia ensures outputs, logs, and processes meet EU rules, no matter what model sits underneath.

Discover our offer

Want to Know More?

If you want to understand how Somia can help your company use AI safely, responsibly, and in line with the EU AI Act, request a demo.

Beyond private solutions, it’s important to highlight that governments and public institutions also provide support to help companies understand and adopt responsible AI practices. One initiative we strongly recommend is the Observatori d’Ètica en Intel·ligència Artificial de Catalunya (OEIAC). The OEIAC is part of the Catalonia.AI strategy of the Government of Catalonia and is dedicated to promoting the ethical and responsible use of AI. Together with the Catalan Government, it has launched the Manifest for Trustworthy AI, reinforcing the commitment to safe, transparent, and legally compliant AI adoption. Critically for companies, the OEIAC offers a free ethical AI assessment framework, which allows organisations to perform an initial audit of their AI use cases. This helps businesses understand risks, identify gaps, and align their practices with ethical and regulatory expectations, at no cost.
Conclusions

The arrival of the AI Act should not be seen as a threat, but as an opportunity to improve quality and safety in the use of technology. At Somia, we believe in ethical, controlled, and truly useful artificial intelligence for companies of all sizes. Having agents that are adapted to the legal environment will allow companies not only to avoid sanctions, but also to gain efficiency, reputation, and long-term sustainability.

Start today with a safe and regulated AI solution

Legal Notice

This publication is intended solely for general informational purposes and should not be considered legal, technical, or professional advice. It must not be used as a basis for making decisions or refraining from action regarding specific situations. Any decisions related to AI Act compliance or other applicable regulations should be made with professional advice tailored to the particular circumstances of each organization. The author assumes no responsibility for any actions taken, or not taken, based on the information contained in this document, nor for any consequences that may arise from its use.
AI Terminology: ML, DL and LLM explained
In recent years, the rise of Generative AI has brought a flood of buzzwords like AI, Machine Learning, Deep Learning, and Large Language Models. While these terms are frequently used, they are often misunderstood and used interchangeably, making it difficult to grasp their distinct meanings. This article aims to clear up that confusion by explaining the key differences between these technologies and why it’s important to understand them.

What is Artificial Intelligence?

Let’s begin by addressing the key question: what is artificial intelligence? AI is a broad concept that refers to various techniques designed to replicate human logic, processes, and intelligence in machines, algorithms, or “artificial” brains. These techniques can be grouped into four main areas, each a more advanced subset of the previous one: artificial intelligence (AI), machine learning (ML), deep learning (DL), and large language models (LLMs). In this article, we will explore each of these fields in chronological order and with increasing levels of complexity.

To replicate human logic, AI can use approaches that range from rule-based systems, where we explicitly program the rules into the machine, to more advanced methods that enable the system to learn these rules and logic by analyzing data with implicit, complex patterns. The journey of AI begins with the simplest approach, rule-based AI and algorithms, which serves as the foundation for more sophisticated techniques.

Rule-Based AI and Algorithms

The most traditional form of AI involves creating rule-based algorithms. These algorithms follow predefined rules to replicate human reasoning. For instance, if we wanted to identify whether an image contains a dog, we could create a list of predefined facial characteristics, such as “pointed ears”, “snout”, “wet nose”, and “whiskers”. We would then analyze the image to check for these specific features. If the image matches most of the predefined characteristics, we classify it as containing a dog; if it lacks these key features, we classify it as not containing one.

In rule-based AI, we have full control over the algorithm’s logic. We know exactly how it processes information and why it makes specific decisions. This type of AI has been used widely, from regulating thermostats to guiding robots, and even powering Deep Blue, the chess program that famously defeated world champion Garry Kasparov in 1997. Such algorithms range from simple logic to deep decision processes built on highly complex mathematical and statistical models. However, rule-based algorithms are limited in their ability to learn or adapt: if we need the system to behave differently, we must manually modify or add new rules. This contrasts sharply with more advanced AI systems, like those powered by machine learning, which can adapt and learn from data.
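To make the dog example concrete, here is a minimal sketch of a rule-based classifier. Everything in it is an illustrative assumption: the feature names, the looks_like_dog function, and the "at least half the features" threshold. A real system would also need the genuinely hard part, extracting such features from raw pixels.

```python
# A minimal rule-based classifier sketch. The rules are written by hand,
# so changing the system's behavior means editing the rules themselves.
# Feature names and the majority threshold are illustrative assumptions.

DOG_FEATURES = {"pointed ears", "snout", "wet nose", "whiskers"}

def looks_like_dog(observed_features: set) -> bool:
    """Classify as 'dog' when most of the predefined features are present."""
    matches = len(DOG_FEATURES & observed_features)
    return matches >= len(DOG_FEATURES) / 2  # "most" here means at least half

print(looks_like_dog({"pointed ears", "snout", "wet nose"}))  # True
print(looks_like_dog({"feathers", "beak"}))                   # False
```

Note that the logic is fully transparent: every decision can be traced back to a rule a human wrote, which is exactly the property the next approaches give up.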
Machine Learning (ML)

Machine Learning (ML) is a subset of AI that enables computers to learn from data without the need for explicit programming for every task. Instead of providing predefined rules, we supply examples, such as images labeled as “contains dog” or “does not contain dog”, and the computer learns patterns from the data to make its own decisions. In this process, we control how the AI learns, but not exactly what it learns. Developers set up the learning framework, but the system itself discovers the underlying logic. To modify its behavior, we introduce new data rather than rewriting the rules. While we retain control over the learning process and the input data, we lose the ability to directly govern the specific decisions the algorithm makes.

Machine learning includes a broad range of models, each suited to learning different patterns in data. When referring to traditional ML, as opposed to deep learning, we typically talk about models such as regression, decision trees, or XGBoost. A decision-tree model, for example, constructs a complex tree of decisions from the provided data, making it much more adaptable than a rule-based system. Rather than manually defining the tree’s structure, as we would in a rule-based approach, the model autonomously learns the most effective “decisions” from a set of inputs and the desired output. During training, it refines these decisions to better align its results with the desired outcome, learning through a diverse set of input-output examples.

For instance, consider a machine learning model designed to classify objects based on their characteristics, such as diameter and color. First, we extract the relevant features from the dataset (e.g., object diameter and color). The model then uses these features to split the objects into different categories: it might first split based on diameter, and then further refine the classification using color. By training the model on labeled examples, it learns to accurately classify new objects based on these features.
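The sketch below shows one way this example could look in code, using scikit-learn's decision tree as a stand-in for "traditional ML". The dataset, the fruit labels, and the 0/1 color encoding are invented for illustration; the point is that the split rules are learned from the labeled examples rather than written by hand.

```python
# A minimal decision-tree sketch of the diameter/color example.
# The data, labels, and color encoding (0=green, 1=orange) are invented
# for illustration; the tree learns its own splits from these examples.
from sklearn.tree import DecisionTreeClassifier

# Each row is [diameter_cm, color_code]
X = [[3.0, 1], [3.2, 1], [7.5, 0], [8.0, 0], [7.8, 1], [2.9, 0]]
y = ["mandarin", "mandarin", "apple", "apple", "orange", "lime"]

model = DecisionTreeClassifier(max_depth=2, random_state=0)
model.fit(X, y)  # the model chooses its own splits, e.g. on diameter first

print(model.predict([[3.1, 1]]))  # expected: ['mandarin']
```

To change this model's behavior, we would add or relabel rows of data and retrain, rather than editing any classification rule directly.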
ML models are widely used in applications like product recommendations, energy demand forecasting, and credit risk assessment. They leverage historical data and patterns of similar behavior to predict future outcomes. However, as data becomes more complex, it becomes increasingly difficult to fully comprehend how these systems arrive at their conclusions. And there are certain tasks, often requiring intuition or common sense, that even highly sophisticated decision trees struggle to handle. This is where deep learning excels, addressing the limitations of traditional ML by uncovering more intricate patterns in the data.

Deep Learning (DL)

Deep learning, a subset of ML, takes things a step further by using artificial neural networks inspired by how biological neurons work. These networks can learn complex patterns that were previously impossible to detect. For example, a deep learning model tasked with analyzing emotions in text doesn’t just count words; it understands the relationships between them, the context, and even nuances like sarcasm and humor.

The way deep learning models operate is highly complex. Each layer in the neural network processes data differently, making it difficult to understand exactly how decisions are made. These models are often referred to as “black boxes” because, while we can see the inputs and outputs, the inner workings remain opaque. This opacity is particularly concerning in high-risk sectors like healthcare, finance, and legal systems: relying on decisions made by AI without clear reasoning, especially when those decisions can deeply impact someone’s life, poses significant risks. In sectors where explainability is critical, there is growing attention to developing tools and techniques that provide more insight into how deep learning models reach their conclusions. However, achieving full transparency in these systems remains a challenge.

Despite their complexity, deep learning models are extremely powerful and are widely used in applications such as voice recognition, image analysis, and text classification. Real-world examples include self-driving cars, automatic quality inspection on production lines, spam email filtering, and traffic monitoring through smart cameras.
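As a rough illustration of what "layers" means here, the sketch below builds a tiny neural network in PyTorch. The task (a toy sentiment scorer over a 100-word vocabulary), the layer sizes, and the bag-of-words input are all assumptions made up for this example; real deep learning models contain millions or billions of such weights, which is exactly why their decisions are hard to interpret.

```python
# A tiny neural-network sketch in PyTorch. The task and sizes are
# illustrative assumptions: a toy sentiment scorer over a 100-word
# vocabulary. Each layer transforms the data in turn; the learned
# weights inside them are what make the model a "black box".
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(100, 32),  # layer 1: compress the bag-of-words input
    nn.ReLU(),           # non-linearity between layers
    nn.Linear(32, 1),    # layer 2: map hidden features to a single score
    nn.Sigmoid(),        # squash to a 0-1 "positive sentiment" probability
)

x = torch.zeros(1, 100)    # a fake one-document bag-of-words vector
x[0, 5] = x[0, 42] = 1.0   # pretend words 5 and 42 appear in the text
print(model(x))            # untrained output, roughly 0.5 before training
```

Even in this toy network, no single weight corresponds to a human-readable rule; interpretability tools can only approximate why a given input produced a given score.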
One of the most notable applications of deep learning is natural language processing, where large language models (LLMs) like ChatGPT have revolutionized how we process text.

Large Language Models (LLMs)

LLMs are specialized AI systems designed to understand and generate human language. They can perform tasks like classifying, summarizing, and even creating text. But how does an AI model learn the intricacies of language? LLMs are trained on massive amounts of text, learning to predict the next word in a sentence, much like the predictive text function on a smartphone. By continuously predicting word after word, these models can generate coherent and contextually relevant text.

However, LLMs don’t actually “understand” the content. They generate text based on patterns found in their training data, earning them the nickname “stochastic parrots”: they repeat patterns rather than grasp meaning. The decision-making process of these models is highly complex, involving billions of parameters that adjust during training, making it impossible to explain exactly why they choose one word over another. This complexity presents challenges in controlling the ethical use of AI. Companies are investing heavily in methods to guide AI behavior, but we are still far from having full control.
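The "predict a word, append it, repeat" loop can be sketched without any real LLM. In the toy example below, a hand-written bigram table stands in for the billions of learned parameters; a real model scores every token in a large vocabulary and conditions on far more context than just the previous word.

```python
# A toy sketch of next-word generation. The hard-coded bigram table is a
# stand-in for patterns an LLM learns from massive text corpora; only the
# loop (predict the next word, append it, repeat) mirrors the real process.
import random

BIGRAMS = {  # invented "learned" continuations: word -> possible next words
    "the": ["cat", "dog"],
    "cat": ["sat", "slept"],
    "dog": ["barked"],
    "sat": ["down"],
}

def generate(start: str, max_words: int = 5) -> str:
    words = [start]
    for _ in range(max_words):
        candidates = BIGRAMS.get(words[-1])
        if not candidates:  # no known continuation: stop generating
            break
        words.append(random.choice(candidates))  # the "stochastic" part
    return " ".join(words)

print(generate("the"))  # e.g. "the cat sat down"
```

The random choice at each step is why the same prompt can yield different continuations, and why "stochastic parrot" is an apt nickname: the table is repeated, not understood.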
Conclusion

AI has the potential to transform our daily lives and the way the world functions. However, it’s important to recognize that AI isn’t a one-size-fits-all solution. It encompasses a wide range of technologies, each with its own strengths and applications. Understanding the distinctions between these technologies is key to choosing the right approach for each specific challenge. Different AI methods require different volumes and types of data, as well as distinct processes for training and optimizing models for particular tasks. The first step in harnessing the power of AI effectively is to gain a clear understanding of the various fields within this broad and rapidly evolving discipline.