I am originally from Mexico, where I began my engineering training with a BSc and MEng in Electronic and Electrical Engineering at the Tecnológico Nacional de México. Those years taught me a way of working that has stayed with me. Before I trust a result, I want to know what it rests on, what has been left out, and why. I would rather write down a plain assumption than smuggle it in, and I would rather keep a model simple and honest than make it look complete on paper. That habit did more than shape my technical style; it pulled me towards the awkward corners of engineering, the places where devices are pushed close to their limits, where uncertainty stops being a footnote, and where the real behaviour of a system refuses to match the tidy story told by a schematic.
In 2015 I completed my PhD in Electronic and Electrical Engineering at the University of Southampton in the UK. My thesis, Switched Linear Differential Systems, received the Institution of Engineering and Technology Best UK PhD Thesis Award in Control and Automation. I was grateful for the recognition, but I value most what the work demanded: discipline in reasoning, patience in checking, and the courage to be explicit about what a result does not cover. A claim is only convincing when the chain of reasoning is clear and when its limits are stated openly. In control, confidence is not enough. The argument has to be checkable, and it has to remain checkable when the system is no longer behaving in the neat regime we first imagined.
That view has shaped my research direction. I work at the intersection of control theory, power electronics, and power systems, with data driven control at the centre. Classical control remains essential, with its language of dynamical models, feedback, stability, and performance, and I still value the clarity that comes from a well stated model. Yet modern energy systems often break the quiet assumption behind much traditional design: that a model stays accurate as conditions change. Real plants heat up, age, drift, and operate across wide ranges. Networks reconfigure. Sensors degrade. Disturbances arrive in ways that are rarely neat. In many practical settings, the model is not wrong, but incomplete, and incompleteness is often what matters when we are asked to guarantee safe behaviour.
Data driven control starts from this practical fact. If a system changes, measurements are not just for logging; they are evidence about how the system behaves now, in the condition it is actually in. This is not an excuse to abandon rigour. It forces more rigour, because data is imperfect and often biased by how systems are operated. Measurements are noisy, sensors can fail, some operating regimes are missing, and the rare events that decide safety may not appear in the dataset at all. This tension, between what we want to guarantee and what the data can genuinely support, leads to a precise question that guides much of my work: what can we guarantee from data, and what can we not?
I treat that question as a technical programme rather than a slogan. If we design control from measurements, we still have to answer the classic control questions, but we must answer them with explicit links to the data. Where is the state if we do not begin with a state space model? What replaces poles and modes when we work directly with trajectories? When is a method truly data driven, and when is it simply system identification followed by standard design under a new label? How much noise and sensor error can be tolerated before stability guarantees fail, and which failures are silent enough to evade simple checks? I focus on methods that give clear answers to these questions, not only good numerical performance in idealised simulations.
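To make "working directly with trajectories" a little more concrete, here is a minimal sketch in the spirit of the fundamental lemma from behavioural systems theory: a single, sufficiently exciting measured trajectory of a linear time-invariant system can parameterise every other trajectory of that system, so predictions can be made without first identifying a state space model. The plant, the data length, and the window length below are hypothetical choices made only for illustration, and the data are assumed noise-free.

```python
# Minimal sketch: one measured trajectory standing in for a state-space model,
# in the spirit of the fundamental lemma from behavioural systems theory.
# The plant and all dimensions are hypothetical; the data are noise-free.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical second-order plant, used here only to generate the data.
A = np.array([[0.9, 0.2], [-0.1, 0.8]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])

def simulate(u):
    """Output of the plant driven by u from a zero initial state."""
    x, y = np.zeros(2), []
    for uk in u:
        y.append((C @ x).item())
        x = A @ x + B[:, 0] * uk
    return np.array(y)

def hankel(w, L):
    """Hankel matrix with L rows built from the signal w."""
    return np.column_stack([w[i:i + L] for i in range(len(w) - L + 1)])

# One persistently exciting experiment: random input, measured output.
T, L = 200, 12
u_d = rng.standard_normal(T)
y_d = simulate(u_d)
H_u, H_y = hankel(u_d, L), hankel(y_d, L)

# Sanity check: the input Hankel matrix has full row rank (persistency of
# excitation of order L; the lemma itself asks for order L plus the state dimension).
assert np.linalg.matrix_rank(H_u) == L

# "Simulate" a fresh trajectory directly from data: find a combination of the
# recorded windows that matches the new input and the first few outputs
# (which pin down the initial condition), then read off the predicted outputs.
u_new = np.sin(0.3 * np.arange(L))
y_true = simulate(u_new)          # ground truth, kept only for comparison
T_ini = 4
lhs = np.vstack([H_u, H_y[:T_ini]])
rhs = np.concatenate([u_new, y_true[:T_ini]])
g, *_ = np.linalg.lstsq(lhs, rhs, rcond=None)
y_pred = H_y @ g

print("max prediction error:", np.max(np.abs(y_pred - y_true)))
```

The numerics are not the point. The rank condition and the matching constraints are exactly the kind of explicit, checkable link between data and guarantee that I care about, and the moment the data are noisy or the excitation is poor, the construction has to be revisited, which is where the real research lies.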
Energy systems make these questions urgent. Electrification and renewable integration are changing how power networks behave and how failures propagate. As systems become increasingly inverter dominated, stability and protection depend less on the passive dynamics of synchronous machines and more on control algorithms inside converters. In practice, this means that voltage and frequency behaviour is shaped by many controllers acting at fast time scales through sensing, computation, and communication. Stability is no longer only a property of the electrical network. It is also a property of software interacting with physics, and that interaction can be subtle: small design choices in sensing, filtering, timing, or saturation can move a system from robust to brittle.
At the same time, the grid is becoming highly instrumented. Data streams from converters, machines, and networks can reveal couplings, regime changes, and early signs of instability that are hard to capture analytically, especially in systems that no longer sit near a single operating point. But large datasets can also create false confidence. They often describe normal operation well while under-representing stressed conditions, because operators avoid risky regions and because major failures are rare. If we are not careful, a method can look excellent precisely because it has never been tested in the conditions where it is most needed. For this reason, I work on approaches that combine model based structure with data informed design in ways that remain explainable and testable. Models help us state mechanisms and formulate hypotheses. Data helps us test those hypotheses, detect when behaviour has shifted, and quantify uncertainty using real operating records. The aim is control and monitoring that remain dependable when conditions change, not only when assumptions hold.
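As one small, concrete example of what "detect when behaviour has shifted" can look like in its simplest form, the sketch below compares measurements against a nominal model and raises an alarm when a windowed residual statistic leaves the range seen in normal operation. The model, the drift, the noise level, and the threshold are all hypothetical numbers chosen only for illustration; in a real network these choices have to be calibrated and validated with far more care.

```python
# Minimal sketch (hypothetical numbers throughout): residuals against a nominal
# model, with an alarm when a sliding RMS of the residual exceeds a threshold
# calibrated on known-normal data.
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical nominal first-order model: y[k+1] = a*y[k] + b*u[k].
a_nom, b_nom = 0.95, 0.5

def run_plant(u, a, b, noise=0.02):
    """Simulate the 'true' plant, whose parameter a may drift during the run."""
    y = np.zeros(len(u) + 1)
    for k, uk in enumerate(u):
        y[k + 1] = a[k] * y[k] + b * uk + noise * rng.standard_normal()
    return y[:-1]

N = 600
u = rng.standard_normal(N)
a_true = np.where(np.arange(N) < 400, 0.95, 0.80)   # behaviour shifts at k = 400
y = run_plant(u, a_true, b_nom)

# Residual between each measurement and the nominal model's one-step prediction.
r = y[1:] - (a_nom * y[:-1] + b_nom * u[:-1])

# Calibrate on known-normal data, then monitor a sliding RMS of the residual.
W = 50
normal_rms = np.sqrt(np.mean(r[:200] ** 2))
rms = np.sqrt(np.convolve(r ** 2, np.ones(W) / W, mode="valid"))
alarm = int(np.argmax(rms > 3 * normal_rms))          # first window above threshold
print("shift flagged around sample", alarm + W)
```

Real systems need more than a threshold on one residual, but the structure is the point: the model supplies the hypothesis, the data test it, and the alarm logic is explicit enough to be examined and criticised.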
Once control depends on data streams, resilience becomes part of the control problem. Modern sensing, time synchronisation, communication, and software improve observability and flexibility, but they also create new dependencies, and dependency is another form of uncertainty. In inverter dominated networks, a cyber incident can affect closed loop dynamics by delaying measurements, corrupting signals, or manipulating setpoints. Even without a malicious actor, timing faults, packet loss, clock drift, and data dropouts can push a controller outside the regime in which its guarantees were proved. A controller that assumes clean data can be fragile, and fragility is often invisible until the system is already in trouble. This is why I treat cybersecurity and resilience as integral to data driven control and monitoring. Credible methods must include credible failure analysis, detection mechanisms, and safe responses, with a clear account of what happens when the data you depend on becomes unreliable.
My work spans transmission and distribution networks, power electronic converters, electrical machines and drives, and energy storage systems. On the device side, I work on modelling, control, and diagnosis of machines and drives, and on control oriented design and operation of converter systems. These topics connect directly to network level dynamics because device behaviour often drives system level phenomena: converter control loops, protection thresholds, and operational limits can shape what the wider network experiences as stability or instability. At the network level, I study modelling and analysis of converter rich grids and develop monitoring and fault detection methods for medium voltage networks. Across these areas, I aim for methods that meet three standards. The mathematics is sound. The assumptions are explicit. Performance is tested under uncertainty, faults, and practical constraints, with a clear path to implementation and validation, so that the story we tell on paper has a direct connection to what will happen in the field.
I have held academic posts at Tecnológico de Monterrey and the University of Sheffield. I have led and contributed to funded projects supported by SENER-CONACYT in Mexico and UKRI, and I have published over 80 journal papers. Many contributions have come through collaborations with academic and industrial partners. I value this work because it keeps research grounded and keeps the stakes visible. Deployment constraints reveal whether the bottleneck is sensing, computation, data quality, or modelling, and they show early where a method needs strengthening before it can be trusted. They also sharpen the questions that matter most: not what works once, but what keeps working when conditions, data, and operating priorities shift.
Supervision and mentoring are central to my academic work. I have supervised, and collaborated with, more than ten PhD researchers, and I treat communication as part of research practice from the beginning, not as something added at the end. I try to build an environment where students can ask questions freely, present ideas clearly, and learn how to make claims that survive scrutiny. I enjoy working with researchers who care about mechanisms as much as outcomes, and who are willing to test ideas against critique, evidence, and implementation realities. A PhD, in my view, is training in judgement. It is learning what can be claimed, what must be tested, what must be qualified, and what should remain uncertain until the evidence is strong. It is also learning how to handle discomfort productively: the discomfort of a counterexample, of an assumption that turns out to be too strong, or of data that refuses to fit a convenient narrative.
I am a Fellow of the Higher Education Academy and a Senior Member of the IEEE. I see these roles mainly as reminders of responsibility: to reason carefully, communicate precisely, and build work that holds up under stress, including the stress of peer review, replication, and deployment.
Prospective PhD students interested in data driven control, electrification, renewable integration, and cyber secure control and monitoring of modern power systems are welcome to get in touch. If you want to work on methods that link measurements to stability and performance with clear assumptions and credible guarantees, and if you care about real deployment constraints as much as theory, we may be a strong match. The transition under way is technically demanding, and its failure modes are becoming more visible as control and software take on a larger share of responsibility for system behaviour. Careful, transparent research can make a practical difference, not only by explaining what is changing, but by shaping solutions that remain reliable, secure, and worthy of public trust even when the operating conditions are not kind.