Canberra, Australian Capital Territory, Australia
2K followers
500+ connections
About
Articles by Paul
Activity
Experience & Education
Licenses & Certifications
Data-Centric AI: Best Practices, Responsible AI, and More
LinkedIn
Issued · Credential ID 1037e20d269e4f7cad36e6657eecefe9903230013dd31483ff4de846351c423c
Introduction to AI-Native Vector Databases
LinkedIn Learning
Issued · Expires · Credential ID de5e76a72aafa7a475d11a3fcd1f9f89d0cb7536f081122866852495e615defc
AWS Certified Solutions Architect - Associate
Amazon - In partnership with Alpine Testing Solutions
Volunteer Experience
-
Member of the Parish Council
St Matthias Anglican Church
- 6 years 1 month
Parish council of St Matthias Anglican church, Oxford St, Sydney (1000+ members, 10+ staff, 10+ locations, also experienced at running small groups and teaching).
-
Council Member
Crossroads Christian Church
- 13 years 1 month
Member of the Council of Crossroads church, and board member of FOCUS, an ANU/AFES affiliated association (500+ members, 10+ staff, 100s of volunteers, multiple locations). Complex strategic, financial and policy decision making, spin-out of independent child organizations, co-ordination with other Canberra and Australian government and non-profit bodies, hiring and firing of staff, training and performance management of staff, decision making, chairing and public speaking at meetings including AGMs.
-
President/Chair of Mission Committee
Waikato Student's Christian Fellowship (WSCF)
- 3 years
President of the University of Waikato Tertiary Students Christian Fellowship affiliated student club (WSCF), and then chair of the Mission Committee. Planned and ran regular inward and outward focussed activities, including multiple debates (between visiting speakers and local academic staff) and lectures/events, and a Mission focussed on a Christian view of nuclear disarmament (the mission speaker was Rev. Ray Galvin, who wrote "The Peace of Christ in a Nuclear Age", 1983).
Publications
-
Apache Kafka Connect Architecture Overview
Instaclustr.com
Kafka Connect is an API and ecosystem of 3rd party connectors that enables Apache Kafka to be scalable, reliable, and easily integrated with other heterogeneous systems (such as Cassandra, Spark, and Elassandra) without having to write any extra code. This blog is an overview of the main Kafka Connect components and their relationships. We’ll cover Source and Sink Connectors; Connectors, Plugins, Tasks and Workers; Clusters; and Converters.
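The components above can be seen in the stock FileStreamSource connector that ships with Apache Kafka. A minimal, purely illustrative standalone connector configuration (the file path, topic name, and converter choices are assumptions, not from the blog) might look like:

```properties
# Stock FileStreamSource example that ships with Apache Kafka;
# real connectors for Cassandra etc. are third-party plugins
# with their own properties.
name=local-file-source
connector.class=org.apache.kafka.connect.file.FileStreamSourceConnector
tasks.max=1
file=/tmp/input.txt
topic=connect-test
# Converters turn Connect records into bytes on the wire (and back).
key.converter=org.apache.kafka.connect.json.JsonConverter
value.converter=org.apache.kafka.connect.json.JsonConverter
```

A worker runs this connector as one or more tasks; scaling out is a matter of adding workers to the Connect cluster.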
-
“Kongo” Part 3 – Apache Kafka: Kafkafying Kongo – Serialization, One or Many topics, Event Order Matters
Instaclustr.com
In the previous blog (“Kongo” Part 2: Exploring Apache Kafka application architecture: Event Types and Loose Coupling) we made a few changes to the original application code in order to make Kongo more Kafka-ready. We added explicit event types and made the production and consuming of events loosely-coupled using the Guava EventBus. In this blog we build on these changes to get an initial version of Kongo running on Kafka.
Step 1: Serialise/deserialise the event types
Step 2: One or Many Topics?
Step 3: Matter event order does? Depends it does.
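Why event order matters comes down to partitioning: Kafka only guarantees order within a partition, and records with the same key always land on the same partition. A toy model (not the Kongo code; the partitioner here is a stand-in for Kafka's murmur2-based default) illustrates the idea:

```python
# Records with the same key hash to the same partition, so all of one
# truck's events stay in a single partition, in production order.
from collections import defaultdict

NUM_PARTITIONS = 3

def partition_for(key: str, num_partitions: int = NUM_PARTITIONS) -> int:
    # Stand-in for Kafka's default (murmur2 hash of the key) partitioner.
    return hash(key) % num_partitions

def produce(events):
    """Assign (key, value) events to partitions, preserving send order."""
    partitions = defaultdict(list)
    for key, value in events:
        partitions[partition_for(key)].append((key, value))
    return partitions

events = [("truck-1", "load"), ("truck-2", "load"),
          ("truck-1", "depart"), ("truck-1", "arrive")]
parts = produce(events)

# Every truck-1 event is in one partition, in the order it was sent.
truck1 = [v for k, v in parts[partition_for("truck-1")] if k == "truck-1"]
assert truck1 == ["load", "depart", "arrive"]
```

Events for different keys may end up in different partitions, so no ordering holds between them; that is the trade-off behind the "one or many topics" question.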
-
Developing a Deeper Understanding of Apache Kafka Architecture Part 2: Write and Read Scalability
insideBigData.com
In the previous article, we gained an understanding of the main Kafka components and how Kafka consumers work. Now, we’ll see how these contribute to the ability of Kafka to provide extreme scalability for streaming write and read workloads.
-
Developing a Deeper Understanding of Apache Kafka Architecture
insideBigData.com
The Apache Kafka distributed streaming platform features an architecture that – ironically, given the name – provides application messaging that is markedly clearer and less Kafkaesque when compared with alternatives. In this article, we’ll take a detailed look at how Kafka’s architecture accomplishes this.
- The Kafka Components – Universal Modeling Language (UML)
- Consumers Rule!
-
Developing a Deeper Understanding of Apache Kafka Architecture - Part 1
insideBigData.com
The Apache Kafka distributed streaming platform features an architecture that – ironically, given the name – provides application messaging that is markedly clearer and less Kafkaesque when compared with alternatives. In this article, we’ll take a detailed look at how Kafka’s architecture accomplishes this.
-
Exploring the Apache Kafka “Castle” Part B: Event Reprocessing
Instaclustr.com
In this second part of the Apache Kafka Castle blog we contemplate the being or not being of Kafka Event Reprocessing, and speeding up time!
- Reprocessing Use Cases
- Reprocessing can make time go faster!
-
Exploring the Apache Kafka “Castle” Part A: Architecture and Semantics
Instaclustr.com
Nobody wants to end up in a Kafkaesque situation such as the village in Kafka’s “The Castle”, so let’s take a closer look at how Apache Kafka supports less Kafkaesque messaging for real-life applications. In this part, we’ll explore aspects of the Kafka architecture (UML, and consumers), and time and delivery semantics.
-
Apache Kafka Christmas Tree Light Simulation
Instaclustr.com
Time to open up an early Christmas present (Kafka, which is on the Instaclustr roadmap for 2018) and use it to write a scalable Christmas tree lights simulation based on a Galton board. Some imagination may be required (perhaps enhanced with a few glasses of Christmas port).
-
Pick‘n’Mix: Cassandra, Spark, Zeppelin, Elassandra, Kibana, & Kafka
Instaclustr.com
One morning when I woke from troubled dreams, I decided to blog about something potentially Kafkaesque: Which Instaclustr managed open-source-as-a-service(s) can be used together (current and future)? Which combinations are actually possible? Which ones are realistically sensible? And which are nightmarishly Kafkaesque!?
-
A Computer Scientist Learns Amazon Web Services (AWS) - a blog
Paul Brebner
"A Computer Scientist learns Amazon Web Services (AWS)" - Sounds harmless? What could go wrong? Journey with me into the fog of AWS for a few weeks as I teach myself AWS from the solution architecture book I recently acquired. I am only likely to comment on things that I find interesting, odd, unusual, etc.
Patents
-
SYSTEM AND A METHOD FOR MODELLING THE PERFORMANCE OF INFORMATION SYSTEMS
Issued AU 2015101031
Projects
-
RAG Benchmarking Project
- Present
An ongoing activity to benchmark Vector Search on Open Source technologies to understand which is best for RAG use cases.
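The baseline any vector search technology is benchmarked against is exact brute-force similarity search. A minimal sketch (illustrative only; the names and toy embeddings are not from the benchmarking project) of cosine-similarity top-k retrieval:

```python
# Brute-force cosine-similarity search: the exact (but unscalable)
# baseline that approximate vector indexes are compared against.
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def top_k(query, corpus, k=2):
    """Return the ids of the k most similar documents to the query."""
    scored = sorted(corpus, key=lambda item: cosine(query, item[1]),
                    reverse=True)
    return [doc_id for doc_id, _ in scored[:k]]

# Toy 2-dimensional "embeddings" for three documents.
corpus = [("a", [1.0, 0.0]), ("b", [0.0, 1.0]), ("c", [0.7, 0.7])]
assert top_k([1.0, 0.1], corpus, k=2) == ["a", "c"]
```

RAG benchmarks then measure how closely (recall) and how much faster an approximate index such as HNSW reproduces this exact result.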
-
What's New In Apache Kafka 4.0?
After attending (and speaking!) at Current London 2025, I was inspired by several talks I attended to take a closer look at some new Kafka 4.0 features. There will be more blogs coming soon.
-
Apache Kafka tiered storage cluster sizing calculator
A small project (but maybe the start of something bigger, an open Kafka performance model?!) to understand, model and predict what resources a Kafka cluster with local storage (SSDs or EBS) needs compared with a cluster with tiered storage enabled. Starting out with an Excel model and Sankey diagrams, I ended up with a JavaScript calculator to compute the minimum resources for IO, network, and AWS instance types/sizes/numbers for Kafka brokers, and local/remote storage sizes and costs. The first blog is now out and the calculator prototype is also available.
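The core of such a calculator is simple arithmetic over throughput, retention, and replication. A back-of-envelope sketch (the formulas and figures below are illustrative assumptions, not the actual calculator's model):

```python
# Back-of-envelope broker disk sizing: local-only vs. tiered storage.
def local_storage_gb(write_mb_s, retention_hours, replication=3):
    # Local-only: brokers hold the full retention window, replicated.
    return write_mb_s * 3600 * retention_hours * replication / 1024

def tiered_local_gb(write_mb_s, local_retention_hours, replication=3):
    # Tiered: only a short "hot" window stays on (replicated) local disk.
    return write_mb_s * 3600 * local_retention_hours * replication / 1024

def tiered_remote_gb(write_mb_s, retention_hours):
    # The full window goes to object storage, typically single-copy.
    return write_mb_s * 3600 * retention_hours / 1024

# Example: 10 MB/s writes, 7 days retention, 4 hours kept locally.
full_local = local_storage_gb(10, 7 * 24)   # ~17,719 GB across brokers
hot_local = tiered_local_gb(10, 4)          # ~422 GB across brokers
remote = tiered_remote_gb(10, 7 * 24)       # ~5,906 GB in object storage
assert hot_local < full_local  # tiering shrinks broker disks dramatically
```

The real model also has to account for IO and network headroom for tiering traffic, which is why instance type selection is part of the calculator.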
-
Apache Kafka Tiered Storage Blog Series
Apache Kafka Tiered Storage is a major architectural change in the Kafka storage architecture, and has potential benefits for streaming more data for less $. In a new blog series I've started to explore the new architecture, do some initial performance testing, report on the results, look at Kafka space/time and tiered storage use cases, and recently, start to build a Kafka performance model to size Kafka workloads and clusters. I'm also preparing a talk for FOSSASIA 2025 in Bangkok in March - hope to see you there!
-
All of my AI/ML experience
Some people think that AI and ML are "new" - however, the field has been around since the 1960's, and I had my first introduction a bit late in the 1980's. I did some AI/ML research and used some of the breakthroughs during my career so I decided to have a go at listing them.
Autonomous incremental machine learning in a robot world to solve block stacking: "Paradigm-directed computer learning", MSc Thesis, 300 pages, 1984, Waikato University, NZ.
Commonwealth postgraduate research scholarship, UNSW, 1985-89. Developed first-order inductive logic ML algorithms (in temporal domains over 1st order logic, e.g. Prolog), also 1st-order clustering algorithms, very fast rule-based systems, etc.; invited attendee at the 1987 international workshop on ML at UCI; multiple UNSW technical reports published.
In the 1990's, I applied some of the ideas from this work to software engineering (startup), in the form of automated (combination of heuristics and prolog style backtracking) test case generation and execution, automatic protocol generation (for Optus Mailbox system integration contract, again utilising specifications and backtracking to generate the protocol implementation at run-time), and a temporal logic enabled file system for the ABC's "D-Cart" digital radio system (using temporal logic to store and retrieve different audio formats over time based queries - e.g. for audio and transcripts etc).
At CSIRO I developed a distributed Prolog model for water catchments, and an expert system for soil hydraulic property modelling.
Over the following decades, I specialised in distributed systems and performance engineering - there's lots of overlap between AI/ML and MDAs (model-driven architectures) & data engineering (e.g. forecasting and prediction, and optimisation techniques such as bin packing). I also explored using GPUs for Markov models - super fast!
In the last 8 years, I've done Spark ML, anomaly detection, and machine learning over streaming Kafka data with TensorFlow!
-
Machine Learning over Streaming Kafka data
-
The Drone delivery simulation project generates massive amounts of spatiotemporal data, so it's an ideal platform for trying out some real-time streaming Machine Learning. This project resulted in a 6 part blog series and numerous talks.
-
Apache Kafka open source schema registry evaluation (Karapace)
-
A short project to evaluate Karapace, an open source Kafka schema registry service. Resulted in a 6 part blog series (to be published in 2023) covering Apache Avro, how Karapace works with Kafka producers, consumers and clusters, schema compatibility and schema evolution. Fun!
-
Kafka KRaft evaluation
-
I was running the first performance engineering track at ApacheCon NA 2022 last year, so I needed something interesting to talk about. The new Kafka KRaft mode was just out, so the timing was ideal to do some evaluations of the performance and scalability of ZooKeeper vs. KRaft for both data and meta-data workloads and operations. The conclusions were presented at ApacheCon and in a 3 part blog series (parts 1 and 2 published so far).
https://www.linkedin.com/pulse/1st-performance-engineering-track-apachecon-na-new-orleans-brebner/?trk=pulse-article_more-articles_related-content-card
Slides: https://www.apachecon.com/acna2022/slides/04_McCandless_Learning_from_11+.pdf
Blog 1: https://www.instaclustr.com/blog/apache-kafka-kraft-abandons-the-zookeeper-part-1-partitions-and-data-performance/
Blog 2: https://www.instaclustr.com/blog/apache-kafka-kraft-abandons-the-zookeeper-part-2-partitions-and-meta-data-performance/
-
Cadence Drone Delivery Demonstration Application
-
Currently experimenting and blogging about Cadence, an open source scalable fault-tolerant workflow engine focussed on developers with workflows as code. I've come up with a Drone Delivery Application Demo using Cadence and Apache Kafka (to demonstrate multiple Cadence+Kafka integration patterns). The 6-part blog series is now complete and published.
-
Building and Scaling a Robust Zero-code Streaming Data Pipeline with Open Source Technologies
-
This project built a demonstration streaming pipeline for ingesting, indexing, and visualizing some publicly available tidal data using multiple open source technologies including Apache Kafka, Apache Kafka Connect, Apache Camel Kafka Connector, Open Distro for Elasticsearch and Kibana, Prometheus and Grafana. Outputs included configuration examples, blogs, and talks.
-
"Around the World" - globally distributed storage, streaming, and search: A distributed Stock Broker
-
Quick! Grab your top hat, passport, carpet bag stuffed with (mainly) cash, and your valet (if you have one), and join with me on a wild journey around the world in approximately 8 data centers—a new blog series to explore the world of globally distributed storage, streaming, and search with Instaclustr Managed Open Source Technologies.
In this new blog series, we’ll explore globally distributed applications (probably a Stock Broker) in the context of Cloud Location, including many of the drivers and concerns such as correctness and replication (making sure applications work correctly by having the data they need in the correct locations), latency (reducing lag for end users), redundancy and replication (making sure applications are available even if an entire data center fails), cost (understanding how much it costs to run an application in multiple locations, and if the cost can be reduced), and data sovereignty (ensuring customer data is only stored in legal locations).
Surprisingly, Jules Verne captured these elements in Around the World: correctness (proof that they traveled around the world), time (80 days, and selecting locations and routes to travel around the world in the allotted time), success and failure (the wager), money (half Fogg’s fortune was in the carpetbag), and sovereignty (as Detective Fix repeatedly tries to arrest them at locations in the British Empire). By the end of this first blog I wager we’ll have answered our first question of the series: “How many data centers do you need for a globally distributed application?”
-
The Power of Kafka Partitions
-
This blog provides an overview of two fundamental concepts in Apache Kafka: Topics and Partitions. While developing and scaling our Anomalia Machina application we discovered that distributed applications using Kafka and Cassandra clusters require careful tuning to achieve close to linear scalability, and critical variables included the number of Kafka topics and partitions. In this blog, we test that theory and answer questions like “What impact does increasing partitions have on throughput?” and “Is there an optimal number of partitions for a cluster to maximize write throughput?” And more!
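A commonly quoted rule of thumb for a first estimate (not this blog's measured result) is that to reach a target throughput T you need at least max(T/p, T/c) partitions, where p and c are the per-partition producer and consumer throughputs. A sketch with illustrative numbers:

```python
# First-cut partition count estimate from a target throughput and
# measured per-partition producer/consumer throughputs.
import math

def min_partitions(target_mb_s, producer_mb_s, consumer_mb_s):
    """Partitions needed so neither producers nor consumers bottleneck."""
    return math.ceil(max(target_mb_s / producer_mb_s,
                         target_mb_s / consumer_mb_s))

# Target 200 MB/s; each partition sustains 25 MB/s in, 40 MB/s out.
assert min_partitions(200, 25, 40) == 8
```

The blog's experiments then show why this is only a starting point: beyond some count, adding partitions costs throughput rather than gaining it.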
-
Geospatial Anomaly Detection - Terra Locus Anomalia Machina
-
This project explored how we added location data to a scalable real-time anomaly detection application built around Apache Kafka and Cassandra.
Kafka and Cassandra are designed for time-series data, however, it’s not so obvious how they can process geospatial data. In order to find location-specific anomalies, we need a way to represent locations, index locations, and query locations.
We explore alternative geospatial representations including: Latitude/Longitude points, Bounding Boxes, Geohashes, and go vertical with 3D representations, including 3D Geohashes.
For each representation we also explore possible Cassandra implementations including: Clustering columns, Secondary indexes, Denormalized tables, and the Cassandra Lucene Index Plugin.
To conclude we measure and compare the query throughput of some of the solutions, and summarise the results in terms of accuracy vs. performance to answer the question “Which geospatial data representation and Cassandra implementation is best?”
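Geohashes work by interleaving longitude and latitude bits and emitting base-32 characters, so string prefixes correspond to nested bounding boxes. A minimal encoder written for illustration (the standard algorithm, not the project's code):

```python
# Minimal geohash encoder: alternate longitude/latitude bisection bits,
# 5 bits per base-32 character. Longer hashes = smaller bounding boxes,
# and every prefix names the enclosing box.
BASE32 = "0123456789bcdefghjkmnpqrstuvwxyz"

def geohash(lat, lon, precision=11):
    lat_lo, lat_hi = -90.0, 90.0
    lon_lo, lon_hi = -180.0, 180.0
    chars, bit, ch, even = [], 0, 0, True
    while len(chars) < precision:
        if even:  # longitude bit
            mid = (lon_lo + lon_hi) / 2
            if lon >= mid:
                ch, lon_lo = ch * 2 + 1, mid
            else:
                ch, lon_hi = ch * 2, mid
        else:     # latitude bit
            mid = (lat_lo + lat_hi) / 2
            if lat >= mid:
                ch, lat_lo = ch * 2 + 1, mid
            else:
                ch, lat_hi = ch * 2, mid
        even = not even
        bit += 1
        if bit == 5:  # 5 bits = one base-32 character
            chars.append(BASE32[ch])
            bit, ch = 0, 0
    return "".join(chars)

# The classic worked example (57.64911 N, 10.40744 E):
assert geohash(57.64911, 10.40744) == "u4pruydqqvj"
```

The prefix property is what makes geohashes friendly to Cassandra clustering columns: a range query over hash prefixes is a bounding-box query.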
Massively scalable real-time Anomaly Detection with Kafka, Cassandra and Kubernetes
-
Apache Kafka, Apache Cassandra and Kubernetes are open source big data technologies enabling applications and business operations to scale massively and rapidly. While Kafka and Cassandra underpin the data layer of the stack, providing the capability to stream, disseminate, store and retrieve data at very low latency, Kubernetes is a container orchestration technology that helps in automated application deployment and scaling of application clusters.
In this presentation, we will reveal how we architected a massive scale deployment of a streaming data pipeline with Kafka and Cassandra to cater to an example anomaly detection application running on a Kubernetes cluster and generating and processing massive amounts of events.
Anomaly detection is a method used to detect unusual events in an event stream. It is widely used in a range of applications such as financial fraud detection, security, threat detection, website user analytics, sensors, IoT, and system health monitoring. When such applications operate at massive scale, generating millions or billions of events, they impose significant computational, performance and scalability challenges on anomaly detection algorithms and data layer technologies. We will demonstrate the scalability, performance and cost effectiveness of Apache Kafka, Cassandra and Kubernetes, with results from our experiments allowing the anomaly detection application to scale to 19 billion anomaly checks per day.
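The application's actual detector isn't reproduced here; a minimal z-score style check over a sliding window of recent values (illustrative only) conveys the per-event shape of the work that had to scale to billions of checks per day:

```python
# Toy anomaly check: flag a value that is far (in standard deviations)
# from the recent history held in a fixed-size sliding window.
from collections import deque
from statistics import mean, stdev

class Detector:
    def __init__(self, window=50, threshold=3.0):
        self.history = deque(maxlen=window)
        self.threshold = threshold

    def check(self, value):
        """Return True if value is anomalous vs. recent history."""
        anomalous = False
        if len(self.history) >= 10:  # need enough history to judge
            mu, sigma = mean(self.history), stdev(self.history)
            if sigma > 0 and abs(value - mu) / sigma > self.threshold:
                anomalous = True
        self.history.append(value)
        return anomalous

d = Detector()
results = [d.check(10.0 + (i % 3) * 0.1) for i in range(30)]
assert not any(results)       # steady stream: nothing flagged
assert d.check(50.0) is True  # sudden spike flagged
```

In the real pipeline each check also triggers a Cassandra read of the key's history, which is where the data layer scalability challenge comes from.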
-
"Kongo"- A scalable real-time Kafka streaming IoT Demonstration Application
-
Join with me in a journey of exploration upriver with "Kongo", a scalable streaming IoT logistics demonstration application using Apache Kafka, the popular open source distributed streaming platform. Along the way you'll discover: an example logistics IoT problem domain (involving the rapid movement of thousands of goods by trucks between warehouses, with real-time checking of complex business and safety rules from sensor data); an overview of the Apache Kafka architecture and components; lessons learned from making critical Kafka application design decisions; an example of Kafka Streams for checking truck load limits; and finish the journey by overcoming final performance challenges and shooting the rapids to scale Kongo on a production Kafka cluster.
-
Apache Ignite evaluation
-
At the start of 2018 I spent a month doing an internal evaluation of Apache Ignite. It's a "multi-model" distributed database, persistence, caching and processing ("grid") system. I did some benchmarking of a few use cases (including Java object caching, SQL and self-joins, and Apache Cassandra or Ignite persistence), and wrote an internal report.
-
"2001: A Space Odyssey" themed Introduction to Apache Cassandra and Spark
-
"2001: A Space Odyssey" themed Introduction to Apache Cassandra and Spark and Machine Learning! Included creating and connecting to a Cassandra cluster, adding Spark, exploring some real Cassandra performance monitoring data (from around 600 nodes), and trying to predict when a node is likely to have long response times.
Cassandra Cluster Creation in under 10 minutes (1st contact with the monolith)
Consulting Cassandra: Second Contact with the Monolith
Hello Cassandra! A Java Client Example
Third contact with a Monolith – Long Range Sensor Scan - Exploring some real Cassandra monitoring data with Buckets and Materialised Views
Third Contact With a Monolith – Beam Me Down Scotty - using regression analysis to predict Garbage Collections from Heap Use metrics
Third Contact with a Monolith - In the Pod - Initial experiments using Apache SPARK and MLLib to predict Garbage Collections
Fourth Contact with a Monolith - Using DataFrames, ML Pipelines and Scala to predict which Cassandra nodes had increased response times.
Behind the Scenes - how we pre-processed the raw metric data to get it into a format to learn from, using pivot
A Luxury Voyage of (Data) Exploration by Apache Zeppelin - what does the data look like using a data notebook?
Spark Structured Streaming with DataFrames
Organizations
-
Apache Software Foundation
Member
- Present
I was honoured to be elected as a Member of the Apache Software Foundation on 7 March 2025, which became official on 25 March 2025. https://www.apache.org/foundation/members But what is an ASF member? More info here: https://www.apache.org/foundation/governance/members.html
-
International Conference on Performance Engineering (ICPE 2020)
Industry Papers Program Committee Member
https://icpe2020.spec.org/ The International Conference on Performance Engineering (ICPE) originated eleven years ago from the fusion of an ACM workshop on software and performance prediction and a SPEC workshop focused on benchmarking and performance evaluation. ICPE continues true to its origins with focus both on software performance modeling, prediction, and measurement as well as on benchmark-based performance evaluation. The areas to which such principles are applied have evolved over the years with the technological evolution in academia and industry. ICPE contributions appear at all levels of system and software design, performance modeling, and measurements of performance, from the cloud’s core to edge, from mobile devices to major data centers, from web applications to scientific applications. The ICPE focus on engineering performance means that industrial practitioners and academics that participate in ICPE are interested in quantifying the performance impact of all aspects of complex systems design and implementation. Length of design cycles, life-time maintenance issues, quality of experience, costs to delivering a system or service are also the focus of the intellectual curiosity of ICPE participants. In 2020 Edmonton is happy to host the ICPE community for their annual meeting.
-
SPEC Research Group
Member
NICTA representative on the SPEC Research Group. The SPEC Research Group (RG) is a group within the Standard Performance Evaluation Corporation (SPEC) established to serve as a platform for collaborative research efforts in the area of quantitative system evaluation and analysis, fostering the interaction between industry and academia in the field. The scope of the group includes computer benchmarking, performance evaluation, and experimental system analysis in general, considering both classical performance metrics such as response time, throughput, scalability and efficiency, as well as other non-functional system properties included under the term dependability, e.g., availability, reliability, and security. The conducted research efforts span the design of metrics for system evaluation as well as the development of methodologies, techniques and tools for measurement, load testing, profiling, workload characterization, dependability and efficiency evaluation of computing systems. https://research.spec.org/ https://www.spec.org/news/rgpressrelease.html
-
International Conference on Software Engineering (ICSE)
Software Engineering in Practice Track Committee Member (2012)
ICSE, the International Conference on Software Engineering, is the premier software engineering conference, providing a forum for researchers, practitioners and educators to present and discuss the most recent innovations, trends, experiences and issues in the field of software engineering. ICSE is the premier venue for dialogue between software engineering researchers and practicing software engineers. At ICSE 2012, we will continue this tradition and bring together software engineering practitioners and researchers from industry and academia at the Software Engineering in Practice Track. https://files.ifi.uzh.ch/icseweb/call-for-contributions/software-engineering-in-practice/
-
Standard Performance Evaluation Corporation
Primary Representative for SPEC associate member, Commonwealth Scientific and Industrial Research Organisation (CSIRO)
CSIRO primary representative on the SPEC Java Committee; contributed to the development of the SPECjAppServer2001 and SPECjAppServer2002 benchmarks, and review of benchmark submissions. https://www.spec.org/