Jan 28 01:17:41.853821 kernel: Linux version 6.12.66-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Tue Jan 27 22:22:24 -00 2026 Jan 28 01:17:41.853849 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=71544b7bf64a92b2aba342c16b083723a12bedf106d3ddb24ccb63046196f1b3 Jan 28 01:17:41.853865 kernel: BIOS-provided physical RAM map: Jan 28 01:17:41.853875 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Jan 28 01:17:41.853882 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000786cdfff] usable Jan 28 01:17:41.853889 kernel: BIOS-e820: [mem 0x00000000786ce000-0x000000007894dfff] reserved Jan 28 01:17:41.853898 kernel: BIOS-e820: [mem 0x000000007894e000-0x000000007895dfff] ACPI data Jan 28 01:17:41.853905 kernel: BIOS-e820: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS Jan 28 01:17:41.853912 kernel: BIOS-e820: [mem 0x00000000789de000-0x000000007c97bfff] usable Jan 28 01:17:41.853919 kernel: BIOS-e820: [mem 0x000000007c97c000-0x000000007c9fffff] reserved Jan 28 01:17:41.853929 kernel: NX (Execute Disable) protection: active Jan 28 01:17:41.853936 kernel: APIC: Static calls initialized Jan 28 01:17:41.853943 kernel: e820: update [mem 0x768c0018-0x768c8e57] usable ==> usable Jan 28 01:17:41.853951 kernel: extended physical RAM map: Jan 28 01:17:41.853960 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable Jan 28 01:17:41.853970 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000768c0017] usable Jan 28 01:17:41.853978 kernel: reserve setup_data: [mem 0x00000000768c0018-0x00000000768c8e57] usable Jan 28 01:17:41.853986 kernel: reserve setup_data: [mem 0x00000000768c8e58-0x00000000786cdfff] usable Jan 28 01:17:41.853994 kernel: reserve setup_data: [mem 0x00000000786ce000-0x000000007894dfff] reserved Jan 28 01:17:41.854002 kernel: reserve setup_data: [mem 0x000000007894e000-0x000000007895dfff] ACPI data Jan 28 01:17:41.854010 kernel: reserve setup_data: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS Jan 28 01:17:41.854019 kernel: reserve setup_data: [mem 0x00000000789de000-0x000000007c97bfff] usable Jan 28 01:17:41.854027 kernel: reserve setup_data: [mem 0x000000007c97c000-0x000000007c9fffff] reserved Jan 28 01:17:41.854035 kernel: efi: EFI v2.7 by EDK II Jan 28 01:17:41.854043 kernel: efi: SMBIOS=0x7886a000 ACPI=0x7895d000 ACPI 2.0=0x7895d014 MEMATTR=0x77015518 Jan 28 01:17:41.854053 kernel: secureboot: Secure boot disabled Jan 28 01:17:41.854061 kernel: SMBIOS 2.7 present. 
Jan 28 01:17:41.854069 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017 Jan 28 01:17:41.854077 kernel: DMI: Memory slots populated: 1/1 Jan 28 01:17:41.854085 kernel: Hypervisor detected: KVM Jan 28 01:17:41.854093 kernel: last_pfn = 0x7c97c max_arch_pfn = 0x400000000 Jan 28 01:17:41.854101 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jan 28 01:17:41.854109 kernel: kvm-clock: using sched offset of 6449732952 cycles Jan 28 01:17:41.854118 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jan 28 01:17:41.854127 kernel: tsc: Detected 2499.998 MHz processor Jan 28 01:17:41.854135 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 28 01:17:41.854146 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 28 01:17:41.854154 kernel: last_pfn = 0x7c97c max_arch_pfn = 0x400000000 Jan 28 01:17:41.854163 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Jan 28 01:17:41.854171 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 28 01:17:41.854183 kernel: Using GB pages for direct mapping Jan 28 01:17:41.854194 kernel: ACPI: Early table checksum verification disabled Jan 28 01:17:41.854203 kernel: ACPI: RSDP 0x000000007895D014 000024 (v02 AMAZON) Jan 28 01:17:41.854212 kernel: ACPI: XSDT 0x000000007895C0E8 00006C (v01 AMAZON AMZNFACP 00000001 01000013) Jan 28 01:17:41.854221 kernel: ACPI: FACP 0x0000000078955000 000114 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001) Jan 28 01:17:41.854230 kernel: ACPI: DSDT 0x0000000078956000 00115A (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001) Jan 28 01:17:41.854238 kernel: ACPI: FACS 0x00000000789D0000 000040 Jan 28 01:17:41.854249 kernel: ACPI: WAET 0x000000007895B000 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001) Jan 28 01:17:41.854258 kernel: ACPI: SLIT 0x000000007895A000 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001) Jan 28 01:17:41.854267 kernel: ACPI: APIC 0x0000000078959000 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001) Jan 28 01:17:41.854276 kernel: ACPI: SRAT 0x0000000078958000 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001) Jan 28 01:17:41.854285 kernel: ACPI: HPET 0x0000000078954000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001) Jan 28 01:17:41.854293 kernel: ACPI: SSDT 0x0000000078953000 000759 (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001) Jan 28 01:17:41.854302 kernel: ACPI: SSDT 0x0000000078952000 00007F (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001) Jan 28 01:17:41.854314 kernel: ACPI: BGRT 0x0000000078951000 000038 (v01 AMAZON AMAZON 00000002 01000013) Jan 28 01:17:41.854322 kernel: ACPI: Reserving FACP table memory at [mem 0x78955000-0x78955113] Jan 28 01:17:41.854331 kernel: ACPI: Reserving DSDT table memory at [mem 0x78956000-0x78957159] Jan 28 01:17:41.854355 kernel: ACPI: Reserving FACS table memory at [mem 0x789d0000-0x789d003f] Jan 28 01:17:41.854364 kernel: ACPI: Reserving WAET table memory at [mem 0x7895b000-0x7895b027] Jan 28 01:17:41.854372 kernel: ACPI: Reserving SLIT table memory at [mem 0x7895a000-0x7895a06b] Jan 28 01:17:41.854381 kernel: ACPI: Reserving APIC table memory at [mem 0x78959000-0x78959075] Jan 28 01:17:41.854392 kernel: ACPI: Reserving SRAT table memory at [mem 0x78958000-0x7895809f] Jan 28 01:17:41.854401 kernel: ACPI: Reserving HPET table memory at [mem 0x78954000-0x78954037] Jan 28 01:17:41.854410 kernel: ACPI: Reserving SSDT table memory at [mem 0x78953000-0x78953758] Jan 28 01:17:41.854419 kernel: ACPI: Reserving SSDT table memory at [mem 
0x78952000-0x7895207e] Jan 28 01:17:41.854428 kernel: ACPI: Reserving BGRT table memory at [mem 0x78951000-0x78951037] Jan 28 01:17:41.854436 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff] Jan 28 01:17:41.854445 kernel: NUMA: Initialized distance table, cnt=1 Jan 28 01:17:41.854454 kernel: NODE_DATA(0) allocated [mem 0x7a8eedc0-0x7a8f5fff] Jan 28 01:17:41.854465 kernel: Zone ranges: Jan 28 01:17:41.854474 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 28 01:17:41.854483 kernel: DMA32 [mem 0x0000000001000000-0x000000007c97bfff] Jan 28 01:17:41.854491 kernel: Normal empty Jan 28 01:17:41.854500 kernel: Device empty Jan 28 01:17:41.854509 kernel: Movable zone start for each node Jan 28 01:17:41.854518 kernel: Early memory node ranges Jan 28 01:17:41.854529 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Jan 28 01:17:41.854537 kernel: node 0: [mem 0x0000000000100000-0x00000000786cdfff] Jan 28 01:17:41.854546 kernel: node 0: [mem 0x00000000789de000-0x000000007c97bfff] Jan 28 01:17:41.854555 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007c97bfff] Jan 28 01:17:41.854564 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 28 01:17:41.854584 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Jan 28 01:17:41.854593 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges Jan 28 01:17:41.854602 kernel: On node 0, zone DMA32: 13956 pages in unavailable ranges Jan 28 01:17:41.854613 kernel: ACPI: PM-Timer IO Port: 0xb008 Jan 28 01:17:41.854621 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jan 28 01:17:41.854630 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23 Jan 28 01:17:41.854639 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jan 28 01:17:41.854647 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 28 01:17:41.854656 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jan 28 01:17:41.854665 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jan 28 01:17:41.854675 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 28 01:17:41.854684 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jan 28 01:17:41.854692 kernel: TSC deadline timer available Jan 28 01:17:41.854701 kernel: CPU topo: Max. logical packages: 1 Jan 28 01:17:41.854709 kernel: CPU topo: Max. logical dies: 1 Jan 28 01:17:41.854718 kernel: CPU topo: Max. dies per package: 1 Jan 28 01:17:41.854726 kernel: CPU topo: Max. threads per core: 2 Jan 28 01:17:41.854735 kernel: CPU topo: Num. cores per package: 1 Jan 28 01:17:41.854745 kernel: CPU topo: Num. 
threads per package: 2 Jan 28 01:17:41.854754 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs Jan 28 01:17:41.854762 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jan 28 01:17:41.854771 kernel: [mem 0x7ca00000-0xffffffff] available for PCI devices Jan 28 01:17:41.854780 kernel: Booting paravirtualized kernel on KVM Jan 28 01:17:41.854788 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 28 01:17:41.854797 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Jan 28 01:17:41.854807 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576 Jan 28 01:17:41.854816 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152 Jan 28 01:17:41.854824 kernel: pcpu-alloc: [0] 0 1 Jan 28 01:17:41.854833 kernel: kvm-guest: PV spinlocks enabled Jan 28 01:17:41.854842 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jan 28 01:17:41.854852 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=71544b7bf64a92b2aba342c16b083723a12bedf106d3ddb24ccb63046196f1b3 Jan 28 01:17:41.854863 kernel: random: crng init done Jan 28 01:17:41.854871 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 28 01:17:41.854880 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jan 28 01:17:41.854889 kernel: Fallback order for Node 0: 0 Jan 28 01:17:41.854897 kernel: Built 1 zonelists, mobility grouping on. Total pages: 509451 Jan 28 01:17:41.854906 kernel: Policy zone: DMA32 Jan 28 01:17:41.854925 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 28 01:17:41.854934 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 28 01:17:41.854943 kernel: Kernel/User page tables isolation: enabled Jan 28 01:17:41.854952 kernel: ftrace: allocating 40128 entries in 157 pages Jan 28 01:17:41.854963 kernel: ftrace: allocated 157 pages with 5 groups Jan 28 01:17:41.854972 kernel: Dynamic Preempt: voluntary Jan 28 01:17:41.854981 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 28 01:17:41.854991 kernel: rcu: RCU event tracing is enabled. Jan 28 01:17:41.855000 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 28 01:17:41.855010 kernel: Trampoline variant of Tasks RCU enabled. Jan 28 01:17:41.855021 kernel: Rude variant of Tasks RCU enabled. Jan 28 01:17:41.855030 kernel: Tracing variant of Tasks RCU enabled. Jan 28 01:17:41.855039 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 28 01:17:41.855048 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 28 01:17:41.855057 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 28 01:17:41.855069 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 28 01:17:41.855078 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. 
Jan 28 01:17:41.855087 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Jan 28 01:17:41.855096 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 28 01:17:41.855105 kernel: Console: colour dummy device 80x25 Jan 28 01:17:41.855129 kernel: printk: legacy console [tty0] enabled Jan 28 01:17:41.855138 kernel: printk: legacy console [ttyS0] enabled Jan 28 01:17:41.855149 kernel: ACPI: Core revision 20240827 Jan 28 01:17:41.855159 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns Jan 28 01:17:41.855168 kernel: APIC: Switch to symmetric I/O mode setup Jan 28 01:17:41.855177 kernel: x2apic enabled Jan 28 01:17:41.855186 kernel: APIC: Switched APIC routing to: physical x2apic Jan 28 01:17:41.855196 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns Jan 28 01:17:41.855205 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499998) Jan 28 01:17:41.855216 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Jan 28 01:17:41.855225 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4 Jan 28 01:17:41.855234 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 28 01:17:41.855243 kernel: Spectre V2 : Mitigation: Retpolines Jan 28 01:17:41.855251 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Jan 28 01:17:41.855260 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! Jan 28 01:17:41.855269 kernel: RETBleed: Vulnerable Jan 28 01:17:41.855278 kernel: Speculative Store Bypass: Vulnerable Jan 28 01:17:41.855287 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode Jan 28 01:17:41.855296 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Jan 28 01:17:41.855307 kernel: GDS: Unknown: Dependent on hypervisor status Jan 28 01:17:41.855315 kernel: active return thunk: its_return_thunk Jan 28 01:17:41.855324 kernel: ITS: Mitigation: Aligned branch/return thunks Jan 28 01:17:41.855341 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 28 01:17:41.855351 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 28 01:17:41.855360 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 28 01:17:41.855369 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers' Jan 28 01:17:41.855377 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR' Jan 28 01:17:41.855386 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Jan 28 01:17:41.855395 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Jan 28 01:17:41.855406 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Jan 28 01:17:41.855415 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers' Jan 28 01:17:41.855424 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 28 01:17:41.855432 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64 Jan 28 01:17:41.855441 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64 Jan 28 01:17:41.855450 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64 Jan 28 01:17:41.855459 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512 Jan 28 01:17:41.855468 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024 Jan 28 01:17:41.855476 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8 Jan 28 
01:17:41.855485 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format. Jan 28 01:17:41.855494 kernel: Freeing SMP alternatives memory: 32K Jan 28 01:17:41.855505 kernel: pid_max: default: 32768 minimum: 301 Jan 28 01:17:41.855514 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Jan 28 01:17:41.855523 kernel: landlock: Up and running. Jan 28 01:17:41.855531 kernel: SELinux: Initializing. Jan 28 01:17:41.855540 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jan 28 01:17:41.855549 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jan 28 01:17:41.855558 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8175M CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x4) Jan 28 01:17:41.855567 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. Jan 28 01:17:41.855576 kernel: signal: max sigframe size: 3632 Jan 28 01:17:41.855585 kernel: rcu: Hierarchical SRCU implementation. Jan 28 01:17:41.855597 kernel: rcu: Max phase no-delay instances is 400. Jan 28 01:17:41.855606 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Jan 28 01:17:41.855615 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jan 28 01:17:41.855624 kernel: smp: Bringing up secondary CPUs ... Jan 28 01:17:41.855633 kernel: smpboot: x86: Booting SMP configuration: Jan 28 01:17:41.855643 kernel: .... node #0, CPUs: #1 Jan 28 01:17:41.855652 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. Jan 28 01:17:41.855672 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Jan 28 01:17:41.855681 kernel: smp: Brought up 1 node, 2 CPUs Jan 28 01:17:41.855691 kernel: smpboot: Total of 2 processors activated (9999.99 BogoMIPS) Jan 28 01:17:41.855700 kernel: Memory: 1924436K/2037804K available (14336K kernel code, 2445K rwdata, 31644K rodata, 15536K init, 2500K bss, 108804K reserved, 0K cma-reserved) Jan 28 01:17:41.855710 kernel: devtmpfs: initialized Jan 28 01:17:41.855719 kernel: x86/mm: Memory block size: 128MB Jan 28 01:17:41.855731 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7895e000-0x789ddfff] (524288 bytes) Jan 28 01:17:41.855740 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 28 01:17:41.855749 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 28 01:17:41.855758 kernel: pinctrl core: initialized pinctrl subsystem Jan 28 01:17:41.855768 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 28 01:17:41.855777 kernel: audit: initializing netlink subsys (disabled) Jan 28 01:17:41.855786 kernel: audit: type=2000 audit(1769563058.472:1): state=initialized audit_enabled=0 res=1 Jan 28 01:17:41.855797 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 28 01:17:41.855807 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 28 01:17:41.855815 kernel: cpuidle: using governor menu Jan 28 01:17:41.855825 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 28 01:17:41.855834 kernel: dca service started, version 1.12.1 Jan 28 01:17:41.855843 kernel: PCI: Using configuration type 1 for base access Jan 28 01:17:41.855852 kernel: kprobes: kprobe jump-optimization is enabled. 
All kprobes are optimized if possible. Jan 28 01:17:41.855862 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 28 01:17:41.855873 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 28 01:17:41.855882 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 28 01:17:41.855891 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 28 01:17:41.855900 kernel: ACPI: Added _OSI(Module Device) Jan 28 01:17:41.855909 kernel: ACPI: Added _OSI(Processor Device) Jan 28 01:17:41.855919 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 28 01:17:41.855928 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded Jan 28 01:17:41.855939 kernel: ACPI: Interpreter enabled Jan 28 01:17:41.855949 kernel: ACPI: PM: (supports S0 S5) Jan 28 01:17:41.855958 kernel: ACPI: Using IOAPIC for interrupt routing Jan 28 01:17:41.855967 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 28 01:17:41.855976 kernel: PCI: Using E820 reservations for host bridge windows Jan 28 01:17:41.855985 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Jan 28 01:17:41.855995 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 28 01:17:41.856195 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Jan 28 01:17:41.856324 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Jan 28 01:17:41.856578 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Jan 28 01:17:41.856590 kernel: acpiphp: Slot [3] registered Jan 28 01:17:41.856600 kernel: acpiphp: Slot [4] registered Jan 28 01:17:41.856609 kernel: acpiphp: Slot [5] registered Jan 28 01:17:41.856622 kernel: acpiphp: Slot [6] registered Jan 28 01:17:41.856631 kernel: acpiphp: Slot [7] registered Jan 28 01:17:41.856640 kernel: acpiphp: Slot [8] registered Jan 28 01:17:41.856649 kernel: acpiphp: Slot [9] registered Jan 28 01:17:41.856658 kernel: acpiphp: Slot [10] registered Jan 28 01:17:41.856667 kernel: acpiphp: Slot [11] registered Jan 28 01:17:41.856676 kernel: acpiphp: Slot [12] registered Jan 28 01:17:41.856687 kernel: acpiphp: Slot [13] registered Jan 28 01:17:41.856703 kernel: acpiphp: Slot [14] registered Jan 28 01:17:41.856714 kernel: acpiphp: Slot [15] registered Jan 28 01:17:41.856724 kernel: acpiphp: Slot [16] registered Jan 28 01:17:41.856733 kernel: acpiphp: Slot [17] registered Jan 28 01:17:41.856742 kernel: acpiphp: Slot [18] registered Jan 28 01:17:41.856751 kernel: acpiphp: Slot [19] registered Jan 28 01:17:41.856762 kernel: acpiphp: Slot [20] registered Jan 28 01:17:41.856771 kernel: acpiphp: Slot [21] registered Jan 28 01:17:41.856781 kernel: acpiphp: Slot [22] registered Jan 28 01:17:41.856789 kernel: acpiphp: Slot [23] registered Jan 28 01:17:41.856799 kernel: acpiphp: Slot [24] registered Jan 28 01:17:41.856808 kernel: acpiphp: Slot [25] registered Jan 28 01:17:41.856817 kernel: acpiphp: Slot [26] registered Jan 28 01:17:41.856826 kernel: acpiphp: Slot [27] registered Jan 28 01:17:41.856838 kernel: acpiphp: Slot [28] registered Jan 28 01:17:41.856847 kernel: acpiphp: Slot [29] registered Jan 28 01:17:41.856856 kernel: acpiphp: Slot [30] registered Jan 28 01:17:41.856865 kernel: acpiphp: Slot [31] registered Jan 28 01:17:41.856875 kernel: PCI host bridge to bus 0000:00 Jan 28 01:17:41.857000 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 28 
01:17:41.857112 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jan 28 01:17:41.857226 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 28 01:17:41.857353 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window] Jan 28 01:17:41.857485 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x2000ffffffff window] Jan 28 01:17:41.857599 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 28 01:17:41.857736 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint Jan 28 01:17:41.857871 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint Jan 28 01:17:41.857999 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000 conventional PCI endpoint Jan 28 01:17:41.858125 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Jan 28 01:17:41.858245 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff Jan 28 01:17:41.858375 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff Jan 28 01:17:41.858501 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff Jan 28 01:17:41.858621 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff Jan 28 01:17:41.858740 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff Jan 28 01:17:41.858859 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff Jan 28 01:17:41.858986 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000 conventional PCI endpoint Jan 28 01:17:41.859106 kernel: pci 0000:00:03.0: BAR 0 [mem 0x80000000-0x803fffff pref] Jan 28 01:17:41.859231 kernel: pci 0000:00:03.0: ROM [mem 0xffff0000-0xffffffff pref] Jan 28 01:17:41.859359 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jan 28 01:17:41.859486 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 PCIe Endpoint Jan 28 01:17:41.859607 kernel: pci 0000:00:04.0: BAR 0 [mem 0x80404000-0x80407fff] Jan 28 01:17:41.859733 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 PCIe Endpoint Jan 28 01:17:41.859856 kernel: pci 0000:00:05.0: BAR 0 [mem 0x80400000-0x80403fff] Jan 28 01:17:41.859869 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jan 28 01:17:41.859879 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jan 28 01:17:41.859889 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jan 28 01:17:41.859898 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jan 28 01:17:41.859907 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Jan 28 01:17:41.859917 kernel: iommu: Default domain type: Translated Jan 28 01:17:41.859929 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 28 01:17:41.859938 kernel: efivars: Registered efivars operations Jan 28 01:17:41.859948 kernel: PCI: Using ACPI for IRQ routing Jan 28 01:17:41.859957 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 28 01:17:41.859966 kernel: e820: reserve RAM buffer [mem 0x768c0018-0x77ffffff] Jan 28 01:17:41.859975 kernel: e820: reserve RAM buffer [mem 0x786ce000-0x7bffffff] Jan 28 01:17:41.859984 kernel: e820: reserve RAM buffer [mem 0x7c97c000-0x7fffffff] Jan 28 01:17:41.860105 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device Jan 28 01:17:41.860241 kernel: pci 0000:00:03.0: vgaarb: bridge control possible Jan 28 01:17:41.860386 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jan 28 01:17:41.860398 kernel: vgaarb: loaded Jan 28 01:17:41.860408 kernel: hpet0: at 
MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0 Jan 28 01:17:41.860417 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter Jan 28 01:17:41.860427 kernel: clocksource: Switched to clocksource kvm-clock Jan 28 01:17:41.860440 kernel: VFS: Disk quotas dquot_6.6.0 Jan 28 01:17:41.860449 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 28 01:17:41.860459 kernel: pnp: PnP ACPI init Jan 28 01:17:41.860468 kernel: pnp: PnP ACPI: found 5 devices Jan 28 01:17:41.860478 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 28 01:17:41.860487 kernel: NET: Registered PF_INET protocol family Jan 28 01:17:41.860496 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 28 01:17:41.860508 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Jan 28 01:17:41.860518 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 28 01:17:41.860527 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Jan 28 01:17:41.860537 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Jan 28 01:17:41.860546 kernel: TCP: Hash tables configured (established 16384 bind 16384) Jan 28 01:17:41.860556 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Jan 28 01:17:41.860565 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Jan 28 01:17:41.860577 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 28 01:17:41.860586 kernel: NET: Registered PF_XDP protocol family Jan 28 01:17:41.860700 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 28 01:17:41.860810 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 28 01:17:41.860921 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 28 01:17:41.861030 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] Jan 28 01:17:41.861139 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x2000ffffffff window] Jan 28 01:17:41.861263 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Jan 28 01:17:41.861275 kernel: PCI: CLS 0 bytes, default 64 Jan 28 01:17:41.861285 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jan 28 01:17:41.861295 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns Jan 28 01:17:41.861304 kernel: clocksource: Switched to clocksource tsc Jan 28 01:17:41.861314 kernel: Initialise system trusted keyrings Jan 28 01:17:41.861323 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Jan 28 01:17:41.861347 kernel: Key type asymmetric registered Jan 28 01:17:41.861357 kernel: Asymmetric key parser 'x509' registered Jan 28 01:17:41.861366 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jan 28 01:17:41.861375 kernel: io scheduler mq-deadline registered Jan 28 01:17:41.861385 kernel: io scheduler kyber registered Jan 28 01:17:41.861394 kernel: io scheduler bfq registered Jan 28 01:17:41.861403 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 28 01:17:41.861423 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 28 01:17:41.861437 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 28 01:17:41.861467 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 28 01:17:41.861477 kernel: i8042: Warning: Keylock active Jan 28 
01:17:41.861487 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 28 01:17:41.861496 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 28 01:17:41.861638 kernel: rtc_cmos 00:00: RTC can wake from S4 Jan 28 01:17:41.861762 kernel: rtc_cmos 00:00: registered as rtc0 Jan 28 01:17:41.861881 kernel: rtc_cmos 00:00: setting system clock to 2026-01-28T01:17:38 UTC (1769563058) Jan 28 01:17:41.861999 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram Jan 28 01:17:41.862031 kernel: intel_pstate: CPU model not supported Jan 28 01:17:41.862043 kernel: efifb: probing for efifb Jan 28 01:17:41.862053 kernel: efifb: framebuffer at 0x80000000, using 1876k, total 1875k Jan 28 01:17:41.862066 kernel: efifb: mode is 800x600x32, linelength=3200, pages=1 Jan 28 01:17:41.862076 kernel: efifb: scrolling: redraw Jan 28 01:17:41.862086 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Jan 28 01:17:41.862096 kernel: Console: switching to colour frame buffer device 100x37 Jan 28 01:17:41.862106 kernel: fb0: EFI VGA frame buffer device Jan 28 01:17:41.862116 kernel: pstore: Using crash dump compression: deflate Jan 28 01:17:41.862126 kernel: pstore: Registered efi_pstore as persistent store backend Jan 28 01:17:41.862139 kernel: NET: Registered PF_INET6 protocol family Jan 28 01:17:41.862149 kernel: Segment Routing with IPv6 Jan 28 01:17:41.862159 kernel: In-situ OAM (IOAM) with IPv6 Jan 28 01:17:41.862169 kernel: NET: Registered PF_PACKET protocol family Jan 28 01:17:41.862179 kernel: Key type dns_resolver registered Jan 28 01:17:41.862189 kernel: IPI shorthand broadcast: enabled Jan 28 01:17:41.862199 kernel: sched_clock: Marking stable (1379001933, 201060196)->(1718695900, -138633771) Jan 28 01:17:41.862211 kernel: registered taskstats version 1 Jan 28 01:17:41.862221 kernel: Loading compiled-in X.509 certificates Jan 28 01:17:41.862231 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.66-flatcar: 0eb3c2aae9988d4ab7f0e142c4f5c61453c9ddb3' Jan 28 01:17:41.862241 kernel: Demotion targets for Node 0: null Jan 28 01:17:41.862251 kernel: Key type .fscrypt registered Jan 28 01:17:41.862261 kernel: Key type fscrypt-provisioning registered Jan 28 01:17:41.862271 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 28 01:17:41.862281 kernel: ima: Allocated hash algorithm: sha1 Jan 28 01:17:41.862293 kernel: ima: No architecture policies found Jan 28 01:17:41.862303 kernel: clk: Disabling unused clocks Jan 28 01:17:41.862313 kernel: Freeing unused kernel image (initmem) memory: 15536K Jan 28 01:17:41.862323 kernel: Write protecting the kernel read-only data: 47104k Jan 28 01:17:41.862351 kernel: Freeing unused kernel image (rodata/data gap) memory: 1124K Jan 28 01:17:41.862361 kernel: Run /init as init process Jan 28 01:17:41.862371 kernel: with arguments: Jan 28 01:17:41.862381 kernel: /init Jan 28 01:17:41.862391 kernel: with environment: Jan 28 01:17:41.862400 kernel: HOME=/ Jan 28 01:17:41.862410 kernel: TERM=linux Jan 28 01:17:41.862521 kernel: nvme nvme0: pci function 0000:00:04.0 Jan 28 01:17:41.862538 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Jan 28 01:17:41.862636 kernel: nvme nvme0: 2/0/0 default/read/poll queues Jan 28 01:17:41.862653 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 28 01:17:41.862663 kernel: GPT:25804799 != 33554431 Jan 28 01:17:41.862672 kernel: GPT:Alternate GPT header not at the end of the disk. 
Jan 28 01:17:41.862684 kernel: GPT:25804799 != 33554431 Jan 28 01:17:41.862694 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 28 01:17:41.862703 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jan 28 01:17:41.862713 kernel: SCSI subsystem initialized Jan 28 01:17:41.862723 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 28 01:17:41.862733 kernel: device-mapper: uevent: version 1.0.3 Jan 28 01:17:41.862743 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jan 28 01:17:41.862755 kernel: device-mapper: verity: sha256 using shash "sha256-generic" Jan 28 01:17:41.862765 kernel: raid6: avx512x4 gen() 18026 MB/s Jan 28 01:17:41.862774 kernel: raid6: avx512x2 gen() 18074 MB/s Jan 28 01:17:41.862784 kernel: raid6: avx512x1 gen() 17976 MB/s Jan 28 01:17:41.862793 kernel: raid6: avx2x4 gen() 17961 MB/s Jan 28 01:17:41.862803 kernel: raid6: avx2x2 gen() 17920 MB/s Jan 28 01:17:41.862813 kernel: raid6: avx2x1 gen() 13984 MB/s Jan 28 01:17:41.862825 kernel: raid6: using algorithm avx512x2 gen() 18074 MB/s Jan 28 01:17:41.862835 kernel: raid6: .... xor() 25562 MB/s, rmw enabled Jan 28 01:17:41.862845 kernel: raid6: using avx512x2 recovery algorithm Jan 28 01:17:41.862854 kernel: xor: automatically using best checksumming function avx Jan 28 01:17:41.862864 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 28 01:17:41.862874 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 28 01:17:41.862884 kernel: BTRFS: device fsid 0f5fa021-4357-40bb-b32a-e1579c5824ad devid 1 transid 39 /dev/mapper/usr (254:0) scanned by mount (152) Jan 28 01:17:41.862896 kernel: BTRFS info (device dm-0): first mount of filesystem 0f5fa021-4357-40bb-b32a-e1579c5824ad Jan 28 01:17:41.862906 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 28 01:17:41.862916 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jan 28 01:17:41.862925 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 28 01:17:41.862935 kernel: BTRFS info (device dm-0): enabling free space tree Jan 28 01:17:41.862945 kernel: loop: module loaded Jan 28 01:17:41.862954 kernel: loop0: detected capacity change from 0 to 100552 Jan 28 01:17:41.862967 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 28 01:17:41.862978 systemd[1]: Successfully made /usr/ read-only. Jan 28 01:17:41.862991 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 28 01:17:41.863002 systemd[1]: Detected virtualization amazon. Jan 28 01:17:41.863012 systemd[1]: Detected architecture x86-64. Jan 28 01:17:41.863022 systemd[1]: Running in initrd. Jan 28 01:17:41.863034 systemd[1]: No hostname configured, using default hostname. Jan 28 01:17:41.863044 systemd[1]: Hostname set to . Jan 28 01:17:41.863054 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Jan 28 01:17:41.863064 systemd[1]: Queued start job for default target initrd.target. Jan 28 01:17:41.863074 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. 
Jan 28 01:17:41.863084 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 28 01:17:41.863097 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 28 01:17:41.863107 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 28 01:17:41.863118 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 28 01:17:41.863129 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 28 01:17:41.863139 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 28 01:17:41.863149 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 28 01:17:41.863162 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 28 01:17:41.863172 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jan 28 01:17:41.863182 systemd[1]: Reached target paths.target - Path Units. Jan 28 01:17:41.863192 systemd[1]: Reached target slices.target - Slice Units. Jan 28 01:17:41.863202 systemd[1]: Reached target swap.target - Swaps. Jan 28 01:17:41.863214 systemd[1]: Reached target timers.target - Timer Units. Jan 28 01:17:41.863228 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 28 01:17:41.863242 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 28 01:17:41.863252 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jan 28 01:17:41.863262 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 28 01:17:41.863272 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jan 28 01:17:41.863282 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 28 01:17:41.863293 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 28 01:17:41.863303 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 28 01:17:41.863315 systemd[1]: Reached target sockets.target - Socket Units. Jan 28 01:17:41.863330 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 28 01:17:41.863357 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 28 01:17:41.863367 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 28 01:17:41.863378 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 28 01:17:41.863388 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jan 28 01:17:41.863401 systemd[1]: Starting systemd-fsck-usr.service... Jan 28 01:17:41.863412 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 28 01:17:41.863422 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 28 01:17:41.863432 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 28 01:17:41.863445 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 28 01:17:41.863455 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 28 01:17:41.863466 systemd[1]: Finished systemd-fsck-usr.service. 
Jan 28 01:17:41.863476 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 28 01:17:41.863512 systemd-journald[287]: Collecting audit messages is enabled. Jan 28 01:17:41.863539 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 28 01:17:41.863550 systemd-journald[287]: Journal started Jan 28 01:17:41.863571 systemd-journald[287]: Runtime Journal (/run/log/journal/ec236d8c033e62bbe4410c177654254b) is 4.7M, max 38M, 33.2M free. Jan 28 01:17:41.871240 systemd[1]: Started systemd-journald.service - Journal Service. Jan 28 01:17:41.871296 kernel: audit: type=1130 audit(1769563061.865:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:17:41.865000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:17:41.869507 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 28 01:17:41.895099 systemd-modules-load[290]: Inserted module 'br_netfilter' Jan 28 01:17:41.896542 kernel: Bridge firewalling registered Jan 28 01:17:41.897000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:17:41.896962 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 28 01:17:41.909561 kernel: audit: type=1130 audit(1769563061.897:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:17:41.904605 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 28 01:17:41.919411 kernel: audit: type=1130 audit(1769563061.910:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:17:41.910000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:17:41.907509 systemd-tmpfiles[302]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jan 28 01:17:41.910592 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 28 01:17:41.929902 kernel: audit: type=1130 audit(1769563061.921:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:17:41.921000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:17:41.919819 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 28 01:17:41.931480 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. 
Jan 28 01:17:41.931000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:17:41.944600 kernel: audit: type=1130 audit(1769563061.931:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:17:41.943137 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 28 01:17:41.946513 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 28 01:17:41.949949 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 28 01:17:41.962788 kernel: audit: type=1130 audit(1769563061.950:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:17:41.962837 kernel: audit: type=1334 audit(1769563061.959:8): prog-id=6 op=LOAD Jan 28 01:17:41.950000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:17:41.959000 audit: BPF prog-id=6 op=LOAD Jan 28 01:17:41.962519 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 28 01:17:41.977000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:17:41.977194 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 28 01:17:41.986447 kernel: audit: type=1130 audit(1769563061.977:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:17:41.992041 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 28 01:17:41.993000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:17:42.001404 kernel: audit: type=1130 audit(1769563061.993:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:17:42.002182 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 28 01:17:42.028725 dracut-cmdline[329]: dracut-109 Jan 28 01:17:42.035444 dracut-cmdline[329]: Using kernel command line parameters: SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=71544b7bf64a92b2aba342c16b083723a12bedf106d3ddb24ccb63046196f1b3 Jan 28 01:17:42.095887 systemd-resolved[314]: Positive Trust Anchors: Jan 28 01:17:42.095902 systemd-resolved[314]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 28 01:17:42.095906 systemd-resolved[314]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Jan 28 01:17:42.095941 systemd-resolved[314]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 28 01:17:42.121384 systemd-resolved[314]: Defaulting to hostname 'linux'. Jan 28 01:17:42.122282 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 28 01:17:42.122000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:17:42.122845 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 28 01:17:42.194366 kernel: Loading iSCSI transport class v2.0-870. Jan 28 01:17:42.289365 kernel: iscsi: registered transport (tcp) Jan 28 01:17:42.368551 kernel: iscsi: registered transport (qla4xxx) Jan 28 01:17:42.368621 kernel: QLogic iSCSI HBA Driver Jan 28 01:17:42.397102 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 28 01:17:42.415807 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 28 01:17:42.426311 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 28 01:17:42.426365 kernel: audit: type=1130 audit(1769563062.416:12): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:17:42.416000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:17:42.420707 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 28 01:17:42.473241 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 28 01:17:42.481065 kernel: audit: type=1130 audit(1769563062.473:13): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:17:42.473000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:17:42.481953 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 28 01:17:42.484896 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 28 01:17:42.523484 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 28 01:17:42.524000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:17:42.531610 kernel: audit: type=1130 audit(1769563062.524:14): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:17:42.531000 audit: BPF prog-id=7 op=LOAD Jan 28 01:17:42.536596 kernel: audit: type=1334 audit(1769563062.531:15): prog-id=7 op=LOAD Jan 28 01:17:42.536650 kernel: audit: type=1334 audit(1769563062.531:16): prog-id=8 op=LOAD Jan 28 01:17:42.531000 audit: BPF prog-id=8 op=LOAD Jan 28 01:17:42.536770 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 28 01:17:42.582113 systemd-udevd[557]: Using default interface naming scheme 'v257'. Jan 28 01:17:42.602508 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 28 01:17:42.604000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:17:42.612496 kernel: audit: type=1130 audit(1769563062.604:17): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:17:42.613551 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 28 01:17:42.640387 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 28 01:17:42.652476 kernel: audit: type=1130 audit(1769563062.641:18): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:17:42.652519 kernel: audit: type=1334 audit(1769563062.642:19): prog-id=9 op=LOAD Jan 28 01:17:42.641000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:17:42.642000 audit: BPF prog-id=9 op=LOAD Jan 28 01:17:42.653524 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 28 01:17:42.663558 dracut-pre-trigger[641]: rd.md=0: removing MD RAID activation Jan 28 01:17:42.699745 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 28 01:17:42.700000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:17:42.709463 kernel: audit: type=1130 audit(1769563062.700:20): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:17:42.712560 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 28 01:17:42.728933 systemd-networkd[665]: lo: Link UP Jan 28 01:17:42.730071 systemd-networkd[665]: lo: Gained carrier Jan 28 01:17:42.731902 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 28 01:17:42.733670 systemd[1]: Reached target network.target - Network. Jan 28 01:17:42.741515 kernel: audit: type=1130 audit(1769563062.733:21): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:17:42.733000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:17:42.787640 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 28 01:17:42.789000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:17:42.792989 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 28 01:17:42.902109 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 28 01:17:42.902462 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 28 01:17:42.904000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:17:42.904918 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 28 01:17:42.908828 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 28 01:17:42.920259 kernel: ena 0000:00:05.0: ENA device version: 0.10 Jan 28 01:17:42.920658 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Jan 28 01:17:42.925178 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy. Jan 28 01:17:42.930600 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80400000, mac addr 06:2b:ad:0e:14:8d Jan 28 01:17:42.932031 (udev-worker)[700]: Network interface NamePolicy= disabled on kernel command line. Jan 28 01:17:42.940084 systemd-networkd[665]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 28 01:17:42.940093 systemd-networkd[665]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 28 01:17:42.945658 systemd-networkd[665]: eth0: Link UP Jan 28 01:17:42.945904 systemd-networkd[665]: eth0: Gained carrier Jan 28 01:17:42.945921 systemd-networkd[665]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 28 01:17:42.959467 systemd-networkd[665]: eth0: DHCPv4 address 172.31.31.26/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jan 28 01:17:42.977729 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 28 01:17:42.979000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:17:42.979000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:17:42.978728 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 28 01:17:42.983488 kernel: cryptd: max_cpu_qlen set to 1000 Jan 28 01:17:42.994537 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Jan 28 01:17:42.999372 kernel: AES CTR mode by8 optimization enabled Jan 28 01:17:43.065851 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input2 Jan 28 01:17:43.066040 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 28 01:17:43.067000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:17:43.108387 kernel: nvme nvme0: using unchecked data buffer Jan 28 01:17:43.213940 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. Jan 28 01:17:43.217204 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 28 01:17:43.239748 disk-uuid[823]: Primary Header is updated. Jan 28 01:17:43.239748 disk-uuid[823]: Secondary Entries is updated. Jan 28 01:17:43.239748 disk-uuid[823]: Secondary Header is updated. Jan 28 01:17:43.301728 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. Jan 28 01:17:43.317720 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jan 28 01:17:43.333327 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. Jan 28 01:17:43.464618 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 28 01:17:43.464000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:17:43.465793 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 28 01:17:43.466374 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 28 01:17:43.467533 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 28 01:17:43.469313 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 28 01:17:43.500757 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 28 01:17:43.501000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:17:44.362204 disk-uuid[824]: Warning: The kernel is still using the old partition table. Jan 28 01:17:44.362204 disk-uuid[824]: The new table will be used at the next reboot or after you Jan 28 01:17:44.362204 disk-uuid[824]: run partprobe(8) or kpartx(8) Jan 28 01:17:44.362204 disk-uuid[824]: The operation has completed successfully. Jan 28 01:17:44.372440 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 28 01:17:44.372000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:17:44.372000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:17:44.372588 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 28 01:17:44.375495 systemd[1]: Starting ignition-setup.service - Ignition (setup)... 
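The disk-uuid step above regenerates GPT identifiers on first boot and warns that the kernel keeps using the old partition table until partprobe(8)/kpartx(8) runs or the machine reboots. The sketch below is not the Flatcar script; under the assumption of a 512-byte-sector /dev/nvme0n1, it only shows where the identifier being replaced lives: the disk GUID inside the primary GPT header at LBA 1.

    # Illustrative: read the primary GPT header (LBA 1) and show the disk GUID,
    # the kind of identifier a first-boot "generate new UUID" step rewrites.
    # /dev/nvme0n1 and 512-byte sectors are assumptions of this sketch.
    import uuid

    with open("/dev/nvme0n1", "rb") as disk:
        disk.seek(512)                # primary GPT header is in LBA 1
        header = disk.read(92)        # standard GPT header length

    assert header[:8] == b"EFI PART"               # GPT signature
    disk_guid = uuid.UUID(bytes_le=header[56:72])  # GUID is stored mixed-endian
    print("current disk GUID:", disk_guid)
    print("candidate new GUID:", uuid.uuid4())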
Jan 28 01:17:44.418450 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (984) Jan 28 01:17:44.418513 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 886243c7-f2f0-4861-ae6f-419cdf70e432 Jan 28 01:17:44.423119 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jan 28 01:17:44.466606 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jan 28 01:17:44.466691 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Jan 28 01:17:44.475450 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 886243c7-f2f0-4861-ae6f-419cdf70e432 Jan 28 01:17:44.475588 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 28 01:17:44.475000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:17:44.477728 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 28 01:17:44.626530 systemd-networkd[665]: eth0: Gained IPv6LL Jan 28 01:17:45.631688 ignition[1003]: Ignition 2.24.0 Jan 28 01:17:45.631704 ignition[1003]: Stage: fetch-offline Jan 28 01:17:45.631781 ignition[1003]: no configs at "/usr/lib/ignition/base.d" Jan 28 01:17:45.631790 ignition[1003]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 28 01:17:45.632067 ignition[1003]: Ignition finished successfully Jan 28 01:17:45.633851 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 28 01:17:45.634000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:17:45.635536 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Jan 28 01:17:45.671623 ignition[1009]: Ignition 2.24.0 Jan 28 01:17:45.671643 ignition[1009]: Stage: fetch Jan 28 01:17:45.671892 ignition[1009]: no configs at "/usr/lib/ignition/base.d" Jan 28 01:17:45.671904 ignition[1009]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 28 01:17:45.672009 ignition[1009]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 28 01:17:45.679538 ignition[1009]: PUT result: OK Jan 28 01:17:45.681463 ignition[1009]: parsed url from cmdline: "" Jan 28 01:17:45.681476 ignition[1009]: no config URL provided Jan 28 01:17:45.681489 ignition[1009]: reading system config file "/usr/lib/ignition/user.ign" Jan 28 01:17:45.681505 ignition[1009]: no config at "/usr/lib/ignition/user.ign" Jan 28 01:17:45.681521 ignition[1009]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 28 01:17:45.681957 ignition[1009]: PUT result: OK Jan 28 01:17:45.682027 ignition[1009]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Jan 28 01:17:45.682493 ignition[1009]: GET result: OK Jan 28 01:17:45.682573 ignition[1009]: parsing config with SHA512: a90f7a8c9de88b7199bc163be9b4aa0f48ffffd4b6ae45b207a140a3b8389c88b46020943e138b52db1ae8619745954af9c9c7360ee9e13fc3a538cf215e7a76 Jan 28 01:17:45.688267 unknown[1009]: fetched base config from "system" Jan 28 01:17:45.688277 unknown[1009]: fetched base config from "system" Jan 28 01:17:45.688608 ignition[1009]: fetch: fetch complete Jan 28 01:17:45.688282 unknown[1009]: fetched user config from "aws" Jan 28 01:17:45.688612 ignition[1009]: fetch: fetch passed Jan 28 01:17:45.688650 ignition[1009]: Ignition finished successfully Jan 28 01:17:45.690851 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 28 01:17:45.690000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:17:45.692097 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 28 01:17:45.728388 ignition[1015]: Ignition 2.24.0 Jan 28 01:17:45.728399 ignition[1015]: Stage: kargs Jan 28 01:17:45.728610 ignition[1015]: no configs at "/usr/lib/ignition/base.d" Jan 28 01:17:45.728618 ignition[1015]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 28 01:17:45.728686 ignition[1015]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 28 01:17:45.729360 ignition[1015]: PUT result: OK Jan 28 01:17:45.731860 ignition[1015]: kargs: kargs passed Jan 28 01:17:45.731928 ignition[1015]: Ignition finished successfully Jan 28 01:17:45.733596 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 28 01:17:45.733000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:17:45.735188 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
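The fetch stage logged here follows the IMDSv2 flow: a session token is obtained with a PUT to /latest/api/token, user-data is then fetched with that token, and Ignition prints a SHA512 digest of the config before parsing it. Below is a minimal standard-library sketch of the same exchange; the endpoint and the 2019-10-01 user-data path are taken from the log, while the 21600-second token TTL is an assumption of the sketch.

    # Minimal sketch of the IMDSv2 exchange seen in the ignition "fetch" stage:
    # PUT for a session token, GET user-data with that token, hash the payload.
    import hashlib
    import urllib.request

    IMDS = "http://169.254.169.254"

    token_req = urllib.request.Request(
        f"{IMDS}/latest/api/token",
        method="PUT",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": "21600"},
    )
    token = urllib.request.urlopen(token_req, timeout=2).read().decode()

    data_req = urllib.request.Request(
        f"{IMDS}/2019-10-01/user-data",
        headers={"X-aws-ec2-metadata-token": token},
    )
    user_data = urllib.request.urlopen(data_req, timeout=2).read()

    # Ignition logs a SHA512 digest of the config it is about to parse.
    print("SHA512:", hashlib.sha512(user_data).hexdigest())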
Jan 28 01:17:45.759782 ignition[1022]: Ignition 2.24.0 Jan 28 01:17:45.759801 ignition[1022]: Stage: disks Jan 28 01:17:45.760071 ignition[1022]: no configs at "/usr/lib/ignition/base.d" Jan 28 01:17:45.760083 ignition[1022]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 28 01:17:45.760186 ignition[1022]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 28 01:17:45.761084 ignition[1022]: PUT result: OK Jan 28 01:17:45.764622 ignition[1022]: disks: disks passed Jan 28 01:17:45.764719 ignition[1022]: Ignition finished successfully Jan 28 01:17:45.766743 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 28 01:17:45.766000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:17:45.767428 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 28 01:17:45.767796 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 28 01:17:45.768312 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 28 01:17:45.768884 systemd[1]: Reached target sysinit.target - System Initialization. Jan 28 01:17:45.769530 systemd[1]: Reached target basic.target - Basic System. Jan 28 01:17:45.771310 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 28 01:17:45.902840 systemd-fsck[1030]: ROOT: clean, 15/1631200 files, 112378/1617920 blocks Jan 28 01:17:45.905727 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 28 01:17:45.907000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:17:45.908769 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 28 01:17:46.159365 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 60a46795-cc10-4076-a709-d039d1c23a6b r/w with ordered data mode. Quota mode: none. Jan 28 01:17:46.159827 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 28 01:17:46.160696 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 28 01:17:46.218350 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 28 01:17:46.220434 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 28 01:17:46.221640 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 28 01:17:46.223500 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 28 01:17:46.224150 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 28 01:17:46.229482 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 28 01:17:46.231266 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
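The fsck summary for ROOT, 15/1631200 files and 112378/1617920 blocks, means the filesystem is essentially empty at this point in first boot; the small computation below just makes the ratios explicit.

    # Ratios from the systemd-fsck summary for ROOT above.
    files_used, files_total = 15, 1_631_200
    blocks_used, blocks_total = 112_378, 1_617_920

    print(f"inodes in use: {files_used / files_total:.4%}")    # 0.0009%
    print(f"blocks in use: {blocks_used / blocks_total:.2%}")  # 6.95%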
Jan 28 01:17:46.244359 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1049) Jan 28 01:17:46.247358 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 886243c7-f2f0-4861-ae6f-419cdf70e432 Jan 28 01:17:46.250352 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jan 28 01:17:46.257522 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jan 28 01:17:46.257585 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Jan 28 01:17:46.259603 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 28 01:17:48.291056 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 28 01:17:48.298036 kernel: kauditd_printk_skb: 15 callbacks suppressed Jan 28 01:17:48.298063 kernel: audit: type=1130 audit(1769563068.290:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:17:48.290000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:17:48.294435 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 28 01:17:48.305746 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 28 01:17:48.317006 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 28 01:17:48.320759 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 886243c7-f2f0-4861-ae6f-419cdf70e432 Jan 28 01:17:48.351624 ignition[1145]: INFO : Ignition 2.24.0 Jan 28 01:17:48.351624 ignition[1145]: INFO : Stage: mount Jan 28 01:17:48.353279 ignition[1145]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 28 01:17:48.353279 ignition[1145]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 28 01:17:48.353279 ignition[1145]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 28 01:17:48.353279 ignition[1145]: INFO : PUT result: OK Jan 28 01:17:48.356366 ignition[1145]: INFO : mount: mount passed Jan 28 01:17:48.356366 ignition[1145]: INFO : Ignition finished successfully Jan 28 01:17:48.362114 kernel: audit: type=1130 audit(1769563068.356:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:17:48.356000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:17:48.356532 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 28 01:17:48.367674 kernel: audit: type=1130 audit(1769563068.362:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:17:48.362000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:17:48.357201 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 28 01:17:48.367791 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 28 01:17:48.404475 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Jan 28 01:17:48.431380 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1157) Jan 28 01:17:48.434538 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 886243c7-f2f0-4861-ae6f-419cdf70e432 Jan 28 01:17:48.434593 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jan 28 01:17:48.444026 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jan 28 01:17:48.444091 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Jan 28 01:17:48.445886 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 28 01:17:48.475432 ignition[1173]: INFO : Ignition 2.24.0 Jan 28 01:17:48.475432 ignition[1173]: INFO : Stage: files Jan 28 01:17:48.477042 ignition[1173]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 28 01:17:48.477042 ignition[1173]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 28 01:17:48.477042 ignition[1173]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 28 01:17:48.477042 ignition[1173]: INFO : PUT result: OK Jan 28 01:17:48.480162 ignition[1173]: DEBUG : files: compiled without relabeling support, skipping Jan 28 01:17:48.511490 ignition[1173]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 28 01:17:48.511490 ignition[1173]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 28 01:17:48.626698 ignition[1173]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 28 01:17:48.627535 ignition[1173]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 28 01:17:48.627535 ignition[1173]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 28 01:17:48.627157 unknown[1173]: wrote ssh authorized keys file for user: core Jan 28 01:17:48.629621 ignition[1173]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jan 28 01:17:48.630283 ignition[1173]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Jan 28 01:17:48.708210 ignition[1173]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 28 01:17:48.935643 ignition[1173]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jan 28 01:17:48.937236 ignition[1173]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 28 01:17:48.937236 ignition[1173]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 28 01:17:48.937236 ignition[1173]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 28 01:17:48.937236 ignition[1173]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 28 01:17:48.937236 ignition[1173]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 28 01:17:48.937236 ignition[1173]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 28 01:17:48.937236 ignition[1173]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 28 
01:17:48.937236 ignition[1173]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 28 01:17:48.943501 ignition[1173]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 28 01:17:48.943501 ignition[1173]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 28 01:17:48.943501 ignition[1173]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jan 28 01:17:48.946379 ignition[1173]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jan 28 01:17:48.946379 ignition[1173]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jan 28 01:17:48.946379 ignition[1173]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 Jan 28 01:17:49.462178 ignition[1173]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 28 01:17:51.185461 ignition[1173]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jan 28 01:17:51.185461 ignition[1173]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 28 01:17:51.188197 ignition[1173]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 28 01:17:51.190570 ignition[1173]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 28 01:17:51.190570 ignition[1173]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 28 01:17:51.190570 ignition[1173]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jan 28 01:17:51.192835 ignition[1173]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jan 28 01:17:51.192835 ignition[1173]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 28 01:17:51.192835 ignition[1173]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 28 01:17:51.192835 ignition[1173]: INFO : files: files passed Jan 28 01:17:51.192835 ignition[1173]: INFO : Ignition finished successfully Jan 28 01:17:51.201300 kernel: audit: type=1130 audit(1769563071.192:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:17:51.192000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:17:51.192377 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 28 01:17:51.195486 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... 
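Among the files-stage operations above, op(9) writes the link /sysroot/etc/extensions/kubernetes.raw pointing at the sysext image fetched in op(a); that link is what lets systemd-sysext merge the Kubernetes extension after switch-root. A sketch of that single step follows; the paths come from the log, and the /sysroot prefix is the initrd's mount of the real root.

    # Illustrative reproduction of the "writing link" step from the files stage.
    import os

    root = "/sysroot"
    target = "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
    link = os.path.join(root, "etc/extensions/kubernetes.raw")

    os.makedirs(os.path.dirname(link), exist_ok=True)
    if not os.path.islink(link):
        os.symlink(target, link)   # target path is resolved after switch-root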
Jan 28 01:17:51.204193 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 28 01:17:51.206952 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 28 01:17:51.216526 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 28 01:17:51.227983 kernel: audit: type=1130 audit(1769563071.216:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:17:51.228013 kernel: audit: type=1131 audit(1769563071.216:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:17:51.216000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:17:51.216000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:17:51.229288 initrd-setup-root-after-ignition[1206]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 28 01:17:51.229288 initrd-setup-root-after-ignition[1206]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 28 01:17:51.231866 initrd-setup-root-after-ignition[1210]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 28 01:17:51.233491 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 28 01:17:51.239944 kernel: audit: type=1130 audit(1769563071.233:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:17:51.233000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:17:51.234078 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 28 01:17:51.241165 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 28 01:17:51.308516 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 28 01:17:51.308641 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 28 01:17:51.320574 kernel: audit: type=1130 audit(1769563071.309:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:17:51.320606 kernel: audit: type=1131 audit(1769563071.309:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:17:51.309000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:17:51.309000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Jan 28 01:17:51.310128 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 28 01:17:51.320936 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 28 01:17:51.322010 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 28 01:17:51.322915 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 28 01:17:51.356607 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 28 01:17:51.364951 kernel: audit: type=1130 audit(1769563071.356:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:17:51.356000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:17:51.364922 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 28 01:17:51.395155 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Jan 28 01:17:51.396253 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 28 01:17:51.396922 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 28 01:17:51.398037 systemd[1]: Stopped target timers.target - Timer Units. Jan 28 01:17:51.398959 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 28 01:17:51.399000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:17:51.399190 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 28 01:17:51.400409 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 28 01:17:51.401310 systemd[1]: Stopped target basic.target - Basic System. Jan 28 01:17:51.402288 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 28 01:17:51.403138 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 28 01:17:51.403961 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 28 01:17:51.404790 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jan 28 01:17:51.405681 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 28 01:17:51.406564 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 28 01:17:51.407414 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 28 01:17:51.408700 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 28 01:17:51.409578 systemd[1]: Stopped target swap.target - Swaps. Jan 28 01:17:51.410383 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 28 01:17:51.410000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:17:51.410622 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 28 01:17:51.411675 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 28 01:17:51.412764 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). 
Jan 28 01:17:51.413513 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 28 01:17:51.413665 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 28 01:17:51.414000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:17:51.414376 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 28 01:17:51.414578 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 28 01:17:51.416000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:17:51.415991 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 28 01:17:51.417000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:17:51.416214 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 28 01:17:51.417072 systemd[1]: ignition-files.service: Deactivated successfully. Jan 28 01:17:51.417271 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 28 01:17:51.419428 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 28 01:17:51.422839 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 28 01:17:51.425016 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 28 01:17:51.425771 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 28 01:17:51.427000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:17:51.427681 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 28 01:17:51.427898 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 28 01:17:51.429000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:17:51.429961 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 28 01:17:51.430126 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 28 01:17:51.431000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:17:51.440087 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 28 01:17:51.440420 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 28 01:17:51.442000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:17:51.442000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:17:51.457780 ignition[1230]: INFO : Ignition 2.24.0 Jan 28 01:17:51.459158 ignition[1230]: INFO : Stage: umount Jan 28 01:17:51.459158 ignition[1230]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 28 01:17:51.459158 ignition[1230]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 28 01:17:51.459158 ignition[1230]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 28 01:17:51.463153 ignition[1230]: INFO : PUT result: OK Jan 28 01:17:51.460800 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 28 01:17:51.466788 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 28 01:17:51.466929 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 28 01:17:51.468000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:17:51.469695 ignition[1230]: INFO : umount: umount passed Jan 28 01:17:51.470716 ignition[1230]: INFO : Ignition finished successfully Jan 28 01:17:51.472600 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 28 01:17:51.472763 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 28 01:17:51.473000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:17:51.473716 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 28 01:17:51.473000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:17:51.473780 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 28 01:17:51.474000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:17:51.474311 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 28 01:17:51.475000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:17:51.474454 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 28 01:17:51.475034 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 28 01:17:51.476000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:17:51.475099 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 28 01:17:51.475804 systemd[1]: Stopped target network.target - Network. Jan 28 01:17:51.476421 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 28 01:17:51.476486 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 28 01:17:51.477094 systemd[1]: Stopped target paths.target - Path Units. Jan 28 01:17:51.477804 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 28 01:17:51.481509 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 28 01:17:51.481879 systemd[1]: Stopped target slices.target - Slice Units. 
Jan 28 01:17:51.482888 systemd[1]: Stopped target sockets.target - Socket Units. Jan 28 01:17:51.483558 systemd[1]: iscsid.socket: Deactivated successfully. Jan 28 01:17:51.483619 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 28 01:17:51.484201 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 28 01:17:51.484248 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 28 01:17:51.486000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:17:51.485519 systemd[1]: systemd-journald-audit.socket: Deactivated successfully. Jan 28 01:17:51.487000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:17:51.485561 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket. Jan 28 01:17:51.487000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:17:51.486189 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 28 01:17:51.486266 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 28 01:17:51.486905 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 28 01:17:51.486962 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 28 01:17:51.487614 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 28 01:17:51.487675 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 28 01:17:51.488402 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 28 01:17:51.489073 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 28 01:17:51.495052 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 28 01:17:51.495000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:17:51.495215 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 28 01:17:51.497000 audit: BPF prog-id=6 op=UNLOAD Jan 28 01:17:51.497723 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 28 01:17:51.497862 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 28 01:17:51.498000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:17:51.500607 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jan 28 01:17:51.501074 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 28 01:17:51.501126 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 28 01:17:51.502000 audit: BPF prog-id=9 op=UNLOAD Jan 28 01:17:51.503038 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 28 01:17:51.503724 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 28 01:17:51.504000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:17:51.503792 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 28 01:17:51.504600 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 28 01:17:51.506000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:17:51.504662 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 28 01:17:51.506946 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 28 01:17:51.507006 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 28 01:17:51.509000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:17:51.510148 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 28 01:17:51.527081 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 28 01:17:51.527314 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 28 01:17:51.528000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:17:51.529067 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 28 01:17:51.529121 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 28 01:17:51.531243 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 28 01:17:51.533000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:17:51.531292 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 28 01:17:51.531885 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 28 01:17:51.534000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:17:51.531946 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 28 01:17:51.533849 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 28 01:17:51.535000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:17:51.533919 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 28 01:17:51.535047 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 28 01:17:51.535111 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 28 01:17:51.540000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:17:51.539268 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 28 01:17:51.539807 systemd[1]: systemd-network-generator.service: Deactivated successfully. 
Jan 28 01:17:51.543000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:17:51.539878 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jan 28 01:17:51.545000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:17:51.540526 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 28 01:17:51.540585 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 28 01:17:51.546000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:17:51.544015 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 28 01:17:51.547000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:17:51.544093 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 28 01:17:51.545767 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 28 01:17:51.545837 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 28 01:17:51.547009 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 28 01:17:51.547074 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 28 01:17:51.549104 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 28 01:17:51.551622 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 28 01:17:51.552000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:17:51.559186 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 28 01:17:51.559323 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 28 01:17:51.559000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:17:51.559000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:17:51.560826 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 28 01:17:51.563140 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 28 01:17:51.620733 systemd[1]: Switching root. Jan 28 01:17:51.691348 systemd-journald[287]: Received SIGTERM from PID 1 (systemd). 
Jan 28 01:17:51.691430 systemd-journald[287]: Journal stopped Jan 28 01:17:55.024993 kernel: SELinux: policy capability network_peer_controls=1 Jan 28 01:17:55.025066 kernel: SELinux: policy capability open_perms=1 Jan 28 01:17:55.025084 kernel: SELinux: policy capability extended_socket_class=1 Jan 28 01:17:55.025114 kernel: SELinux: policy capability always_check_network=0 Jan 28 01:17:55.025128 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 28 01:17:55.025141 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 28 01:17:55.025153 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 28 01:17:55.025170 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 28 01:17:55.025182 kernel: SELinux: policy capability userspace_initial_context=0 Jan 28 01:17:55.025196 systemd[1]: Successfully loaded SELinux policy in 176.041ms. Jan 28 01:17:55.025223 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 6.226ms. Jan 28 01:17:55.025240 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 28 01:17:55.025253 systemd[1]: Detected virtualization amazon. Jan 28 01:17:55.025265 systemd[1]: Detected architecture x86-64. Jan 28 01:17:55.025278 systemd[1]: Detected first boot. Jan 28 01:17:55.025371 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Jan 28 01:17:55.025391 zram_generator::config[1274]: No configuration found. Jan 28 01:17:55.025412 kernel: Guest personality initialized and is inactive Jan 28 01:17:55.025426 kernel: VMCI host device registered (name=vmci, major=10, minor=258) Jan 28 01:17:55.025437 kernel: Initialized host personality Jan 28 01:17:55.025449 kernel: NET: Registered PF_VSOCK protocol family Jan 28 01:17:55.025464 systemd[1]: Populated /etc with preset unit settings. Jan 28 01:17:55.025476 kernel: kauditd_printk_skb: 44 callbacks suppressed Jan 28 01:17:55.025490 kernel: audit: type=1334 audit(1769563074.718:91): prog-id=12 op=LOAD Jan 28 01:17:55.025502 kernel: audit: type=1334 audit(1769563074.718:92): prog-id=3 op=UNLOAD Jan 28 01:17:55.025517 kernel: audit: type=1334 audit(1769563074.718:93): prog-id=13 op=LOAD Jan 28 01:17:55.025528 kernel: audit: type=1334 audit(1769563074.718:94): prog-id=14 op=LOAD Jan 28 01:17:55.025540 kernel: audit: type=1334 audit(1769563074.718:95): prog-id=4 op=UNLOAD Jan 28 01:17:55.025554 kernel: audit: type=1334 audit(1769563074.718:96): prog-id=5 op=UNLOAD Jan 28 01:17:55.025567 kernel: audit: type=1131 audit(1769563074.721:97): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:17:55.025579 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 28 01:17:55.025592 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 28 01:17:55.025605 kernel: audit: type=1130 audit(1769563074.732:98): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:17:55.025617 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. 
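The journal restart above also reprints the systemd 257.9 build string, a run of +/- feature flags such as +PAM +AUDIT +SELINUX -APPARMOR. A short sketch that splits such a string (abbreviated here from the one in the journal) into compiled-in and compiled-out options:

    # Split a systemd feature string into enabled/disabled build options.
    # The string is abbreviated from the one logged above.
    features = "+PAM +AUDIT +SELINUX -APPARMOR +IMA +SECCOMP -GCRYPT +OPENSSL +TPM2 -SYSVINIT"

    flags = features.split()
    enabled = sorted(f[1:] for f in flags if f.startswith("+"))
    disabled = sorted(f[1:] for f in flags if f.startswith("-"))

    print("enabled: ", enabled)
    print("disabled:", disabled)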
Jan 28 01:17:55.025631 kernel: audit: type=1131 audit(1769563074.732:99): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:17:55.025649 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 28 01:17:55.025661 kernel: audit: type=1334 audit(1769563074.742:100): prog-id=12 op=UNLOAD Jan 28 01:17:55.025673 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 28 01:17:55.025698 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 28 01:17:55.025711 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 28 01:17:55.025724 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 28 01:17:55.025739 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 28 01:17:55.025753 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 28 01:17:55.025765 systemd[1]: Created slice user.slice - User and Session Slice. Jan 28 01:17:55.025778 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 28 01:17:55.025791 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 28 01:17:55.025805 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 28 01:17:55.025825 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 28 01:17:55.025838 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 28 01:17:55.025851 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 28 01:17:55.025864 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 28 01:17:55.025878 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 28 01:17:55.025891 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 28 01:17:55.025904 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 28 01:17:55.025919 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 28 01:17:55.025933 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 28 01:17:55.025946 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 28 01:17:55.025959 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 28 01:17:55.025972 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 28 01:17:55.025985 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes. Jan 28 01:17:55.025998 systemd[1]: Reached target slices.target - Slice Units. Jan 28 01:17:55.026012 systemd[1]: Reached target swap.target - Swaps. Jan 28 01:17:55.026025 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 28 01:17:55.026038 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 28 01:17:55.026051 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jan 28 01:17:55.026064 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. 
Jan 28 01:17:55.026077 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket. Jan 28 01:17:55.026091 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 28 01:17:55.026106 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket. Jan 28 01:17:55.026119 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket. Jan 28 01:17:55.026132 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 28 01:17:55.026144 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 28 01:17:55.026157 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 28 01:17:55.026170 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 28 01:17:55.028201 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 28 01:17:55.028224 systemd[1]: Mounting media.mount - External Media Directory... Jan 28 01:17:55.028239 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 28 01:17:55.028259 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 28 01:17:55.028274 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 28 01:17:55.028290 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 28 01:17:55.028304 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 28 01:17:55.028317 systemd[1]: Reached target machines.target - Containers. Jan 28 01:17:55.028330 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 28 01:17:55.028365 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 28 01:17:55.028378 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 28 01:17:55.028391 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 28 01:17:55.028407 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 28 01:17:55.028419 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 28 01:17:55.028431 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 28 01:17:55.028445 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 28 01:17:55.028457 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 28 01:17:55.028472 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 28 01:17:55.028487 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 28 01:17:55.028500 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 28 01:17:55.028512 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 28 01:17:55.028525 systemd[1]: Stopped systemd-fsck-usr.service. Jan 28 01:17:55.028538 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 28 01:17:55.028551 systemd[1]: Starting systemd-journald.service - Journal Service... 
Jan 28 01:17:55.028566 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 28 01:17:55.028580 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 28 01:17:55.028593 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 28 01:17:55.028605 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jan 28 01:17:55.028618 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 28 01:17:55.028634 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 28 01:17:55.028647 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 28 01:17:55.028660 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 28 01:17:55.028673 systemd[1]: Mounted media.mount - External Media Directory. Jan 28 01:17:55.028687 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 28 01:17:55.028702 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 28 01:17:55.028715 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 28 01:17:55.028728 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 28 01:17:55.028741 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 28 01:17:55.028754 kernel: fuse: init (API version 7.41) Jan 28 01:17:55.028768 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 28 01:17:55.028785 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 28 01:17:55.028798 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 28 01:17:55.028811 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 28 01:17:55.028824 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 28 01:17:55.028837 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 28 01:17:55.028889 systemd-journald[1348]: Collecting audit messages is enabled. Jan 28 01:17:55.028921 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 28 01:17:55.028934 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 28 01:17:55.028947 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 28 01:17:55.028962 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 28 01:17:55.028976 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 28 01:17:55.028989 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 28 01:17:55.029004 systemd-journald[1348]: Journal started Jan 28 01:17:55.029028 systemd-journald[1348]: Runtime Journal (/run/log/journal/ec236d8c033e62bbe4410c177654254b) is 4.7M, max 38M, 33.2M free. Jan 28 01:17:54.800000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Jan 28 01:17:54.919000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:17:54.922000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:17:54.926000 audit: BPF prog-id=14 op=UNLOAD Jan 28 01:17:54.926000 audit: BPF prog-id=13 op=UNLOAD Jan 28 01:17:54.927000 audit: BPF prog-id=15 op=LOAD Jan 28 01:17:54.927000 audit: BPF prog-id=16 op=LOAD Jan 28 01:17:54.927000 audit: BPF prog-id=17 op=LOAD Jan 28 01:17:54.991000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:17:54.996000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:17:54.996000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:17:55.003000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:17:55.003000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:17:55.008000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:17:55.008000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:17:55.016000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:17:55.016000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:17:55.019000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jan 28 01:17:55.020000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:17:55.020000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:17:55.030698 systemd[1]: Started systemd-journald.service - Journal Service. 
Jan 28 01:17:55.019000 audit[1348]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffec21bf290 a2=4000 a3=0 items=0 ppid=1 pid=1348 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:17:55.019000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Jan 28 01:17:55.023000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:17:55.026000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:17:55.029000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:17:54.703452 systemd[1]: Queued start job for default target multi-user.target. Jan 28 01:17:55.031000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:17:54.719576 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Jan 28 01:17:54.721929 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 28 01:17:55.032507 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 28 01:17:55.034479 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Jan 28 01:17:55.037477 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 28 01:17:55.039057 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 28 01:17:55.040428 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 28 01:17:55.040462 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 28 01:17:55.042521 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jan 28 01:17:55.045966 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 28 01:17:55.046090 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 28 01:17:55.057520 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 28 01:17:55.060654 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 28 01:17:55.062453 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 28 01:17:55.065522 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 28 01:17:55.065999 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 28 01:17:55.071765 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
Jan 28 01:17:55.078425 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 28 01:17:55.085012 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 28 01:17:55.090000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:17:55.090030 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jan 28 01:17:55.092305 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 28 01:17:55.092978 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 28 01:17:55.101000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:17:55.099955 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 28 01:17:55.115642 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 28 01:17:55.120309 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jan 28 01:17:55.128929 systemd-journald[1348]: Time spent on flushing to /var/log/journal/ec236d8c033e62bbe4410c177654254b is 33.277ms for 1160 entries. Jan 28 01:17:55.128929 systemd-journald[1348]: System Journal (/var/log/journal/ec236d8c033e62bbe4410c177654254b) is 8M, max 588.1M, 580.1M free. Jan 28 01:17:55.181882 systemd-journald[1348]: Received client request to flush runtime journal. Jan 28 01:17:55.181925 kernel: ACPI: bus type drm_connector registered Jan 28 01:17:55.181945 kernel: loop1: detected capacity change from 0 to 111560 Jan 28 01:17:55.157000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:17:55.157000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:17:55.155504 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 28 01:17:55.155687 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 28 01:17:55.183856 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 28 01:17:55.184000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:17:55.185262 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 28 01:17:55.185000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:17:55.210851 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. 
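The flush statistics above work out to roughly 29 µs per journal entry; a quick check of the arithmetic, using only the numbers from the journald message itself:

# Back-of-the-envelope check of the journald flush statistics logged above.
flush_ms = 33.277      # total flush time journald reported
entries = 1160         # entries flushed to /var/log/journal
per_entry_us = flush_ms * 1000 / entries
print(f"{per_entry_us:.1f} us per entry")   # ~28.7 us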
Jan 28 01:17:55.211000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:17:55.212558 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 28 01:17:55.212000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:17:55.229651 systemd-tmpfiles[1387]: ACLs are not supported, ignoring. Jan 28 01:17:55.229670 systemd-tmpfiles[1387]: ACLs are not supported, ignoring. Jan 28 01:17:55.236392 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 28 01:17:55.236000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:17:55.237463 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 28 01:17:55.237000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:17:55.241639 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 28 01:17:55.361833 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 28 01:17:55.361000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:17:55.363000 audit: BPF prog-id=18 op=LOAD Jan 28 01:17:55.363000 audit: BPF prog-id=19 op=LOAD Jan 28 01:17:55.363000 audit: BPF prog-id=20 op=LOAD Jan 28 01:17:55.364542 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer... Jan 28 01:17:55.365000 audit: BPF prog-id=21 op=LOAD Jan 28 01:17:55.368488 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 28 01:17:55.371403 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 28 01:17:55.372000 audit: BPF prog-id=22 op=LOAD Jan 28 01:17:55.374000 audit: BPF prog-id=23 op=LOAD Jan 28 01:17:55.374000 audit: BPF prog-id=24 op=LOAD Jan 28 01:17:55.376000 audit: BPF prog-id=25 op=LOAD Jan 28 01:17:55.376000 audit: BPF prog-id=26 op=LOAD Jan 28 01:17:55.376000 audit: BPF prog-id=27 op=LOAD Jan 28 01:17:55.376061 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager... Jan 28 01:17:55.379830 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 28 01:17:55.409155 systemd-tmpfiles[1430]: ACLs are not supported, ignoring. Jan 28 01:17:55.409507 systemd-tmpfiles[1430]: ACLs are not supported, ignoring. Jan 28 01:17:55.414436 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 28 01:17:55.415000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:17:55.445920 systemd-nsresourced[1431]: Not setting up BPF subsystem, as functionality has been disabled at compile time. Jan 28 01:17:55.447000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:17:55.447786 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager. Jan 28 01:17:55.487000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:17:55.487049 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 28 01:17:55.499362 kernel: loop2: detected capacity change from 0 to 229808 Jan 28 01:17:55.574110 systemd-oomd[1428]: No swap; memory pressure usage will be degraded Jan 28 01:17:55.575807 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer. Jan 28 01:17:55.576000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:17:55.671511 systemd-resolved[1429]: Positive Trust Anchors: Jan 28 01:17:55.671822 systemd-resolved[1429]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 28 01:17:55.671870 systemd-resolved[1429]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Jan 28 01:17:55.671945 systemd-resolved[1429]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 28 01:17:55.675372 kernel: loop3: detected capacity change from 0 to 50784 Jan 28 01:17:55.727427 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 28 01:17:55.779870 systemd-resolved[1429]: Defaulting to hostname 'linux'. Jan 28 01:17:55.781000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:17:55.781493 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 28 01:17:55.782038 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 28 01:17:55.802010 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 28 01:17:55.802000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:17:55.802000 audit: BPF prog-id=8 op=UNLOAD Jan 28 01:17:55.802000 audit: BPF prog-id=7 op=UNLOAD Jan 28 01:17:55.802000 audit: BPF prog-id=28 op=LOAD Jan 28 01:17:55.803000 audit: BPF prog-id=29 op=LOAD Jan 28 01:17:55.804137 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 28 01:17:55.840764 systemd-udevd[1453]: Using default interface naming scheme 'v257'. Jan 28 01:17:55.913797 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 28 01:17:55.914000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:17:55.916000 audit: BPF prog-id=30 op=LOAD Jan 28 01:17:55.918515 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 28 01:17:55.991370 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 28 01:17:56.010365 kernel: loop4: detected capacity change from 0 to 73176 Jan 28 01:17:56.012483 (udev-worker)[1455]: Network interface NamePolicy= disabled on kernel command line. Jan 28 01:17:56.050608 systemd-networkd[1458]: lo: Link UP Jan 28 01:17:56.050617 systemd-networkd[1458]: lo: Gained carrier Jan 28 01:17:56.053000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:17:56.053493 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 28 01:17:56.054387 systemd[1]: Reached target network.target - Network. Jan 28 01:17:56.058306 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jan 28 01:17:56.062511 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 28 01:17:56.082100 systemd-networkd[1458]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 28 01:17:56.082561 systemd-networkd[1458]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 28 01:17:56.089368 kernel: mousedev: PS/2 mouse device common for all mice Jan 28 01:17:56.090542 systemd-networkd[1458]: eth0: Link UP Jan 28 01:17:56.090769 systemd-networkd[1458]: eth0: Gained carrier Jan 28 01:17:56.090799 systemd-networkd[1458]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 28 01:17:56.098522 systemd-networkd[1458]: eth0: DHCPv4 address 172.31.31.26/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jan 28 01:17:56.108809 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jan 28 01:17:56.110000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-persistent-storage comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:17:56.169362 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jan 28 01:17:56.172358 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr Jan 28 01:17:56.188357 kernel: ACPI: button: Power Button [PWRF] Jan 28 01:17:56.191355 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input4 Jan 28 01:17:56.217382 kernel: ACPI: button: Sleep Button [SLPF] Jan 28 01:17:56.285665 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 28 01:17:56.307611 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 28 01:17:56.308460 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 28 01:17:56.309000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:17:56.309000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:17:56.314561 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 28 01:17:56.400376 kernel: loop5: detected capacity change from 0 to 111560 Jan 28 01:17:56.425907 kernel: loop6: detected capacity change from 0 to 229808 Jan 28 01:17:56.463798 kernel: loop7: detected capacity change from 0 to 50784 Jan 28 01:17:56.492373 kernel: loop1: detected capacity change from 0 to 73176 Jan 28 01:17:56.531634 (sd-merge)[1550]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw', 'oem-ami.raw'. Jan 28 01:17:56.540606 (sd-merge)[1550]: Merged extensions into '/usr'. Jan 28 01:17:56.546422 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jan 28 01:17:56.550480 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 28 01:17:56.551415 systemd[1]: Reload requested from client PID 1383 ('systemd-sysext') (unit systemd-sysext.service)... Jan 28 01:17:56.551429 systemd[1]: Reloading... Jan 28 01:17:56.627424 zram_generator::config[1625]: No configuration found. Jan 28 01:17:56.870061 systemd[1]: Reloading finished in 318 ms. Jan 28 01:17:56.903096 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 28 01:17:56.903000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:17:56.904007 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 28 01:17:56.904000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:17:56.906037 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 28 01:17:56.906000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:17:56.915970 systemd[1]: Starting ensure-sysext.service... Jan 28 01:17:56.922000 audit: BPF prog-id=31 op=LOAD Jan 28 01:17:56.920566 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 28 01:17:56.925000 audit: BPF prog-id=15 op=UNLOAD Jan 28 01:17:56.925000 audit: BPF prog-id=32 op=LOAD Jan 28 01:17:56.925000 audit: BPF prog-id=33 op=LOAD Jan 28 01:17:56.925000 audit: BPF prog-id=16 op=UNLOAD Jan 28 01:17:56.925000 audit: BPF prog-id=17 op=UNLOAD Jan 28 01:17:56.925000 audit: BPF prog-id=34 op=LOAD Jan 28 01:17:56.925000 audit: BPF prog-id=35 op=LOAD Jan 28 01:17:56.925000 audit: BPF prog-id=28 op=UNLOAD Jan 28 01:17:56.925000 audit: BPF prog-id=29 op=UNLOAD Jan 28 01:17:56.926000 audit: BPF prog-id=36 op=LOAD Jan 28 01:17:56.926000 audit: BPF prog-id=25 op=UNLOAD Jan 28 01:17:56.926000 audit: BPF prog-id=37 op=LOAD Jan 28 01:17:56.926000 audit: BPF prog-id=38 op=LOAD Jan 28 01:17:56.926000 audit: BPF prog-id=26 op=UNLOAD Jan 28 01:17:56.926000 audit: BPF prog-id=27 op=UNLOAD Jan 28 01:17:56.929000 audit: BPF prog-id=39 op=LOAD Jan 28 01:17:56.929000 audit: BPF prog-id=30 op=UNLOAD Jan 28 01:17:56.932000 audit: BPF prog-id=40 op=LOAD Jan 28 01:17:56.932000 audit: BPF prog-id=22 op=UNLOAD Jan 28 01:17:56.932000 audit: BPF prog-id=41 op=LOAD Jan 28 01:17:56.932000 audit: BPF prog-id=42 op=LOAD Jan 28 01:17:56.932000 audit: BPF prog-id=23 op=UNLOAD Jan 28 01:17:56.932000 audit: BPF prog-id=24 op=UNLOAD Jan 28 01:17:56.933000 audit: BPF prog-id=43 op=LOAD Jan 28 01:17:56.933000 audit: BPF prog-id=21 op=UNLOAD Jan 28 01:17:56.934000 audit: BPF prog-id=44 op=LOAD Jan 28 01:17:56.934000 audit: BPF prog-id=18 op=UNLOAD Jan 28 01:17:56.935000 audit: BPF prog-id=45 op=LOAD Jan 28 01:17:56.935000 audit: BPF prog-id=46 op=LOAD Jan 28 01:17:56.935000 audit: BPF prog-id=19 op=UNLOAD Jan 28 01:17:56.935000 audit: BPF prog-id=20 op=UNLOAD Jan 28 01:17:56.944277 systemd[1]: Reload requested from client PID 1678 ('systemctl') (unit ensure-sysext.service)... Jan 28 01:17:56.944448 systemd[1]: Reloading... Jan 28 01:17:56.948981 systemd-tmpfiles[1679]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jan 28 01:17:56.949024 systemd-tmpfiles[1679]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jan 28 01:17:56.949438 systemd-tmpfiles[1679]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 28 01:17:56.951320 systemd-tmpfiles[1679]: ACLs are not supported, ignoring. Jan 28 01:17:56.951435 systemd-tmpfiles[1679]: ACLs are not supported, ignoring. Jan 28 01:17:56.960181 systemd-tmpfiles[1679]: Detected autofs mount point /boot during canonicalization of boot. Jan 28 01:17:56.960200 systemd-tmpfiles[1679]: Skipping /boot Jan 28 01:17:56.972501 systemd-tmpfiles[1679]: Detected autofs mount point /boot during canonicalization of boot. Jan 28 01:17:56.972518 systemd-tmpfiles[1679]: Skipping /boot Jan 28 01:17:57.057418 zram_generator::config[1718]: No configuration found. Jan 28 01:17:57.283140 systemd[1]: Reloading finished in 338 ms. 
Jan 28 01:17:57.296000 audit: BPF prog-id=47 op=LOAD Jan 28 01:17:57.296000 audit: BPF prog-id=36 op=UNLOAD Jan 28 01:17:57.296000 audit: BPF prog-id=48 op=LOAD Jan 28 01:17:57.297000 audit: BPF prog-id=49 op=LOAD Jan 28 01:17:57.297000 audit: BPF prog-id=37 op=UNLOAD Jan 28 01:17:57.297000 audit: BPF prog-id=38 op=UNLOAD Jan 28 01:17:57.298000 audit: BPF prog-id=50 op=LOAD Jan 28 01:17:57.298000 audit: BPF prog-id=44 op=UNLOAD Jan 28 01:17:57.298000 audit: BPF prog-id=51 op=LOAD Jan 28 01:17:57.298000 audit: BPF prog-id=52 op=LOAD Jan 28 01:17:57.298000 audit: BPF prog-id=45 op=UNLOAD Jan 28 01:17:57.298000 audit: BPF prog-id=46 op=UNLOAD Jan 28 01:17:57.300000 audit: BPF prog-id=53 op=LOAD Jan 28 01:17:57.300000 audit: BPF prog-id=39 op=UNLOAD Jan 28 01:17:57.301000 audit: BPF prog-id=54 op=LOAD Jan 28 01:17:57.308000 audit: BPF prog-id=40 op=UNLOAD Jan 28 01:17:57.308000 audit: BPF prog-id=55 op=LOAD Jan 28 01:17:57.308000 audit: BPF prog-id=56 op=LOAD Jan 28 01:17:57.308000 audit: BPF prog-id=41 op=UNLOAD Jan 28 01:17:57.308000 audit: BPF prog-id=42 op=UNLOAD Jan 28 01:17:57.309000 audit: BPF prog-id=57 op=LOAD Jan 28 01:17:57.309000 audit: BPF prog-id=43 op=UNLOAD Jan 28 01:17:57.309000 audit: BPF prog-id=58 op=LOAD Jan 28 01:17:57.309000 audit: BPF prog-id=59 op=LOAD Jan 28 01:17:57.309000 audit: BPF prog-id=34 op=UNLOAD Jan 28 01:17:57.309000 audit: BPF prog-id=35 op=UNLOAD Jan 28 01:17:57.310000 audit: BPF prog-id=60 op=LOAD Jan 28 01:17:57.310000 audit: BPF prog-id=31 op=UNLOAD Jan 28 01:17:57.311000 audit: BPF prog-id=61 op=LOAD Jan 28 01:17:57.311000 audit: BPF prog-id=62 op=LOAD Jan 28 01:17:57.311000 audit: BPF prog-id=32 op=UNLOAD Jan 28 01:17:57.311000 audit: BPF prog-id=33 op=UNLOAD Jan 28 01:17:57.315426 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 28 01:17:57.315000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:17:57.325488 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 28 01:17:57.364878 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 28 01:17:57.369655 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 28 01:17:57.375494 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 28 01:17:57.381657 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 28 01:17:57.387575 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 28 01:17:57.387943 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 28 01:17:57.391013 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 28 01:17:57.393779 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 28 01:17:57.398000 audit[1776]: SYSTEM_BOOT pid=1776 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Jan 28 01:17:57.397686 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Jan 28 01:17:57.398763 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 28 01:17:57.399052 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 28 01:17:57.399223 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 28 01:17:57.399736 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 28 01:17:57.412296 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 28 01:17:57.413220 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 28 01:17:57.414602 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 28 01:17:57.414794 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 28 01:17:57.414881 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 28 01:17:57.414990 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 28 01:17:57.418523 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 28 01:17:57.419000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:17:57.432130 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 28 01:17:57.432823 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 28 01:17:57.434000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:17:57.434000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:17:57.440000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:17:57.440000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:17:57.438787 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 28 01:17:57.439130 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. 
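Many of the units skipped above were gated on condition checks such as ConditionVirtualization=xen or ConditionPathExists=. If memory serves, systemd records the outcome in a ConditionResult property, so a post-hoc check might look like the sketch below (the property name and the unit chosen are assumptions for illustration):

import subprocess
# Ask systemd whether a unit's condition checks passed on its last start attempt.
out = subprocess.run(
    ["systemctl", "show", "-p", "ConditionResult", "systemd-binfmt.service"],
    capture_output=True, text=True, check=True,
)
print(out.stdout.strip())   # e.g. "ConditionResult=no" when the unit was skipped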
Jan 28 01:17:57.449000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:17:57.449595 systemd[1]: Finished ensure-sysext.service. Jan 28 01:17:57.451928 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 28 01:17:57.452795 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 28 01:17:57.455778 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 28 01:17:57.457620 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 28 01:17:57.457788 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 28 01:17:57.457839 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 28 01:17:57.457895 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 28 01:17:57.457948 systemd[1]: Reached target time-set.target - System Time Set. Jan 28 01:17:57.458638 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 28 01:17:57.459099 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 28 01:17:57.459320 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 28 01:17:57.460000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:17:57.460000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:17:57.461800 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 28 01:17:57.481928 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 28 01:17:57.482000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:17:57.482737 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 28 01:17:57.482951 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 28 01:17:57.483000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:17:57.483000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:17:57.569000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jan 28 01:17:57.569000 audit[1801]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffcdb06f640 a2=420 a3=0 items=0 ppid=1766 pid=1801 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:17:57.569000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 28 01:17:57.570082 augenrules[1801]: No rules Jan 28 01:17:57.571420 systemd[1]: audit-rules.service: Deactivated successfully. Jan 28 01:17:57.571687 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 28 01:17:57.808466 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 28 01:17:57.809124 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 28 01:17:58.002946 systemd-networkd[1458]: eth0: Gained IPv6LL Jan 28 01:17:58.005604 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 28 01:17:58.006225 systemd[1]: Reached target network-online.target - Network is Online. Jan 28 01:18:00.651235 ldconfig[1774]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 28 01:18:00.667568 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 28 01:18:00.679932 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 28 01:18:00.768073 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 28 01:18:00.769756 systemd[1]: Reached target sysinit.target - System Initialization. Jan 28 01:18:00.772022 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 28 01:18:00.773786 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 28 01:18:00.776101 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Jan 28 01:18:00.781108 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 28 01:18:00.782050 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 28 01:18:00.782700 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update. Jan 28 01:18:00.783312 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update. Jan 28 01:18:00.785722 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 28 01:18:00.786733 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 28 01:18:00.786784 systemd[1]: Reached target paths.target - Path Units. Jan 28 01:18:00.787668 systemd[1]: Reached target timers.target - Timer Units. Jan 28 01:18:00.790190 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 28 01:18:00.802407 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 28 01:18:00.809848 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). 
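The PROCTITLE field in the audit record above is the command line of the loading process, hex-encoded with NUL separators; decoding it recovers the auditctl invocation that installed the rules:

# Decode the hex-encoded proctitle from the audit record above.
raw = bytes.fromhex(
    "2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573"
)
print(raw.split(b"\x00"))   # [b'/sbin/auditctl', b'-R', b'/etc/audit/audit.rules']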
Jan 28 01:18:00.812816 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jan 28 01:18:00.815006 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jan 28 01:18:00.830772 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 28 01:18:00.834102 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jan 28 01:18:00.836928 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 28 01:18:00.841311 systemd[1]: Reached target sockets.target - Socket Units. Jan 28 01:18:00.841922 systemd[1]: Reached target basic.target - Basic System. Jan 28 01:18:00.842581 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 28 01:18:00.842806 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 28 01:18:00.846570 systemd[1]: Starting containerd.service - containerd container runtime... Jan 28 01:18:00.852847 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 28 01:18:00.863697 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 28 01:18:00.879176 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 28 01:18:00.892700 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 28 01:18:00.901915 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 28 01:18:00.902565 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 28 01:18:00.922682 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Jan 28 01:18:00.950651 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 01:18:00.979909 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 28 01:18:00.988307 systemd[1]: Started ntpd.service - Network Time Service. Jan 28 01:18:01.000670 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 28 01:18:01.009536 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 28 01:18:01.029115 jq[1818]: false Jan 28 01:18:01.056112 systemd[1]: Starting setup-oem.service - Setup OEM... Jan 28 01:18:01.080386 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 28 01:18:01.085944 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 28 01:18:01.101855 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 28 01:18:01.102915 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 28 01:18:01.104010 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 28 01:18:01.113247 systemd[1]: Starting update-engine.service - Update Engine... Jan 28 01:18:01.120419 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 28 01:18:01.124065 google_oslogin_nss_cache[1820]: oslogin_cache_refresh[1820]: Refreshing passwd entry cache Jan 28 01:18:01.124089 oslogin_cache_refresh[1820]: Refreshing passwd entry cache Jan 28 01:18:01.143232 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. 
Jan 28 01:18:01.145347 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 28 01:18:01.145696 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 28 01:18:01.165847 google_oslogin_nss_cache[1820]: oslogin_cache_refresh[1820]: Failure getting users, quitting Jan 28 01:18:01.172872 oslogin_cache_refresh[1820]: Failure getting users, quitting Jan 28 01:18:01.173647 google_oslogin_nss_cache[1820]: oslogin_cache_refresh[1820]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 28 01:18:01.173647 google_oslogin_nss_cache[1820]: oslogin_cache_refresh[1820]: Refreshing group entry cache Jan 28 01:18:01.172935 oslogin_cache_refresh[1820]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 28 01:18:01.173003 oslogin_cache_refresh[1820]: Refreshing group entry cache Jan 28 01:18:01.230398 google_oslogin_nss_cache[1820]: oslogin_cache_refresh[1820]: Failure getting groups, quitting Jan 28 01:18:01.230398 google_oslogin_nss_cache[1820]: oslogin_cache_refresh[1820]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 28 01:18:01.230165 oslogin_cache_refresh[1820]: Failure getting groups, quitting Jan 28 01:18:01.230186 oslogin_cache_refresh[1820]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 28 01:18:01.231100 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 28 01:18:01.231491 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 28 01:18:01.233083 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Jan 28 01:18:01.239011 jq[1839]: true Jan 28 01:18:01.234658 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Jan 28 01:18:01.247213 extend-filesystems[1819]: Found /dev/nvme0n1p6 Jan 28 01:18:01.261601 systemd[1]: motdgen.service: Deactivated successfully. Jan 28 01:18:01.263328 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 28 01:18:01.311614 jq[1861]: true Jan 28 01:18:01.329972 ntpd[1823]: ntpd 4.2.8p18@1.4062-o Tue Jan 27 21:35:19 UTC 2026 (1): Starting Jan 28 01:18:01.330828 ntpd[1823]: 28 Jan 01:18:01 ntpd[1823]: ntpd 4.2.8p18@1.4062-o Tue Jan 27 21:35:19 UTC 2026 (1): Starting Jan 28 01:18:01.330828 ntpd[1823]: 28 Jan 01:18:01 ntpd[1823]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 28 01:18:01.330828 ntpd[1823]: 28 Jan 01:18:01 ntpd[1823]: ---------------------------------------------------- Jan 28 01:18:01.330828 ntpd[1823]: 28 Jan 01:18:01 ntpd[1823]: ntp-4 is maintained by Network Time Foundation, Jan 28 01:18:01.330828 ntpd[1823]: 28 Jan 01:18:01 ntpd[1823]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 28 01:18:01.330828 ntpd[1823]: 28 Jan 01:18:01 ntpd[1823]: corporation. Support and training for ntp-4 are Jan 28 01:18:01.330828 ntpd[1823]: 28 Jan 01:18:01 ntpd[1823]: available at https://www.nwtime.org/support Jan 28 01:18:01.330828 ntpd[1823]: 28 Jan 01:18:01 ntpd[1823]: ---------------------------------------------------- Jan 28 01:18:01.330049 ntpd[1823]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 28 01:18:01.330060 ntpd[1823]: ---------------------------------------------------- Jan 28 01:18:01.330069 ntpd[1823]: ntp-4 is maintained by Network Time Foundation, Jan 28 01:18:01.330078 ntpd[1823]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 28 01:18:01.330086 ntpd[1823]: corporation. 
Support and training for ntp-4 are Jan 28 01:18:01.330095 ntpd[1823]: available at https://www.nwtime.org/support Jan 28 01:18:01.330104 ntpd[1823]: ---------------------------------------------------- Jan 28 01:18:01.337025 extend-filesystems[1819]: Found /dev/nvme0n1p9 Jan 28 01:18:01.339717 ntpd[1823]: proto: precision = 0.071 usec (-24) Jan 28 01:18:01.340472 ntpd[1823]: 28 Jan 01:18:01 ntpd[1823]: proto: precision = 0.071 usec (-24) Jan 28 01:18:01.345881 ntpd[1823]: basedate set to 2026-01-15 Jan 28 01:18:01.347511 ntpd[1823]: 28 Jan 01:18:01 ntpd[1823]: basedate set to 2026-01-15 Jan 28 01:18:01.347511 ntpd[1823]: 28 Jan 01:18:01 ntpd[1823]: gps base set to 2026-01-18 (week 2402) Jan 28 01:18:01.347511 ntpd[1823]: 28 Jan 01:18:01 ntpd[1823]: Listen and drop on 0 v6wildcard [::]:123 Jan 28 01:18:01.347511 ntpd[1823]: 28 Jan 01:18:01 ntpd[1823]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 28 01:18:01.345915 ntpd[1823]: gps base set to 2026-01-18 (week 2402) Jan 28 01:18:01.346060 ntpd[1823]: Listen and drop on 0 v6wildcard [::]:123 Jan 28 01:18:01.346089 ntpd[1823]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 28 01:18:01.348852 extend-filesystems[1819]: Checking size of /dev/nvme0n1p9 Jan 28 01:18:01.352802 ntpd[1823]: 28 Jan 01:18:01 ntpd[1823]: Listen normally on 2 lo 127.0.0.1:123 Jan 28 01:18:01.352802 ntpd[1823]: 28 Jan 01:18:01 ntpd[1823]: Listen normally on 3 eth0 172.31.31.26:123 Jan 28 01:18:01.352802 ntpd[1823]: 28 Jan 01:18:01 ntpd[1823]: Listen normally on 4 lo [::1]:123 Jan 28 01:18:01.352802 ntpd[1823]: 28 Jan 01:18:01 ntpd[1823]: Listen normally on 5 eth0 [fe80::42b:adff:fe0e:148d%2]:123 Jan 28 01:18:01.352802 ntpd[1823]: 28 Jan 01:18:01 ntpd[1823]: Listening on routing socket on fd #22 for interface updates Jan 28 01:18:01.348564 ntpd[1823]: Listen normally on 2 lo 127.0.0.1:123 Jan 28 01:18:01.353046 update_engine[1835]: I20260128 01:18:01.352056 1835 main.cc:92] Flatcar Update Engine starting Jan 28 01:18:01.348599 ntpd[1823]: Listen normally on 3 eth0 172.31.31.26:123 Jan 28 01:18:01.368006 ntpd[1823]: 28 Jan 01:18:01 ntpd[1823]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 28 01:18:01.368006 ntpd[1823]: 28 Jan 01:18:01 ntpd[1823]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 28 01:18:01.368083 coreos-metadata[1815]: Jan 28 01:18:01.356 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jan 28 01:18:01.368083 coreos-metadata[1815]: Jan 28 01:18:01.366 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Jan 28 01:18:01.348631 ntpd[1823]: Listen normally on 4 lo [::1]:123 Jan 28 01:18:01.348663 ntpd[1823]: Listen normally on 5 eth0 [fe80::42b:adff:fe0e:148d%2]:123 Jan 28 01:18:01.348695 ntpd[1823]: Listening on routing socket on fd #22 for interface updates Jan 28 01:18:01.367647 ntpd[1823]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 28 01:18:01.367681 ntpd[1823]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 28 01:18:01.370047 coreos-metadata[1815]: Jan 28 01:18:01.369 INFO Fetch successful Jan 28 01:18:01.370047 coreos-metadata[1815]: Jan 28 01:18:01.369 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Jan 28 01:18:01.380710 coreos-metadata[1815]: Jan 28 01:18:01.379 INFO Fetch successful Jan 28 01:18:01.380710 coreos-metadata[1815]: Jan 28 01:18:01.379 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Jan 28 01:18:01.385999 coreos-metadata[1815]: Jan 28 01:18:01.382 INFO Fetch successful Jan 28 
01:18:01.385999 coreos-metadata[1815]: Jan 28 01:18:01.382 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Jan 28 01:18:01.385999 coreos-metadata[1815]: Jan 28 01:18:01.384 INFO Fetch successful Jan 28 01:18:01.385999 coreos-metadata[1815]: Jan 28 01:18:01.384 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Jan 28 01:18:01.387849 systemd[1]: Finished setup-oem.service - Setup OEM. Jan 28 01:18:01.392432 coreos-metadata[1815]: Jan 28 01:18:01.388 INFO Fetch failed with 404: resource not found Jan 28 01:18:01.392432 coreos-metadata[1815]: Jan 28 01:18:01.388 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Jan 28 01:18:01.395624 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Jan 28 01:18:01.396229 coreos-metadata[1815]: Jan 28 01:18:01.396 INFO Fetch successful Jan 28 01:18:01.396312 coreos-metadata[1815]: Jan 28 01:18:01.396 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Jan 28 01:18:01.402455 tar[1845]: linux-amd64/LICENSE Jan 28 01:18:01.402455 tar[1845]: linux-amd64/helm Jan 28 01:18:01.404712 dbus-daemon[1816]: [system] SELinux support is enabled Jan 28 01:18:01.405006 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 28 01:18:01.414139 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 28 01:18:01.414186 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 28 01:18:01.415375 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 28 01:18:01.415414 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 28 01:18:01.419780 coreos-metadata[1815]: Jan 28 01:18:01.419 INFO Fetch successful Jan 28 01:18:01.419780 coreos-metadata[1815]: Jan 28 01:18:01.419 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Jan 28 01:18:01.419780 coreos-metadata[1815]: Jan 28 01:18:01.419 INFO Fetch successful Jan 28 01:18:01.419780 coreos-metadata[1815]: Jan 28 01:18:01.419 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Jan 28 01:18:01.420140 coreos-metadata[1815]: Jan 28 01:18:01.419 INFO Fetch successful Jan 28 01:18:01.420140 coreos-metadata[1815]: Jan 28 01:18:01.419 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Jan 28 01:18:01.428058 coreos-metadata[1815]: Jan 28 01:18:01.426 INFO Fetch successful Jan 28 01:18:01.441793 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 28 01:18:01.450084 extend-filesystems[1819]: Resized partition /dev/nvme0n1p9 Jan 28 01:18:01.451890 dbus-daemon[1816]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.3' (uid=244 pid=1458 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jan 28 01:18:01.463531 update_engine[1835]: I20260128 01:18:01.456909 1835 update_check_scheduler.cc:74] Next update check in 3m0s Jan 28 01:18:01.457144 systemd[1]: Started update-engine.service - Update Engine. 
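The coreos-metadata requests above follow the IMDSv2 pattern: a PUT for a session token, then GETs carrying that token. A minimal stdlib sketch of the same two-step exchange, reusing the 2021-01-03 API path seen in the log (the instance-id item is just an example):

import urllib.request

IMDS = "http://169.254.169.254"

# Step 1: obtain an IMDSv2 session token.
req = urllib.request.Request(
    IMDS + "/latest/api/token",
    method="PUT",
    headers={"X-aws-ec2-metadata-token-ttl-seconds": "21600"},
)
token = urllib.request.urlopen(req, timeout=2).read().decode()

# Step 2: fetch a metadata item with the token attached.
req = urllib.request.Request(
    IMDS + "/2021-01-03/meta-data/instance-id",
    headers={"X-aws-ec2-metadata-token": token},
)
print(urllib.request.urlopen(req, timeout=2).read().decode())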
Jan 28 01:18:01.464190 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Jan 28 01:18:01.482217 extend-filesystems[1906]: resize2fs 1.47.3 (8-Jul-2025) Jan 28 01:18:01.499372 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 1617920 to 2604027 blocks Jan 28 01:18:01.514372 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 2604027 Jan 28 01:18:01.547302 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 28 01:18:01.564810 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 28 01:18:01.565744 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 28 01:18:01.577024 extend-filesystems[1906]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Jan 28 01:18:01.577024 extend-filesystems[1906]: old_desc_blocks = 1, new_desc_blocks = 2 Jan 28 01:18:01.577024 extend-filesystems[1906]: The filesystem on /dev/nvme0n1p9 is now 2604027 (4k) blocks long. Jan 28 01:18:01.594286 extend-filesystems[1819]: Resized filesystem in /dev/nvme0n1p9 Jan 28 01:18:01.579788 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 28 01:18:01.580178 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 28 01:18:01.586457 systemd-logind[1833]: Watching system buttons on /dev/input/event2 (Power Button) Jan 28 01:18:01.586490 systemd-logind[1833]: Watching system buttons on /dev/input/event3 (Sleep Button) Jan 28 01:18:01.586516 systemd-logind[1833]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 28 01:18:01.588193 systemd-logind[1833]: New seat seat0. Jan 28 01:18:01.596169 systemd[1]: Started systemd-logind.service - User Login Management. Jan 28 01:18:01.683401 bash[1913]: Updated "/home/core/.ssh/authorized_keys" Jan 28 01:18:01.686648 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 28 01:18:01.696833 systemd[1]: Starting sshkeys.service... Jan 28 01:18:01.718142 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jan 28 01:18:01.727550 dbus-daemon[1816]: [system] Successfully activated service 'org.freedesktop.hostname1' Jan 28 01:18:01.728491 dbus-daemon[1816]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1903 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jan 28 01:18:01.739454 systemd[1]: Starting polkit.service - Authorization Manager... Jan 28 01:18:01.767999 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 28 01:18:01.776171 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 28 01:18:01.883791 amazon-ssm-agent[1893]: Initializing new seelog logger Jan 28 01:18:01.884958 amazon-ssm-agent[1893]: New Seelog Logger Creation Complete Jan 28 01:18:01.884958 amazon-ssm-agent[1893]: 2026/01/28 01:18:01 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 28 01:18:01.884958 amazon-ssm-agent[1893]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 28 01:18:01.885093 amazon-ssm-agent[1893]: 2026/01/28 01:18:01 processing appconfig overrides Jan 28 01:18:01.893167 amazon-ssm-agent[1893]: 2026/01/28 01:18:01 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. 
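The resize2fs figures a few entries above amount to the root filesystem growing from about 6.2 GiB to about 9.9 GiB; the conversion is simply the reported 4k block counts times the block size:

# Convert the EXT4 block counts reported above (4k blocks) into GiB.
BLOCK = 4096
for label, blocks in (("before", 1617920), ("after", 2604027)):
    print(f"{label}: {blocks * BLOCK / 2**30:.2f} GiB")
# before: 6.17 GiB, after: 9.93 GiB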
Jan 28 01:18:01.894208 amazon-ssm-agent[1893]: 2026-01-28 01:18:01.8853 INFO Proxy environment variables: Jan 28 01:18:01.894847 amazon-ssm-agent[1893]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 28 01:18:01.894847 amazon-ssm-agent[1893]: 2026/01/28 01:18:01 processing appconfig overrides Jan 28 01:18:01.902856 amazon-ssm-agent[1893]: 2026/01/28 01:18:01 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 28 01:18:01.903559 amazon-ssm-agent[1893]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 28 01:18:01.903559 amazon-ssm-agent[1893]: 2026/01/28 01:18:01 processing appconfig overrides Jan 28 01:18:01.916367 amazon-ssm-agent[1893]: 2026/01/28 01:18:01 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 28 01:18:01.916367 amazon-ssm-agent[1893]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 28 01:18:01.916367 amazon-ssm-agent[1893]: 2026/01/28 01:18:01 processing appconfig overrides Jan 28 01:18:01.995038 polkitd[1933]: Started polkitd version 126 Jan 28 01:18:01.998908 amazon-ssm-agent[1893]: 2026-01-28 01:18:01.8930 INFO https_proxy: Jan 28 01:18:02.010079 polkitd[1933]: Loading rules from directory /etc/polkit-1/rules.d Jan 28 01:18:02.010618 polkitd[1933]: Loading rules from directory /run/polkit-1/rules.d Jan 28 01:18:02.010687 polkitd[1933]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Jan 28 01:18:02.011109 polkitd[1933]: Loading rules from directory /usr/local/share/polkit-1/rules.d Jan 28 01:18:02.011153 polkitd[1933]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Jan 28 01:18:02.011198 polkitd[1933]: Loading rules from directory /usr/share/polkit-1/rules.d Jan 28 01:18:02.027681 polkitd[1933]: Finished loading, compiling and executing 2 rules Jan 28 01:18:02.029752 systemd[1]: Started polkit.service - Authorization Manager. Jan 28 01:18:02.033764 dbus-daemon[1816]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jan 28 01:18:02.034616 polkitd[1933]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jan 28 01:18:02.042116 coreos-metadata[1934]: Jan 28 01:18:02.041 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jan 28 01:18:02.042886 coreos-metadata[1934]: Jan 28 01:18:02.042 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Jan 28 01:18:02.043771 coreos-metadata[1934]: Jan 28 01:18:02.043 INFO Fetch successful Jan 28 01:18:02.044633 coreos-metadata[1934]: Jan 28 01:18:02.044 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Jan 28 01:18:02.044633 coreos-metadata[1934]: Jan 28 01:18:02.044 INFO Fetch successful Jan 28 01:18:02.048455 unknown[1934]: wrote ssh authorized keys file for user: core Jan 28 01:18:02.082468 systemd-hostnamed[1903]: Hostname set to (transient) Jan 28 01:18:02.082601 systemd-resolved[1429]: System hostname changed to 'ip-172-31-31-26'. Jan 28 01:18:02.101718 amazon-ssm-agent[1893]: 2026-01-28 01:18:01.8931 INFO http_proxy: Jan 28 01:18:02.147733 update-ssh-keys[1961]: Updated "/home/core/.ssh/authorized_keys" Jan 28 01:18:02.151204 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 28 01:18:02.157883 systemd[1]: Finished sshkeys.service. 
Jan 28 01:18:02.197734 amazon-ssm-agent[1893]: 2026-01-28 01:18:01.8931 INFO no_proxy: Jan 28 01:18:02.306073 amazon-ssm-agent[1893]: 2026-01-28 01:18:01.8948 INFO Checking if agent identity type OnPrem can be assumed Jan 28 01:18:02.397270 locksmithd[1905]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 28 01:18:02.398958 containerd[1849]: time="2026-01-28T01:18:02Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jan 28 01:18:02.409355 containerd[1849]: time="2026-01-28T01:18:02.408551360Z" level=info msg="starting containerd" revision=fcd43222d6b07379a4be9786bda52438f0dd16a1 version=v2.1.5 Jan 28 01:18:02.414929 amazon-ssm-agent[1893]: 2026-01-28 01:18:01.8955 INFO Checking if agent identity type EC2 can be assumed Jan 28 01:18:02.505467 containerd[1849]: time="2026-01-28T01:18:02.505418046Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="12.23µs" Jan 28 01:18:02.505609 containerd[1849]: time="2026-01-28T01:18:02.505591161Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jan 28 01:18:02.505702 containerd[1849]: time="2026-01-28T01:18:02.505688591Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jan 28 01:18:02.505762 containerd[1849]: time="2026-01-28T01:18:02.505750399Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jan 28 01:18:02.506931 containerd[1849]: time="2026-01-28T01:18:02.506900835Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jan 28 01:18:02.507085 containerd[1849]: time="2026-01-28T01:18:02.507065972Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 28 01:18:02.512686 containerd[1849]: time="2026-01-28T01:18:02.510599975Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 28 01:18:02.512686 containerd[1849]: time="2026-01-28T01:18:02.510627992Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 28 01:18:02.512686 containerd[1849]: time="2026-01-28T01:18:02.510884127Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 28 01:18:02.512686 containerd[1849]: time="2026-01-28T01:18:02.510901497Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 28 01:18:02.512686 containerd[1849]: time="2026-01-28T01:18:02.510915501Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 28 01:18:02.512686 containerd[1849]: time="2026-01-28T01:18:02.510926321Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 28 01:18:02.512686 containerd[1849]: time="2026-01-28T01:18:02.511089840Z" level=info msg="skip loading plugin" error="EROFS unsupported, please `modprobe 
erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 28 01:18:02.512686 containerd[1849]: time="2026-01-28T01:18:02.511104098Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jan 28 01:18:02.512686 containerd[1849]: time="2026-01-28T01:18:02.511178298Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jan 28 01:18:02.512686 containerd[1849]: time="2026-01-28T01:18:02.511406483Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 28 01:18:02.512686 containerd[1849]: time="2026-01-28T01:18:02.511440507Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 28 01:18:02.513138 containerd[1849]: time="2026-01-28T01:18:02.511453483Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jan 28 01:18:02.513174 amazon-ssm-agent[1893]: 2026-01-28 01:18:02.3292 INFO Agent will take identity from EC2 Jan 28 01:18:02.517395 containerd[1849]: time="2026-01-28T01:18:02.516384194Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jan 28 01:18:02.517832 containerd[1849]: time="2026-01-28T01:18:02.517617669Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jan 28 01:18:02.517832 containerd[1849]: time="2026-01-28T01:18:02.517738197Z" level=info msg="metadata content store policy set" policy=shared Jan 28 01:18:02.530216 containerd[1849]: time="2026-01-28T01:18:02.528887470Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jan 28 01:18:02.530216 containerd[1849]: time="2026-01-28T01:18:02.528965203Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 28 01:18:02.530216 containerd[1849]: time="2026-01-28T01:18:02.529059042Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 28 01:18:02.530216 containerd[1849]: time="2026-01-28T01:18:02.529077530Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jan 28 01:18:02.530216 containerd[1849]: time="2026-01-28T01:18:02.529094537Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jan 28 01:18:02.530216 containerd[1849]: time="2026-01-28T01:18:02.529110048Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jan 28 01:18:02.530216 containerd[1849]: time="2026-01-28T01:18:02.529125633Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jan 28 01:18:02.530216 containerd[1849]: time="2026-01-28T01:18:02.529141373Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jan 28 01:18:02.530216 containerd[1849]: time="2026-01-28T01:18:02.529156701Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jan 28 01:18:02.530216 containerd[1849]: time="2026-01-28T01:18:02.529172073Z" level=info msg="loading plugin" 
id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jan 28 01:18:02.530216 containerd[1849]: time="2026-01-28T01:18:02.529189408Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jan 28 01:18:02.530216 containerd[1849]: time="2026-01-28T01:18:02.529204179Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jan 28 01:18:02.530216 containerd[1849]: time="2026-01-28T01:18:02.529229086Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jan 28 01:18:02.530216 containerd[1849]: time="2026-01-28T01:18:02.529262451Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jan 28 01:18:02.530792 containerd[1849]: time="2026-01-28T01:18:02.529418942Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jan 28 01:18:02.530792 containerd[1849]: time="2026-01-28T01:18:02.529447404Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jan 28 01:18:02.530792 containerd[1849]: time="2026-01-28T01:18:02.529469064Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jan 28 01:18:02.530792 containerd[1849]: time="2026-01-28T01:18:02.529484797Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jan 28 01:18:02.530792 containerd[1849]: time="2026-01-28T01:18:02.529500678Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jan 28 01:18:02.530792 containerd[1849]: time="2026-01-28T01:18:02.529515419Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jan 28 01:18:02.530792 containerd[1849]: time="2026-01-28T01:18:02.529538533Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jan 28 01:18:02.530792 containerd[1849]: time="2026-01-28T01:18:02.529553322Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jan 28 01:18:02.530792 containerd[1849]: time="2026-01-28T01:18:02.529568978Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jan 28 01:18:02.530792 containerd[1849]: time="2026-01-28T01:18:02.529583622Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jan 28 01:18:02.530792 containerd[1849]: time="2026-01-28T01:18:02.529597254Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jan 28 01:18:02.530792 containerd[1849]: time="2026-01-28T01:18:02.529653705Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jan 28 01:18:02.530792 containerd[1849]: time="2026-01-28T01:18:02.529726913Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jan 28 01:18:02.530792 containerd[1849]: time="2026-01-28T01:18:02.529743623Z" level=info msg="Start snapshots syncer" Jan 28 01:18:02.530792 containerd[1849]: time="2026-01-28T01:18:02.529778103Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jan 28 01:18:02.535391 containerd[1849]: time="2026-01-28T01:18:02.530157604Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jan 28 01:18:02.535391 containerd[1849]: time="2026-01-28T01:18:02.533437087Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jan 28 01:18:02.535653 containerd[1849]: time="2026-01-28T01:18:02.533541575Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jan 28 01:18:02.535653 containerd[1849]: time="2026-01-28T01:18:02.533703002Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jan 28 01:18:02.535653 containerd[1849]: time="2026-01-28T01:18:02.533743425Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jan 28 01:18:02.535653 containerd[1849]: time="2026-01-28T01:18:02.533760860Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jan 28 01:18:02.535653 containerd[1849]: time="2026-01-28T01:18:02.533780882Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jan 28 01:18:02.535653 containerd[1849]: time="2026-01-28T01:18:02.533797044Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jan 28 01:18:02.535653 containerd[1849]: time="2026-01-28T01:18:02.533811896Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jan 28 01:18:02.535653 containerd[1849]: time="2026-01-28T01:18:02.533826648Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jan 28 01:18:02.535653 containerd[1849]: time="2026-01-28T01:18:02.533842495Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jan 28 
01:18:02.539675 containerd[1849]: time="2026-01-28T01:18:02.538033531Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jan 28 01:18:02.539675 containerd[1849]: time="2026-01-28T01:18:02.538126878Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 28 01:18:02.539675 containerd[1849]: time="2026-01-28T01:18:02.538157337Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 28 01:18:02.539675 containerd[1849]: time="2026-01-28T01:18:02.538177677Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 28 01:18:02.543279 containerd[1849]: time="2026-01-28T01:18:02.538197264Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 28 01:18:02.543279 containerd[1849]: time="2026-01-28T01:18:02.541489137Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jan 28 01:18:02.543279 containerd[1849]: time="2026-01-28T01:18:02.541520877Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jan 28 01:18:02.543279 containerd[1849]: time="2026-01-28T01:18:02.541548395Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jan 28 01:18:02.543279 containerd[1849]: time="2026-01-28T01:18:02.541575836Z" level=info msg="runtime interface created" Jan 28 01:18:02.543279 containerd[1849]: time="2026-01-28T01:18:02.541585145Z" level=info msg="created NRI interface" Jan 28 01:18:02.543279 containerd[1849]: time="2026-01-28T01:18:02.541603585Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jan 28 01:18:02.543279 containerd[1849]: time="2026-01-28T01:18:02.541636728Z" level=info msg="Connect containerd service" Jan 28 01:18:02.543279 containerd[1849]: time="2026-01-28T01:18:02.541697548Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 28 01:18:02.543279 containerd[1849]: time="2026-01-28T01:18:02.542780631Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 28 01:18:02.616039 amazon-ssm-agent[1893]: 2026-01-28 01:18:02.3313 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.3.0.0 Jan 28 01:18:02.717366 amazon-ssm-agent[1893]: 2026-01-28 01:18:02.3313 INFO [amazon-ssm-agent] OS: linux, Arch: amd64 Jan 28 01:18:02.815675 amazon-ssm-agent[1893]: 2026-01-28 01:18:02.3313 INFO [amazon-ssm-agent] Starting Core Agent Jan 28 01:18:02.821398 sshd_keygen[1879]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 28 01:18:02.884425 amazon-ssm-agent[1893]: 2026/01/28 01:18:02 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 28 01:18:02.884425 amazon-ssm-agent[1893]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 28 01:18:02.884845 amazon-ssm-agent[1893]: 2026/01/28 01:18:02 processing appconfig overrides Jan 28 01:18:02.904766 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. 
Jan 28 01:18:02.908955 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 28 01:18:02.914311 amazon-ssm-agent[1893]: 2026-01-28 01:18:02.3314 INFO [amazon-ssm-agent] Registrar detected. Attempting registration Jan 28 01:18:02.918364 amazon-ssm-agent[1893]: 2026-01-28 01:18:02.3314 INFO [Registrar] Starting registrar module Jan 28 01:18:02.918364 amazon-ssm-agent[1893]: 2026-01-28 01:18:02.3411 INFO [EC2Identity] Checking disk for registration info Jan 28 01:18:02.918364 amazon-ssm-agent[1893]: 2026-01-28 01:18:02.3412 INFO [EC2Identity] No registration info found for ec2 instance, attempting registration Jan 28 01:18:02.918364 amazon-ssm-agent[1893]: 2026-01-28 01:18:02.3412 INFO [EC2Identity] Generating registration keypair Jan 28 01:18:02.918364 amazon-ssm-agent[1893]: 2026-01-28 01:18:02.8206 INFO [EC2Identity] Checking write access before registering Jan 28 01:18:02.918364 amazon-ssm-agent[1893]: 2026-01-28 01:18:02.8226 INFO [EC2Identity] Registering EC2 instance with Systems Manager Jan 28 01:18:02.921515 amazon-ssm-agent[1893]: 2026-01-28 01:18:02.8839 INFO [EC2Identity] EC2 registration was successful. Jan 28 01:18:02.921515 amazon-ssm-agent[1893]: 2026-01-28 01:18:02.8840 INFO [amazon-ssm-agent] Registration attempted. Resuming core agent startup. Jan 28 01:18:02.921515 amazon-ssm-agent[1893]: 2026-01-28 01:18:02.8841 INFO [CredentialRefresher] Starting credentials refresher loop Jan 28 01:18:02.921515 amazon-ssm-agent[1893]: 2026-01-28 01:18:02.8842 INFO [CredentialRefresher] credentialRefresher has started Jan 28 01:18:02.921515 amazon-ssm-agent[1893]: 2026-01-28 01:18:02.9180 INFO EC2RoleProvider Successfully connected with instance profile role credentials Jan 28 01:18:02.921515 amazon-ssm-agent[1893]: 2026-01-28 01:18:02.9182 INFO [CredentialRefresher] Credentials ready Jan 28 01:18:02.939153 systemd[1]: issuegen.service: Deactivated successfully. Jan 28 01:18:02.939614 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 28 01:18:02.944944 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 28 01:18:02.978926 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 28 01:18:02.983872 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 28 01:18:02.987950 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 28 01:18:02.989747 systemd[1]: Reached target getty.target - Login Prompts. Jan 28 01:18:03.014064 amazon-ssm-agent[1893]: 2026-01-28 01:18:02.9187 INFO [CredentialRefresher] Next credential rotation will be in 29.999989016666667 minutes Jan 28 01:18:03.050428 containerd[1849]: time="2026-01-28T01:18:03.049029592Z" level=info msg="Start subscribing containerd event" Jan 28 01:18:03.050428 containerd[1849]: time="2026-01-28T01:18:03.049132767Z" level=info msg="Start recovering state" Jan 28 01:18:03.050428 containerd[1849]: time="2026-01-28T01:18:03.049350790Z" level=info msg="Start event monitor" Jan 28 01:18:03.050428 containerd[1849]: time="2026-01-28T01:18:03.049370278Z" level=info msg="Start cni network conf syncer for default" Jan 28 01:18:03.050428 containerd[1849]: time="2026-01-28T01:18:03.049379938Z" level=info msg="Start streaming server" Jan 28 01:18:03.050428 containerd[1849]: time="2026-01-28T01:18:03.049390795Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jan 28 01:18:03.050428 containerd[1849]: time="2026-01-28T01:18:03.049590641Z" level=info msg="runtime interface starting up..." 
Jan 28 01:18:03.050428 containerd[1849]: time="2026-01-28T01:18:03.049602189Z" level=info msg="starting plugins..." Jan 28 01:18:03.050428 containerd[1849]: time="2026-01-28T01:18:03.049619755Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jan 28 01:18:03.050813 containerd[1849]: time="2026-01-28T01:18:03.050435423Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 28 01:18:03.050813 containerd[1849]: time="2026-01-28T01:18:03.050559578Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 28 01:18:03.050843 systemd[1]: Started containerd.service - containerd container runtime. Jan 28 01:18:03.056087 tar[1845]: linux-amd64/README.md Jan 28 01:18:03.056491 containerd[1849]: time="2026-01-28T01:18:03.056229139Z" level=info msg="containerd successfully booted in 0.660349s" Jan 28 01:18:03.074975 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 28 01:18:03.933161 amazon-ssm-agent[1893]: 2026-01-28 01:18:03.9330 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Jan 28 01:18:04.033899 amazon-ssm-agent[1893]: 2026-01-28 01:18:03.9378 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2094) started Jan 28 01:18:04.134240 amazon-ssm-agent[1893]: 2026-01-28 01:18:03.9378 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Jan 28 01:18:05.432155 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 01:18:05.433005 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 28 01:18:05.435396 systemd[1]: Startup finished in 3.810s (kernel) + 11.618s (initrd) + 12.863s (userspace) = 28.292s. Jan 28 01:18:05.442962 (kubelet)[2111]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 28 01:18:06.157922 kubelet[2111]: E0128 01:18:06.157836 2111 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 28 01:18:06.160412 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 28 01:18:06.160550 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 28 01:18:06.160898 systemd[1]: kubelet.service: Consumed 1.038s CPU time, 266.9M memory peak. Jan 28 01:18:09.948374 systemd-resolved[1429]: Clock change detected. Flushing caches. Jan 28 01:18:11.463184 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 28 01:18:11.464670 systemd[1]: Started sshd@0-172.31.31.26:22-68.220.241.50:34294.service - OpenSSH per-connection server daemon (68.220.241.50:34294). Jan 28 01:18:11.998815 sshd[2123]: Accepted publickey for core from 68.220.241.50 port 34294 ssh2: RSA SHA256:PpvjS6sxgjOf+voyr4NrS2kTF8aDF7ek5ziSVtOzP6U Jan 28 01:18:12.001020 sshd-session[2123]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:18:12.008066 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 28 01:18:12.009420 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 28 01:18:12.015659 systemd-logind[1833]: New session 1 of user core. 
Jan 28 01:18:12.031739 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 28 01:18:12.035841 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 28 01:18:12.057014 (systemd)[2129]: pam_unix(systemd-user:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:18:12.060415 systemd-logind[1833]: New session 2 of user core. Jan 28 01:18:12.201734 systemd[2129]: Queued start job for default target default.target. Jan 28 01:18:12.212893 systemd[2129]: Created slice app.slice - User Application Slice. Jan 28 01:18:12.212943 systemd[2129]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories. Jan 28 01:18:12.212964 systemd[2129]: Reached target paths.target - Paths. Jan 28 01:18:12.213026 systemd[2129]: Reached target timers.target - Timers. Jan 28 01:18:12.215052 systemd[2129]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 28 01:18:12.217935 systemd[2129]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories... Jan 28 01:18:12.242179 systemd[2129]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories. Jan 28 01:18:12.243466 systemd[2129]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 28 01:18:12.243633 systemd[2129]: Reached target sockets.target - Sockets. Jan 28 01:18:12.243705 systemd[2129]: Reached target basic.target - Basic System. Jan 28 01:18:12.243980 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 28 01:18:12.245325 systemd[2129]: Reached target default.target - Main User Target. Jan 28 01:18:12.245383 systemd[2129]: Startup finished in 178ms. Jan 28 01:18:12.247972 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 28 01:18:12.494631 systemd[1]: Started sshd@1-172.31.31.26:22-68.220.241.50:44904.service - OpenSSH per-connection server daemon (68.220.241.50:44904). Jan 28 01:18:12.919637 sshd[2143]: Accepted publickey for core from 68.220.241.50 port 44904 ssh2: RSA SHA256:PpvjS6sxgjOf+voyr4NrS2kTF8aDF7ek5ziSVtOzP6U Jan 28 01:18:12.921091 sshd-session[2143]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:18:12.927627 systemd-logind[1833]: New session 3 of user core. Jan 28 01:18:12.937053 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 28 01:18:13.152574 sshd[2147]: Connection closed by 68.220.241.50 port 44904 Jan 28 01:18:13.153944 sshd-session[2143]: pam_unix(sshd:session): session closed for user core Jan 28 01:18:13.158278 systemd[1]: sshd@1-172.31.31.26:22-68.220.241.50:44904.service: Deactivated successfully. Jan 28 01:18:13.160551 systemd[1]: session-3.scope: Deactivated successfully. Jan 28 01:18:13.161325 systemd-logind[1833]: Session 3 logged out. Waiting for processes to exit. Jan 28 01:18:13.162713 systemd-logind[1833]: Removed session 3. Jan 28 01:18:13.240430 systemd[1]: Started sshd@2-172.31.31.26:22-68.220.241.50:44920.service - OpenSSH per-connection server daemon (68.220.241.50:44920). Jan 28 01:18:13.678805 sshd[2153]: Accepted publickey for core from 68.220.241.50 port 44920 ssh2: RSA SHA256:PpvjS6sxgjOf+voyr4NrS2kTF8aDF7ek5ziSVtOzP6U Jan 28 01:18:13.680155 sshd-session[2153]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:18:13.685364 systemd-logind[1833]: New session 4 of user core. Jan 28 01:18:13.690974 systemd[1]: Started session-4.scope - Session 4 of User core. 
Jan 28 01:18:13.912829 sshd[2157]: Connection closed by 68.220.241.50 port 44920 Jan 28 01:18:13.913925 sshd-session[2153]: pam_unix(sshd:session): session closed for user core Jan 28 01:18:13.917527 systemd[1]: sshd@2-172.31.31.26:22-68.220.241.50:44920.service: Deactivated successfully. Jan 28 01:18:13.919378 systemd[1]: session-4.scope: Deactivated successfully. Jan 28 01:18:13.921043 systemd-logind[1833]: Session 4 logged out. Waiting for processes to exit. Jan 28 01:18:13.922259 systemd-logind[1833]: Removed session 4. Jan 28 01:18:14.012055 systemd[1]: Started sshd@3-172.31.31.26:22-68.220.241.50:44922.service - OpenSSH per-connection server daemon (68.220.241.50:44922). Jan 28 01:18:14.466787 sshd[2163]: Accepted publickey for core from 68.220.241.50 port 44922 ssh2: RSA SHA256:PpvjS6sxgjOf+voyr4NrS2kTF8aDF7ek5ziSVtOzP6U Jan 28 01:18:14.468102 sshd-session[2163]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:18:14.473533 systemd-logind[1833]: New session 5 of user core. Jan 28 01:18:14.479998 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 28 01:18:14.718258 sshd[2167]: Connection closed by 68.220.241.50 port 44922 Jan 28 01:18:14.719912 sshd-session[2163]: pam_unix(sshd:session): session closed for user core Jan 28 01:18:14.723952 systemd-logind[1833]: Session 5 logged out. Waiting for processes to exit. Jan 28 01:18:14.724148 systemd[1]: sshd@3-172.31.31.26:22-68.220.241.50:44922.service: Deactivated successfully. Jan 28 01:18:14.726125 systemd[1]: session-5.scope: Deactivated successfully. Jan 28 01:18:14.727923 systemd-logind[1833]: Removed session 5. Jan 28 01:18:14.800373 systemd[1]: Started sshd@4-172.31.31.26:22-68.220.241.50:44936.service - OpenSSH per-connection server daemon (68.220.241.50:44936). Jan 28 01:18:15.231116 sshd[2173]: Accepted publickey for core from 68.220.241.50 port 44936 ssh2: RSA SHA256:PpvjS6sxgjOf+voyr4NrS2kTF8aDF7ek5ziSVtOzP6U Jan 28 01:18:15.232626 sshd-session[2173]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:18:15.239097 systemd-logind[1833]: New session 6 of user core. Jan 28 01:18:15.245971 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 28 01:18:15.460333 sudo[2178]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 28 01:18:15.460629 sudo[2178]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 28 01:18:15.473864 sudo[2178]: pam_unix(sudo:session): session closed for user root Jan 28 01:18:15.551465 sshd[2177]: Connection closed by 68.220.241.50 port 44936 Jan 28 01:18:15.552964 sshd-session[2173]: pam_unix(sshd:session): session closed for user core Jan 28 01:18:15.556625 systemd[1]: sshd@4-172.31.31.26:22-68.220.241.50:44936.service: Deactivated successfully. Jan 28 01:18:15.558328 systemd[1]: session-6.scope: Deactivated successfully. Jan 28 01:18:15.560269 systemd-logind[1833]: Session 6 logged out. Waiting for processes to exit. Jan 28 01:18:15.561295 systemd-logind[1833]: Removed session 6. Jan 28 01:18:15.639686 systemd[1]: Started sshd@5-172.31.31.26:22-68.220.241.50:44944.service - OpenSSH per-connection server daemon (68.220.241.50:44944). 
Jan 28 01:18:16.067549 sshd[2185]: Accepted publickey for core from 68.220.241.50 port 44944 ssh2: RSA SHA256:PpvjS6sxgjOf+voyr4NrS2kTF8aDF7ek5ziSVtOzP6U Jan 28 01:18:16.069125 sshd-session[2185]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:18:16.075786 systemd-logind[1833]: New session 7 of user core. Jan 28 01:18:16.082082 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 28 01:18:16.229468 sudo[2191]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 28 01:18:16.229824 sudo[2191]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 28 01:18:16.233270 sudo[2191]: pam_unix(sudo:session): session closed for user root Jan 28 01:18:16.240443 sudo[2190]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 28 01:18:16.240737 sudo[2190]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 28 01:18:16.248846 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 28 01:18:16.296510 kernel: kauditd_printk_skb: 146 callbacks suppressed Jan 28 01:18:16.296610 kernel: audit: type=1305 audit(1769563096.293:243): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jan 28 01:18:16.293000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jan 28 01:18:16.296068 systemd[1]: audit-rules.service: Deactivated successfully. Jan 28 01:18:16.296810 augenrules[2215]: No rules Jan 28 01:18:16.296307 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 28 01:18:16.293000 audit[2215]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fffe55465c0 a2=420 a3=0 items=0 ppid=2196 pid=2215 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:18:16.300099 sudo[2190]: pam_unix(sudo:session): session closed for user root Jan 28 01:18:16.293000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 28 01:18:16.302896 kernel: audit: type=1300 audit(1769563096.293:243): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fffe55465c0 a2=420 a3=0 items=0 ppid=2196 pid=2215 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:18:16.302961 kernel: audit: type=1327 audit(1769563096.293:243): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 28 01:18:16.302985 kernel: audit: type=1130 audit(1769563096.296:244): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:18:16.296000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:18:16.296000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:18:16.308765 kernel: audit: type=1131 audit(1769563096.296:245): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:18:16.308827 kernel: audit: type=1106 audit(1769563096.296:246): pid=2190 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 28 01:18:16.296000 audit[2190]: USER_END pid=2190 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 28 01:18:16.312008 kernel: audit: type=1104 audit(1769563096.296:247): pid=2190 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 28 01:18:16.296000 audit[2190]: CRED_DISP pid=2190 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 28 01:18:16.377137 sshd[2189]: Connection closed by 68.220.241.50 port 44944 Jan 28 01:18:16.378356 sshd-session[2185]: pam_unix(sshd:session): session closed for user core Jan 28 01:18:16.378000 audit[2185]: USER_END pid=2185 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 28 01:18:16.385781 kernel: audit: type=1106 audit(1769563096.378:248): pid=2185 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 28 01:18:16.385936 systemd[1]: sshd@5-172.31.31.26:22-68.220.241.50:44944.service: Deactivated successfully. Jan 28 01:18:16.387610 systemd[1]: session-7.scope: Deactivated successfully. Jan 28 01:18:16.379000 audit[2185]: CRED_DISP pid=2185 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 28 01:18:16.389560 systemd-logind[1833]: Session 7 logged out. Waiting for processes to exit. Jan 28 01:18:16.391622 systemd-logind[1833]: Removed session 7. Jan 28 01:18:16.385000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-172.31.31.26:22-68.220.241.50:44944 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:18:16.393557 kernel: audit: type=1104 audit(1769563096.379:249): pid=2185 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 28 01:18:16.393611 kernel: audit: type=1131 audit(1769563096.385:250): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-172.31.31.26:22-68.220.241.50:44944 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:18:16.476000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-172.31.31.26:22-68.220.241.50:44954 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:18:16.476724 systemd[1]: Started sshd@6-172.31.31.26:22-68.220.241.50:44954.service - OpenSSH per-connection server daemon (68.220.241.50:44954). Jan 28 01:18:16.913000 audit[2224]: USER_ACCT pid=2224 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 28 01:18:16.914065 sshd[2224]: Accepted publickey for core from 68.220.241.50 port 44954 ssh2: RSA SHA256:PpvjS6sxgjOf+voyr4NrS2kTF8aDF7ek5ziSVtOzP6U Jan 28 01:18:16.914000 audit[2224]: CRED_ACQ pid=2224 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 28 01:18:16.914000 audit[2224]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff054699c0 a2=3 a3=0 items=0 ppid=1 pid=2224 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:18:16.914000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:18:16.915534 sshd-session[2224]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:18:16.920599 systemd-logind[1833]: New session 8 of user core. Jan 28 01:18:16.934992 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 28 01:18:16.937000 audit[2224]: USER_START pid=2224 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 28 01:18:16.939000 audit[2228]: CRED_ACQ pid=2228 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 28 01:18:17.075000 audit[2229]: USER_ACCT pid=2229 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jan 28 01:18:17.076429 sudo[2229]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 28 01:18:17.076720 sudo[2229]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 28 01:18:17.075000 audit[2229]: CRED_REFR pid=2229 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 28 01:18:17.075000 audit[2229]: USER_START pid=2229 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 28 01:18:18.028558 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 28 01:18:18.030073 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 01:18:18.376775 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 28 01:18:18.387438 (dockerd)[2253]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 28 01:18:18.391423 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 01:18:18.390000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:18:18.396978 (kubelet)[2257]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 28 01:18:18.444885 kubelet[2257]: E0128 01:18:18.444808 2257 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 28 01:18:18.448942 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 28 01:18:18.449082 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 28 01:18:18.448000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 28 01:18:18.449442 systemd[1]: kubelet.service: Consumed 180ms CPU time, 108.3M memory peak. Jan 28 01:18:19.619671 dockerd[2253]: time="2026-01-28T01:18:19.619614350Z" level=info msg="Starting up" Jan 28 01:18:19.622731 dockerd[2253]: time="2026-01-28T01:18:19.622701124Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jan 28 01:18:19.634094 dockerd[2253]: time="2026-01-28T01:18:19.634045719Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Jan 28 01:18:19.727062 dockerd[2253]: time="2026-01-28T01:18:19.726995204Z" level=info msg="Loading containers: start." 
Jan 28 01:18:19.744778 kernel: Initializing XFRM netlink socket Jan 28 01:18:19.809000 audit[2312]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=2312 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:18:19.809000 audit[2312]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffe0afcea50 a2=0 a3=0 items=0 ppid=2253 pid=2312 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:18:19.809000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 28 01:18:19.812000 audit[2314]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=2314 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:18:19.812000 audit[2314]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffc08de14a0 a2=0 a3=0 items=0 ppid=2253 pid=2314 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:18:19.812000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 28 01:18:19.815000 audit[2316]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=2316 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:18:19.815000 audit[2316]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe4f4d6140 a2=0 a3=0 items=0 ppid=2253 pid=2316 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:18:19.815000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 28 01:18:19.817000 audit[2318]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=2318 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:18:19.817000 audit[2318]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff0b50c9c0 a2=0 a3=0 items=0 ppid=2253 pid=2318 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:18:19.817000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 28 01:18:19.819000 audit[2320]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_chain pid=2320 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:18:19.819000 audit[2320]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffd394d8210 a2=0 a3=0 items=0 ppid=2253 pid=2320 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:18:19.819000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 28 01:18:19.822000 audit[2322]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_chain pid=2322 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:18:19.822000 audit[2322]: SYSCALL arch=c000003e syscall=46 
success=yes exit=112 a0=3 a1=7ffda5aa9890 a2=0 a3=0 items=0 ppid=2253 pid=2322 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:18:19.822000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 28 01:18:19.824000 audit[2324]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=2324 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:18:19.824000 audit[2324]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffddd6e35c0 a2=0 a3=0 items=0 ppid=2253 pid=2324 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:18:19.824000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 28 01:18:19.827000 audit[2326]: NETFILTER_CFG table=nat:9 family=2 entries=2 op=nft_register_chain pid=2326 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:18:19.827000 audit[2326]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7ffd844b12c0 a2=0 a3=0 items=0 ppid=2253 pid=2326 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:18:19.827000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 28 01:18:19.858000 audit[2329]: NETFILTER_CFG table=nat:10 family=2 entries=2 op=nft_register_chain pid=2329 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:18:19.858000 audit[2329]: SYSCALL arch=c000003e syscall=46 success=yes exit=472 a0=3 a1=7ffd6143d2f0 a2=0 a3=0 items=0 ppid=2253 pid=2329 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:18:19.858000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Jan 28 01:18:19.863000 audit[2331]: NETFILTER_CFG table=filter:11 family=2 entries=2 op=nft_register_chain pid=2331 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:18:19.863000 audit[2331]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffcddfc1030 a2=0 a3=0 items=0 ppid=2253 pid=2331 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:18:19.863000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 28 01:18:19.866000 audit[2333]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=2333 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:18:19.866000 audit[2333]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7fffc4775600 a2=0 
a3=0 items=0 ppid=2253 pid=2333 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:18:19.866000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 28 01:18:19.869000 audit[2335]: NETFILTER_CFG table=filter:13 family=2 entries=1 op=nft_register_rule pid=2335 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:18:19.869000 audit[2335]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7ffe962e9400 a2=0 a3=0 items=0 ppid=2253 pid=2335 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:18:19.869000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 28 01:18:19.873000 audit[2337]: NETFILTER_CFG table=filter:14 family=2 entries=1 op=nft_register_rule pid=2337 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:18:19.873000 audit[2337]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7ffdb7903580 a2=0 a3=0 items=0 ppid=2253 pid=2337 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:18:19.873000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 28 01:18:19.917000 audit[2367]: NETFILTER_CFG table=nat:15 family=10 entries=2 op=nft_register_chain pid=2367 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:18:19.917000 audit[2367]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffc156f16d0 a2=0 a3=0 items=0 ppid=2253 pid=2367 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:18:19.917000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 28 01:18:19.920000 audit[2369]: NETFILTER_CFG table=filter:16 family=10 entries=2 op=nft_register_chain pid=2369 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:18:19.920000 audit[2369]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffe658dd7a0 a2=0 a3=0 items=0 ppid=2253 pid=2369 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:18:19.920000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 28 01:18:19.923000 audit[2371]: NETFILTER_CFG table=filter:17 family=10 entries=1 op=nft_register_chain pid=2371 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:18:19.923000 audit[2371]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffda1b33540 a2=0 a3=0 items=0 ppid=2253 pid=2371 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 28 01:18:19.923000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 28 01:18:19.925000 audit[2373]: NETFILTER_CFG table=filter:18 family=10 entries=1 op=nft_register_chain pid=2373 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:18:19.925000 audit[2373]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff0bec66b0 a2=0 a3=0 items=0 ppid=2253 pid=2373 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:18:19.925000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 28 01:18:19.928000 audit[2375]: NETFILTER_CFG table=filter:19 family=10 entries=1 op=nft_register_chain pid=2375 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:18:19.928000 audit[2375]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffcecaca7f0 a2=0 a3=0 items=0 ppid=2253 pid=2375 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:18:19.928000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 28 01:18:19.930000 audit[2377]: NETFILTER_CFG table=filter:20 family=10 entries=1 op=nft_register_chain pid=2377 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:18:19.930000 audit[2377]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffe510802c0 a2=0 a3=0 items=0 ppid=2253 pid=2377 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:18:19.930000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 28 01:18:19.932000 audit[2379]: NETFILTER_CFG table=filter:21 family=10 entries=1 op=nft_register_chain pid=2379 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:18:19.932000 audit[2379]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7fff1f67e760 a2=0 a3=0 items=0 ppid=2253 pid=2379 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:18:19.932000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 28 01:18:19.935000 audit[2381]: NETFILTER_CFG table=nat:22 family=10 entries=2 op=nft_register_chain pid=2381 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:18:19.935000 audit[2381]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7ffd6af81910 a2=0 a3=0 items=0 ppid=2253 pid=2381 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:18:19.935000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 28 01:18:19.937000 audit[2383]: NETFILTER_CFG table=nat:23 family=10 entries=2 op=nft_register_chain pid=2383 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:18:19.937000 audit[2383]: SYSCALL arch=c000003e syscall=46 success=yes exit=484 a0=3 a1=7ffe2792e780 a2=0 a3=0 items=0 ppid=2253 pid=2383 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:18:19.937000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003A3A312F313238 Jan 28 01:18:19.940000 audit[2385]: NETFILTER_CFG table=filter:24 family=10 entries=2 op=nft_register_chain pid=2385 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:18:19.940000 audit[2385]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffe99c8b260 a2=0 a3=0 items=0 ppid=2253 pid=2385 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:18:19.940000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 28 01:18:19.942000 audit[2387]: NETFILTER_CFG table=filter:25 family=10 entries=1 op=nft_register_rule pid=2387 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:18:19.942000 audit[2387]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7fff6b31d3d0 a2=0 a3=0 items=0 ppid=2253 pid=2387 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:18:19.942000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 28 01:18:19.945000 audit[2389]: NETFILTER_CFG table=filter:26 family=10 entries=1 op=nft_register_rule pid=2389 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:18:19.945000 audit[2389]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7ffd418d0160 a2=0 a3=0 items=0 ppid=2253 pid=2389 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:18:19.945000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 28 01:18:19.947000 audit[2391]: NETFILTER_CFG table=filter:27 family=10 entries=1 op=nft_register_rule pid=2391 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:18:19.947000 audit[2391]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7ffe61ac9d80 a2=0 a3=0 items=0 ppid=2253 pid=2391 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:18:19.947000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 28 01:18:19.954000 audit[2396]: NETFILTER_CFG table=filter:28 family=2 entries=1 op=nft_register_chain pid=2396 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:18:19.954000 audit[2396]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fffe5c43570 a2=0 a3=0 items=0 ppid=2253 pid=2396 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:18:19.954000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 28 01:18:19.958000 audit[2398]: NETFILTER_CFG table=filter:29 family=2 entries=1 op=nft_register_rule pid=2398 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:18:19.958000 audit[2398]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffcfde3fed0 a2=0 a3=0 items=0 ppid=2253 pid=2398 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:18:19.958000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 28 01:18:19.960000 audit[2400]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=2400 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:18:19.960000 audit[2400]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffc22e1d280 a2=0 a3=0 items=0 ppid=2253 pid=2400 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:18:19.960000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 28 01:18:19.963000 audit[2402]: NETFILTER_CFG table=filter:31 family=10 entries=1 op=nft_register_chain pid=2402 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:18:19.963000 audit[2402]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffd444952d0 a2=0 a3=0 items=0 ppid=2253 pid=2402 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:18:19.963000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 28 01:18:19.965000 audit[2404]: NETFILTER_CFG table=filter:32 family=10 entries=1 op=nft_register_rule pid=2404 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:18:19.965000 audit[2404]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffd05147ba0 a2=0 a3=0 items=0 ppid=2253 pid=2404 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:18:19.965000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 28 01:18:19.968000 audit[2406]: NETFILTER_CFG table=filter:33 family=10 entries=1 op=nft_register_rule pid=2406 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:18:19.968000 audit[2406]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffdfcf3c700 a2=0 a3=0 items=0 ppid=2253 pid=2406 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:18:19.968000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 28 01:18:19.985850 (udev-worker)[2285]: Network interface NamePolicy= disabled on kernel command line. Jan 28 01:18:19.998000 audit[2411]: NETFILTER_CFG table=nat:34 family=2 entries=2 op=nft_register_chain pid=2411 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:18:19.998000 audit[2411]: SYSCALL arch=c000003e syscall=46 success=yes exit=520 a0=3 a1=7ffc38a37a50 a2=0 a3=0 items=0 ppid=2253 pid=2411 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:18:19.998000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Jan 28 01:18:20.002000 audit[2413]: NETFILTER_CFG table=nat:35 family=2 entries=1 op=nft_register_rule pid=2413 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:18:20.002000 audit[2413]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7ffe188251d0 a2=0 a3=0 items=0 ppid=2253 pid=2413 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:18:20.002000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Jan 28 01:18:20.013000 audit[2421]: NETFILTER_CFG table=filter:36 family=2 entries=1 op=nft_register_rule pid=2421 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:18:20.013000 audit[2421]: SYSCALL arch=c000003e syscall=46 success=yes exit=300 a0=3 a1=7fffc93b8080 a2=0 a3=0 items=0 ppid=2253 pid=2421 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:18:20.013000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D464F5257415244002D6900646F636B657230002D6A00414343455054 Jan 28 01:18:20.025000 audit[2427]: NETFILTER_CFG table=filter:37 family=2 entries=1 op=nft_register_rule pid=2427 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:18:20.025000 audit[2427]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffc870f6070 a2=0 a3=0 items=0 ppid=2253 pid=2427 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:18:20.025000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45520000002D6900646F636B657230002D6F00646F636B657230002D6A0044524F50 Jan 28 01:18:20.028000 audit[2429]: NETFILTER_CFG table=filter:38 
family=2 entries=1 op=nft_register_rule pid=2429 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:18:20.028000 audit[2429]: SYSCALL arch=c000003e syscall=46 success=yes exit=512 a0=3 a1=7ffe9a206430 a2=0 a3=0 items=0 ppid=2253 pid=2429 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:18:20.028000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D4354002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Jan 28 01:18:20.031000 audit[2431]: NETFILTER_CFG table=filter:39 family=2 entries=1 op=nft_register_rule pid=2431 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:18:20.031000 audit[2431]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffc600a8310 a2=0 a3=0 items=0 ppid=2253 pid=2431 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:18:20.031000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D425249444745002D6F00646F636B657230002D6A00444F434B4552 Jan 28 01:18:20.034000 audit[2433]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_rule pid=2433 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:18:20.034000 audit[2433]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7fff1e1f64d0 a2=0 a3=0 items=0 ppid=2253 pid=2433 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:18:20.034000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 28 01:18:20.037000 audit[2435]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_rule pid=2435 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:18:20.037000 audit[2435]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffc6dc08900 a2=0 a3=0 items=0 ppid=2253 pid=2435 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:18:20.037000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Jan 28 01:18:20.038847 systemd-networkd[1458]: docker0: Link UP Jan 28 01:18:20.049115 dockerd[2253]: time="2026-01-28T01:18:20.049056181Z" level=info msg="Loading containers: done." 
Jan 28 01:18:20.094040 dockerd[2253]: time="2026-01-28T01:18:20.093979126Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 28 01:18:20.094205 dockerd[2253]: time="2026-01-28T01:18:20.094073664Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jan 28 01:18:20.094205 dockerd[2253]: time="2026-01-28T01:18:20.094155079Z" level=info msg="Initializing buildkit" Jan 28 01:18:20.133428 dockerd[2253]: time="2026-01-28T01:18:20.133182066Z" level=info msg="Completed buildkit initialization" Jan 28 01:18:20.140502 dockerd[2253]: time="2026-01-28T01:18:20.140458796Z" level=info msg="Daemon has completed initialization" Jan 28 01:18:20.140631 dockerd[2253]: time="2026-01-28T01:18:20.140517225Z" level=info msg="API listen on /run/docker.sock" Jan 28 01:18:20.140000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:18:20.140940 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 28 01:18:21.119183 containerd[1849]: time="2026-01-28T01:18:21.119142066Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\"" Jan 28 01:18:21.753227 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2305348247.mount: Deactivated successfully. Jan 28 01:18:23.241624 containerd[1849]: time="2026-01-28T01:18:23.241570470Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:18:23.242651 containerd[1849]: time="2026-01-28T01:18:23.242551183Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.7: active requests=0, bytes read=28448874" Jan 28 01:18:23.243508 containerd[1849]: time="2026-01-28T01:18:23.243475762Z" level=info msg="ImageCreate event name:\"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:18:23.246569 containerd[1849]: time="2026-01-28T01:18:23.246005773Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:18:23.247229 containerd[1849]: time="2026-01-28T01:18:23.247194519Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.7\" with image id \"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\", size \"30111311\" in 2.128013542s" Jan 28 01:18:23.247310 containerd[1849]: time="2026-01-28T01:18:23.247239662Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\" returns image reference \"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\"" Jan 28 01:18:23.248120 containerd[1849]: time="2026-01-28T01:18:23.248095437Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\"" Jan 28 01:18:24.829573 containerd[1849]: time="2026-01-28T01:18:24.829466590Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/kube-controller-manager:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:18:24.832226 containerd[1849]: time="2026-01-28T01:18:24.831991569Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.7: active requests=0, bytes read=26008626" Jan 28 01:18:24.834100 containerd[1849]: time="2026-01-28T01:18:24.834052589Z" level=info msg="ImageCreate event name:\"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:18:24.837440 containerd[1849]: time="2026-01-28T01:18:24.837407155Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:18:24.838229 containerd[1849]: time="2026-01-28T01:18:24.838198452Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.7\" with image id \"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\", size \"27673815\" in 1.590070968s" Jan 28 01:18:24.838229 containerd[1849]: time="2026-01-28T01:18:24.838227795Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\" returns image reference \"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\"" Jan 28 01:18:24.839148 containerd[1849]: time="2026-01-28T01:18:24.839127045Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\"" Jan 28 01:18:26.344781 containerd[1849]: time="2026-01-28T01:18:26.344719667Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:18:26.351679 containerd[1849]: time="2026-01-28T01:18:26.351045397Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.7: active requests=0, bytes read=20149965" Jan 28 01:18:26.353404 containerd[1849]: time="2026-01-28T01:18:26.353366213Z" level=info msg="ImageCreate event name:\"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:18:26.358633 containerd[1849]: time="2026-01-28T01:18:26.358595121Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:18:26.359941 containerd[1849]: time="2026-01-28T01:18:26.359904709Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.7\" with image id \"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\", size \"21815154\" in 1.520745701s" Jan 28 01:18:26.360081 containerd[1849]: time="2026-01-28T01:18:26.360060130Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\" returns image reference \"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\"" Jan 28 01:18:26.360758 containerd[1849]: time="2026-01-28T01:18:26.360714343Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\"" Jan 
28 01:18:27.546645 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2747738768.mount: Deactivated successfully. Jan 28 01:18:28.168971 containerd[1849]: time="2026-01-28T01:18:28.168917024Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:18:28.170852 containerd[1849]: time="2026-01-28T01:18:28.170805654Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.7: active requests=0, bytes read=11585785" Jan 28 01:18:28.173242 containerd[1849]: time="2026-01-28T01:18:28.173194553Z" level=info msg="ImageCreate event name:\"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:18:28.176144 containerd[1849]: time="2026-01-28T01:18:28.176111187Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:18:28.176632 containerd[1849]: time="2026-01-28T01:18:28.176484741Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.7\" with image id \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\", repo tag \"registry.k8s.io/kube-proxy:v1.33.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\", size \"31929115\" in 1.815734615s" Jan 28 01:18:28.176632 containerd[1849]: time="2026-01-28T01:18:28.176516413Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\" returns image reference \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\"" Jan 28 01:18:28.177038 containerd[1849]: time="2026-01-28T01:18:28.177004004Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Jan 28 01:18:28.678551 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 28 01:18:28.680900 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 01:18:28.752595 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4237265335.mount: Deactivated successfully. Jan 28 01:18:28.983027 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 01:18:28.983000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:18:28.985621 kernel: kauditd_printk_skb: 134 callbacks suppressed Jan 28 01:18:28.985722 kernel: audit: type=1130 audit(1769563108.983:303): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:18:29.003886 (kubelet)[2568]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 28 01:18:29.129435 kubelet[2568]: E0128 01:18:29.129335 2568 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 28 01:18:29.133010 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 28 01:18:29.133206 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 28 01:18:29.138787 kernel: audit: type=1131 audit(1769563109.133:304): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 28 01:18:29.133000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 28 01:18:29.133889 systemd[1]: kubelet.service: Consumed 210ms CPU time, 107.4M memory peak. Jan 28 01:18:29.998967 containerd[1849]: time="2026-01-28T01:18:29.998921853Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:18:30.000896 containerd[1849]: time="2026-01-28T01:18:30.000858157Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20257574" Jan 28 01:18:30.003491 containerd[1849]: time="2026-01-28T01:18:30.003336209Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:18:30.007294 containerd[1849]: time="2026-01-28T01:18:30.007226210Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:18:30.008287 containerd[1849]: time="2026-01-28T01:18:30.008147220Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.831016602s" Jan 28 01:18:30.008287 containerd[1849]: time="2026-01-28T01:18:30.008188043Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Jan 28 01:18:30.008918 containerd[1849]: time="2026-01-28T01:18:30.008893886Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 28 01:18:30.480848 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3569360284.mount: Deactivated successfully. 
Jan 28 01:18:30.494284 containerd[1849]: time="2026-01-28T01:18:30.494229456Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 28 01:18:30.496407 containerd[1849]: time="2026-01-28T01:18:30.496191880Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Jan 28 01:18:30.498537 containerd[1849]: time="2026-01-28T01:18:30.498504907Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 28 01:18:30.502577 containerd[1849]: time="2026-01-28T01:18:30.502524362Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 28 01:18:30.503122 containerd[1849]: time="2026-01-28T01:18:30.502988171Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 494.064018ms" Jan 28 01:18:30.503122 containerd[1849]: time="2026-01-28T01:18:30.503021672Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 28 01:18:30.503543 containerd[1849]: time="2026-01-28T01:18:30.503396926Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Jan 28 01:18:31.067240 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3850694269.mount: Deactivated successfully. 
Jan 28 01:18:33.509357 containerd[1849]: time="2026-01-28T01:18:33.509298824Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:18:33.510650 containerd[1849]: time="2026-01-28T01:18:33.510572767Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=56977083" Jan 28 01:18:33.511596 containerd[1849]: time="2026-01-28T01:18:33.511561660Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:18:33.514469 containerd[1849]: time="2026-01-28T01:18:33.514414121Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:18:33.515687 containerd[1849]: time="2026-01-28T01:18:33.515533368Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 3.012103278s" Jan 28 01:18:33.515687 containerd[1849]: time="2026-01-28T01:18:33.515570450Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Jan 28 01:18:33.707506 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Jan 28 01:18:33.707000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hostnamed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:18:33.712774 kernel: audit: type=1131 audit(1769563113.707:305): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hostnamed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:18:33.737000 audit: BPF prog-id=66 op=UNLOAD Jan 28 01:18:33.739783 kernel: audit: type=1334 audit(1769563113.737:306): prog-id=66 op=UNLOAD Jan 28 01:18:36.917191 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 01:18:36.917000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:18:36.917981 systemd[1]: kubelet.service: Consumed 210ms CPU time, 107.4M memory peak. Jan 28 01:18:36.927001 kernel: audit: type=1130 audit(1769563116.917:307): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:18:36.927092 kernel: audit: type=1131 audit(1769563116.917:308): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:18:36.917000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:18:36.928381 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 01:18:36.966778 systemd[1]: Reload requested from client PID 2706 ('systemctl') (unit session-8.scope)... Jan 28 01:18:36.966796 systemd[1]: Reloading... Jan 28 01:18:37.112810 zram_generator::config[2756]: No configuration found. Jan 28 01:18:37.406812 systemd[1]: Reloading finished in 439 ms. Jan 28 01:18:37.443898 kernel: audit: type=1334 audit(1769563117.437:309): prog-id=70 op=LOAD Jan 28 01:18:37.443997 kernel: audit: type=1334 audit(1769563117.437:310): prog-id=50 op=UNLOAD Jan 28 01:18:37.444022 kernel: audit: type=1334 audit(1769563117.437:311): prog-id=71 op=LOAD Jan 28 01:18:37.437000 audit: BPF prog-id=70 op=LOAD Jan 28 01:18:37.437000 audit: BPF prog-id=50 op=UNLOAD Jan 28 01:18:37.437000 audit: BPF prog-id=71 op=LOAD Jan 28 01:18:37.437000 audit: BPF prog-id=72 op=LOAD Jan 28 01:18:37.448926 kernel: audit: type=1334 audit(1769563117.437:312): prog-id=72 op=LOAD Jan 28 01:18:37.449004 kernel: audit: type=1334 audit(1769563117.437:313): prog-id=51 op=UNLOAD Jan 28 01:18:37.437000 audit: BPF prog-id=51 op=UNLOAD Jan 28 01:18:37.450921 kernel: audit: type=1334 audit(1769563117.437:314): prog-id=52 op=UNLOAD Jan 28 01:18:37.437000 audit: BPF prog-id=52 op=UNLOAD Jan 28 01:18:37.452860 kernel: audit: type=1334 audit(1769563117.438:315): prog-id=73 op=LOAD Jan 28 01:18:37.438000 audit: BPF prog-id=73 op=LOAD Jan 28 01:18:37.454916 kernel: audit: type=1334 audit(1769563117.438:316): prog-id=53 op=UNLOAD Jan 28 01:18:37.438000 audit: BPF prog-id=53 op=UNLOAD Jan 28 01:18:37.438000 audit: BPF prog-id=74 op=LOAD Jan 28 01:18:37.438000 audit: BPF prog-id=75 op=LOAD Jan 28 01:18:37.438000 audit: BPF prog-id=58 op=UNLOAD Jan 28 01:18:37.438000 audit: BPF prog-id=59 op=UNLOAD Jan 28 01:18:37.439000 audit: BPF prog-id=76 op=LOAD Jan 28 01:18:37.439000 audit: BPF prog-id=69 op=UNLOAD Jan 28 01:18:37.441000 audit: BPF prog-id=77 op=LOAD Jan 28 01:18:37.441000 audit: BPF prog-id=60 op=UNLOAD Jan 28 01:18:37.441000 audit: BPF prog-id=78 op=LOAD Jan 28 01:18:37.441000 audit: BPF prog-id=79 op=LOAD Jan 28 01:18:37.441000 audit: BPF prog-id=61 op=UNLOAD Jan 28 01:18:37.441000 audit: BPF prog-id=62 op=UNLOAD Jan 28 01:18:37.443000 audit: BPF prog-id=80 op=LOAD Jan 28 01:18:37.443000 audit: BPF prog-id=63 op=UNLOAD Jan 28 01:18:37.443000 audit: BPF prog-id=81 op=LOAD Jan 28 01:18:37.443000 audit: BPF prog-id=82 op=LOAD Jan 28 01:18:37.443000 audit: BPF prog-id=64 op=UNLOAD Jan 28 01:18:37.443000 audit: BPF prog-id=65 op=UNLOAD Jan 28 01:18:37.444000 audit: BPF prog-id=83 op=LOAD Jan 28 01:18:37.444000 audit: BPF prog-id=47 op=UNLOAD Jan 28 01:18:37.444000 audit: BPF prog-id=84 op=LOAD Jan 28 01:18:37.444000 audit: BPF prog-id=85 op=LOAD Jan 28 01:18:37.444000 audit: BPF prog-id=48 op=UNLOAD Jan 28 01:18:37.444000 audit: BPF prog-id=49 op=UNLOAD Jan 28 01:18:37.445000 audit: BPF prog-id=86 op=LOAD Jan 28 01:18:37.445000 audit: BPF prog-id=54 op=UNLOAD Jan 28 01:18:37.445000 audit: BPF prog-id=87 op=LOAD Jan 28 01:18:37.445000 audit: BPF prog-id=88 op=LOAD Jan 28 01:18:37.445000 audit: BPF prog-id=55 op=UNLOAD Jan 28 01:18:37.445000 audit: BPF prog-id=56 op=UNLOAD Jan 28 01:18:37.445000 audit: BPF prog-id=89 op=LOAD Jan 28 01:18:37.445000 audit: BPF prog-id=57 op=UNLOAD Jan 28 01:18:37.457282 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 28 01:18:37.457446 systemd[1]: kubelet.service: Failed with result 'signal'. 
Jan 28 01:18:37.457859 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 01:18:37.457912 systemd[1]: kubelet.service: Consumed 138ms CPU time, 98.1M memory peak. Jan 28 01:18:37.457000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 28 01:18:37.459647 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 01:18:37.681467 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 01:18:37.682000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:18:37.692075 (kubelet)[2814]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 28 01:18:37.734865 kubelet[2814]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 28 01:18:37.734865 kubelet[2814]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 28 01:18:37.734865 kubelet[2814]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 28 01:18:37.738121 kubelet[2814]: I0128 01:18:37.738052 2814 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 28 01:18:38.228900 kubelet[2814]: I0128 01:18:38.228848 2814 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jan 28 01:18:38.228900 kubelet[2814]: I0128 01:18:38.228877 2814 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 28 01:18:38.229103 kubelet[2814]: I0128 01:18:38.229084 2814 server.go:956] "Client rotation is on, will bootstrap in background" Jan 28 01:18:38.271852 kubelet[2814]: I0128 01:18:38.271796 2814 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 28 01:18:38.275460 kubelet[2814]: E0128 01:18:38.275379 2814 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.31.26:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.31.26:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 28 01:18:38.313771 kubelet[2814]: I0128 01:18:38.313710 2814 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 28 01:18:38.319912 kubelet[2814]: I0128 01:18:38.319868 2814 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 28 01:18:38.327113 kubelet[2814]: I0128 01:18:38.327050 2814 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 28 01:18:38.330848 kubelet[2814]: I0128 01:18:38.327103 2814 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-31-26","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 28 01:18:38.330848 kubelet[2814]: I0128 01:18:38.330850 2814 topology_manager.go:138] "Creating topology manager with none policy" Jan 28 01:18:38.331065 kubelet[2814]: I0128 01:18:38.330865 2814 container_manager_linux.go:303] "Creating device plugin manager" Jan 28 01:18:38.332002 kubelet[2814]: I0128 01:18:38.331978 2814 state_mem.go:36] "Initialized new in-memory state store" Jan 28 01:18:38.335338 kubelet[2814]: I0128 01:18:38.335301 2814 kubelet.go:480] "Attempting to sync node with API server" Jan 28 01:18:38.335338 kubelet[2814]: I0128 01:18:38.335336 2814 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 28 01:18:38.335458 kubelet[2814]: I0128 01:18:38.335368 2814 kubelet.go:386] "Adding apiserver pod source" Jan 28 01:18:38.335458 kubelet[2814]: I0128 01:18:38.335382 2814 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 28 01:18:38.347824 kubelet[2814]: I0128 01:18:38.347717 2814 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 28 01:18:38.350664 kubelet[2814]: I0128 01:18:38.350624 2814 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 28 01:18:38.354865 kubelet[2814]: W0128 01:18:38.354834 2814 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Jan 28 01:18:38.359202 kubelet[2814]: E0128 01:18:38.359162 2814 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.31.26:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.31.26:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 28 01:18:38.359313 kubelet[2814]: E0128 01:18:38.359270 2814 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.31.26:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-31-26&limit=500&resourceVersion=0\": dial tcp 172.31.31.26:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 28 01:18:38.362864 kubelet[2814]: I0128 01:18:38.362828 2814 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 28 01:18:38.362864 kubelet[2814]: I0128 01:18:38.362895 2814 server.go:1289] "Started kubelet" Jan 28 01:18:38.365602 kubelet[2814]: I0128 01:18:38.365017 2814 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 28 01:18:38.367795 kubelet[2814]: I0128 01:18:38.367768 2814 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 28 01:18:38.381888 kubelet[2814]: I0128 01:18:38.380968 2814 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 28 01:18:38.382908 kubelet[2814]: I0128 01:18:38.367951 2814 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 28 01:18:38.382908 kubelet[2814]: E0128 01:18:38.368326 2814 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-31-26\" not found" Jan 28 01:18:38.382908 kubelet[2814]: I0128 01:18:38.368666 2814 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 28 01:18:38.383135 kubelet[2814]: I0128 01:18:38.383111 2814 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 28 01:18:38.383175 kubelet[2814]: E0128 01:18:38.372359 2814 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.31.26:6443/api/v1/namespaces/default/events\": dial tcp 172.31.31.26:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-31-26.188ec047d5a684db default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-31-26,UID:ip-172-31-31-26,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-31-26,},FirstTimestamp:2026-01-28 01:18:38.362854619 +0000 UTC m=+0.666802309,LastTimestamp:2026-01-28 01:18:38.362854619 +0000 UTC m=+0.666802309,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-31-26,}" Jan 28 01:18:38.388613 kubelet[2814]: I0128 01:18:38.387872 2814 server.go:317] "Adding debug handlers to kubelet server" Jan 28 01:18:38.391300 kubelet[2814]: I0128 01:18:38.367938 2814 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 28 01:18:38.391507 kubelet[2814]: I0128 01:18:38.391497 2814 reconciler.go:26] "Reconciler: start to sync state" Jan 28 01:18:38.392403 kubelet[2814]: I0128 01:18:38.392354 2814 factory.go:223] Registration of the 
systemd container factory successfully Jan 28 01:18:38.392586 kubelet[2814]: I0128 01:18:38.392568 2814 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 28 01:18:38.392000 audit[2830]: NETFILTER_CFG table=mangle:42 family=2 entries=2 op=nft_register_chain pid=2830 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:18:38.392000 audit[2830]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7fff6749c8b0 a2=0 a3=0 items=0 ppid=2814 pid=2830 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:18:38.392000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 28 01:18:38.393000 audit[2831]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_chain pid=2831 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:18:38.393000 audit[2831]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd40b071b0 a2=0 a3=0 items=0 ppid=2814 pid=2831 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:18:38.393000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 28 01:18:38.395801 kubelet[2814]: E0128 01:18:38.395775 2814 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.31.26:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.31.26:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 28 01:18:38.395000 audit[2833]: NETFILTER_CFG table=filter:44 family=2 entries=2 op=nft_register_chain pid=2833 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:18:38.395000 audit[2833]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffc13934ac0 a2=0 a3=0 items=0 ppid=2814 pid=2833 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:18:38.395000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 28 01:18:38.397103 kubelet[2814]: E0128 01:18:38.396836 2814 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.31.26:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-31-26?timeout=10s\": dial tcp 172.31.31.26:6443: connect: connection refused" interval="200ms" Jan 28 01:18:38.398706 kubelet[2814]: I0128 01:18:38.398688 2814 factory.go:223] Registration of the containerd container factory successfully Jan 28 01:18:38.398000 audit[2835]: NETFILTER_CFG table=filter:45 family=2 entries=2 op=nft_register_chain pid=2835 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:18:38.398000 audit[2835]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffc5552f230 a2=0 a3=0 items=0 ppid=2814 pid=2835 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:18:38.398000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 28 01:18:38.415000 audit[2839]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2839 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:18:38.415000 audit[2839]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7fff3e9ded00 a2=0 a3=0 items=0 ppid=2814 pid=2839 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:18:38.415000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Jan 28 01:18:38.416602 kubelet[2814]: I0128 01:18:38.416555 2814 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jan 28 01:18:38.417000 audit[2840]: NETFILTER_CFG table=mangle:47 family=10 entries=2 op=nft_register_chain pid=2840 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:18:38.417000 audit[2840]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffd625b41b0 a2=0 a3=0 items=0 ppid=2814 pid=2840 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:18:38.417000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 28 01:18:38.418630 kubelet[2814]: I0128 01:18:38.418598 2814 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jan 28 01:18:38.418630 kubelet[2814]: I0128 01:18:38.418622 2814 status_manager.go:230] "Starting to sync pod status with apiserver" Jan 28 01:18:38.418708 kubelet[2814]: I0128 01:18:38.418668 2814 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jan 28 01:18:38.418708 kubelet[2814]: I0128 01:18:38.418675 2814 kubelet.go:2436] "Starting kubelet main sync loop" Jan 28 01:18:38.418766 kubelet[2814]: E0128 01:18:38.418715 2814 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 28 01:18:38.419000 audit[2841]: NETFILTER_CFG table=mangle:48 family=2 entries=1 op=nft_register_chain pid=2841 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:18:38.419000 audit[2841]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd704ecd90 a2=0 a3=0 items=0 ppid=2814 pid=2841 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:18:38.419000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jan 28 01:18:38.420000 audit[2842]: NETFILTER_CFG table=nat:49 family=2 entries=1 op=nft_register_chain pid=2842 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:18:38.420000 audit[2842]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe88a17e50 a2=0 a3=0 items=0 ppid=2814 pid=2842 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:18:38.420000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 28 01:18:38.422000 audit[2843]: NETFILTER_CFG table=filter:50 family=2 entries=1 op=nft_register_chain pid=2843 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:18:38.422000 audit[2843]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffcc93d2160 a2=0 a3=0 items=0 ppid=2814 pid=2843 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:18:38.422000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 28 01:18:38.423000 audit[2845]: NETFILTER_CFG table=mangle:51 family=10 entries=1 op=nft_register_chain pid=2845 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:18:38.423000 audit[2845]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd138f47c0 a2=0 a3=0 items=0 ppid=2814 pid=2845 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:18:38.423000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jan 28 01:18:38.424000 audit[2846]: NETFILTER_CFG table=nat:52 family=10 entries=1 op=nft_register_chain pid=2846 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:18:38.424000 audit[2846]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffce3a4db50 a2=0 a3=0 items=0 ppid=2814 pid=2846 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 
01:18:38.424000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 28 01:18:38.426038 kubelet[2814]: E0128 01:18:38.426019 2814 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.31.26:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.31.26:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 28 01:18:38.425000 audit[2847]: NETFILTER_CFG table=filter:53 family=10 entries=1 op=nft_register_chain pid=2847 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:18:38.425000 audit[2847]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffca30e8d10 a2=0 a3=0 items=0 ppid=2814 pid=2847 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:18:38.425000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 28 01:18:38.426890 kubelet[2814]: E0128 01:18:38.426875 2814 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 28 01:18:38.432826 kubelet[2814]: I0128 01:18:38.432809 2814 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 28 01:18:38.432952 kubelet[2814]: I0128 01:18:38.432943 2814 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 28 01:18:38.433005 kubelet[2814]: I0128 01:18:38.432999 2814 state_mem.go:36] "Initialized new in-memory state store" Jan 28 01:18:38.447970 kubelet[2814]: I0128 01:18:38.447941 2814 policy_none.go:49] "None policy: Start" Jan 28 01:18:38.448123 kubelet[2814]: I0128 01:18:38.448114 2814 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 28 01:18:38.448217 kubelet[2814]: I0128 01:18:38.448209 2814 state_mem.go:35] "Initializing new in-memory state store" Jan 28 01:18:38.458166 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 28 01:18:38.468221 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 28 01:18:38.471692 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 28 01:18:38.478800 kubelet[2814]: E0128 01:18:38.478659 2814 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 28 01:18:38.480870 kubelet[2814]: I0128 01:18:38.480711 2814 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 28 01:18:38.481979 kubelet[2814]: I0128 01:18:38.481581 2814 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 28 01:18:38.481979 kubelet[2814]: I0128 01:18:38.481897 2814 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 28 01:18:38.485881 kubelet[2814]: E0128 01:18:38.485857 2814 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 28 01:18:38.485957 kubelet[2814]: E0128 01:18:38.485894 2814 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-31-26\" not found" Jan 28 01:18:38.553335 systemd[1]: Created slice kubepods-burstable-pod3704ffb520fb951fa414a30bee049c5c.slice - libcontainer container kubepods-burstable-pod3704ffb520fb951fa414a30bee049c5c.slice. Jan 28 01:18:38.560737 kubelet[2814]: E0128 01:18:38.560621 2814 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-31-26\" not found" node="ip-172-31-31-26" Jan 28 01:18:38.568423 systemd[1]: Created slice kubepods-burstable-pod34df33dec0dc066dc3751b0a759f882f.slice - libcontainer container kubepods-burstable-pod34df33dec0dc066dc3751b0a759f882f.slice. Jan 28 01:18:38.571654 kubelet[2814]: E0128 01:18:38.571631 2814 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-31-26\" not found" node="ip-172-31-31-26" Jan 28 01:18:38.574501 systemd[1]: Created slice kubepods-burstable-pod985f6ee425c680a0de440ff2e74a684c.slice - libcontainer container kubepods-burstable-pod985f6ee425c680a0de440ff2e74a684c.slice. Jan 28 01:18:38.577162 kubelet[2814]: E0128 01:18:38.577113 2814 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-31-26\" not found" node="ip-172-31-31-26" Jan 28 01:18:38.583915 kubelet[2814]: I0128 01:18:38.583893 2814 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-31-26" Jan 28 01:18:38.584295 kubelet[2814]: E0128 01:18:38.584250 2814 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.31.26:6443/api/v1/nodes\": dial tcp 172.31.31.26:6443: connect: connection refused" node="ip-172-31-31-26" Jan 28 01:18:38.592449 kubelet[2814]: I0128 01:18:38.592380 2814 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3704ffb520fb951fa414a30bee049c5c-ca-certs\") pod \"kube-apiserver-ip-172-31-31-26\" (UID: \"3704ffb520fb951fa414a30bee049c5c\") " pod="kube-system/kube-apiserver-ip-172-31-31-26" Jan 28 01:18:38.592449 kubelet[2814]: I0128 01:18:38.592416 2814 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3704ffb520fb951fa414a30bee049c5c-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-31-26\" (UID: \"3704ffb520fb951fa414a30bee049c5c\") " pod="kube-system/kube-apiserver-ip-172-31-31-26" Jan 28 01:18:38.592449 kubelet[2814]: I0128 01:18:38.592437 2814 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/34df33dec0dc066dc3751b0a759f882f-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-31-26\" (UID: \"34df33dec0dc066dc3751b0a759f882f\") " pod="kube-system/kube-controller-manager-ip-172-31-31-26" Jan 28 01:18:38.592449 kubelet[2814]: I0128 01:18:38.592457 2814 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/34df33dec0dc066dc3751b0a759f882f-k8s-certs\") pod \"kube-controller-manager-ip-172-31-31-26\" (UID: \"34df33dec0dc066dc3751b0a759f882f\") " 
pod="kube-system/kube-controller-manager-ip-172-31-31-26" Jan 28 01:18:38.595307 kubelet[2814]: I0128 01:18:38.592472 2814 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/34df33dec0dc066dc3751b0a759f882f-kubeconfig\") pod \"kube-controller-manager-ip-172-31-31-26\" (UID: \"34df33dec0dc066dc3751b0a759f882f\") " pod="kube-system/kube-controller-manager-ip-172-31-31-26" Jan 28 01:18:38.595307 kubelet[2814]: I0128 01:18:38.592487 2814 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/34df33dec0dc066dc3751b0a759f882f-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-31-26\" (UID: \"34df33dec0dc066dc3751b0a759f882f\") " pod="kube-system/kube-controller-manager-ip-172-31-31-26" Jan 28 01:18:38.595307 kubelet[2814]: I0128 01:18:38.592507 2814 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/985f6ee425c680a0de440ff2e74a684c-kubeconfig\") pod \"kube-scheduler-ip-172-31-31-26\" (UID: \"985f6ee425c680a0de440ff2e74a684c\") " pod="kube-system/kube-scheduler-ip-172-31-31-26" Jan 28 01:18:38.595307 kubelet[2814]: I0128 01:18:38.592521 2814 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3704ffb520fb951fa414a30bee049c5c-k8s-certs\") pod \"kube-apiserver-ip-172-31-31-26\" (UID: \"3704ffb520fb951fa414a30bee049c5c\") " pod="kube-system/kube-apiserver-ip-172-31-31-26" Jan 28 01:18:38.595307 kubelet[2814]: I0128 01:18:38.592535 2814 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/34df33dec0dc066dc3751b0a759f882f-ca-certs\") pod \"kube-controller-manager-ip-172-31-31-26\" (UID: \"34df33dec0dc066dc3751b0a759f882f\") " pod="kube-system/kube-controller-manager-ip-172-31-31-26" Jan 28 01:18:38.597848 kubelet[2814]: E0128 01:18:38.597817 2814 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.31.26:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-31-26?timeout=10s\": dial tcp 172.31.31.26:6443: connect: connection refused" interval="400ms" Jan 28 01:18:38.786942 kubelet[2814]: I0128 01:18:38.786760 2814 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-31-26" Jan 28 01:18:38.788168 kubelet[2814]: E0128 01:18:38.788110 2814 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.31.26:6443/api/v1/nodes\": dial tcp 172.31.31.26:6443: connect: connection refused" node="ip-172-31-31-26" Jan 28 01:18:38.865545 containerd[1849]: time="2026-01-28T01:18:38.865473123Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-31-26,Uid:3704ffb520fb951fa414a30bee049c5c,Namespace:kube-system,Attempt:0,}" Jan 28 01:18:38.873795 containerd[1849]: time="2026-01-28T01:18:38.873307413Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-31-26,Uid:34df33dec0dc066dc3751b0a759f882f,Namespace:kube-system,Attempt:0,}" Jan 28 01:18:38.878390 containerd[1849]: time="2026-01-28T01:18:38.878337121Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-ip-172-31-31-26,Uid:985f6ee425c680a0de440ff2e74a684c,Namespace:kube-system,Attempt:0,}" Jan 28 01:18:38.999781 kubelet[2814]: E0128 01:18:38.999297 2814 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.31.26:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-31-26?timeout=10s\": dial tcp 172.31.31.26:6443: connect: connection refused" interval="800ms" Jan 28 01:18:39.033617 containerd[1849]: time="2026-01-28T01:18:39.033558847Z" level=info msg="connecting to shim be908f3cd1b03f482cffcf3092c90d5054ebe13ef117b1dbc6e66ca5d848c588" address="unix:///run/containerd/s/78f097f1733ab74090530c028c3792fcf76c86e2718ec9bb0c9e00ce327e11f8" namespace=k8s.io protocol=ttrpc version=3 Jan 28 01:18:39.034416 containerd[1849]: time="2026-01-28T01:18:39.034319643Z" level=info msg="connecting to shim ec596095cbd5297be3aceed892fb82066e94a4a02b4ff0d6d5d88608bbc8b86c" address="unix:///run/containerd/s/17182626333e48eb67c6b2daf47ac01a06ea409ca118a79d9c19be5a3f636c5f" namespace=k8s.io protocol=ttrpc version=3 Jan 28 01:18:39.037421 containerd[1849]: time="2026-01-28T01:18:39.037320709Z" level=info msg="connecting to shim 5c697e7fc710018cecb07cd6594cde0b67adc367f10f885baf1728295be6bb8f" address="unix:///run/containerd/s/78dc605dbf2df4b37b9dbd034e5985dc3fa3a0b1bbc6a9938b52f9f988d1c149" namespace=k8s.io protocol=ttrpc version=3 Jan 28 01:18:39.151035 systemd[1]: Started cri-containerd-5c697e7fc710018cecb07cd6594cde0b67adc367f10f885baf1728295be6bb8f.scope - libcontainer container 5c697e7fc710018cecb07cd6594cde0b67adc367f10f885baf1728295be6bb8f. Jan 28 01:18:39.153502 systemd[1]: Started cri-containerd-be908f3cd1b03f482cffcf3092c90d5054ebe13ef117b1dbc6e66ca5d848c588.scope - libcontainer container be908f3cd1b03f482cffcf3092c90d5054ebe13ef117b1dbc6e66ca5d848c588. Jan 28 01:18:39.156122 systemd[1]: Started cri-containerd-ec596095cbd5297be3aceed892fb82066e94a4a02b4ff0d6d5d88608bbc8b86c.scope - libcontainer container ec596095cbd5297be3aceed892fb82066e94a4a02b4ff0d6d5d88608bbc8b86c. 
Jan 28 01:18:39.178000 audit: BPF prog-id=90 op=LOAD Jan 28 01:18:39.179000 audit: BPF prog-id=91 op=LOAD Jan 28 01:18:39.179000 audit[2898]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00021e238 a2=98 a3=0 items=0 ppid=2877 pid=2898 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:18:39.179000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6563353936303935636264353239376265336163656564383932666238 Jan 28 01:18:39.179000 audit: BPF prog-id=91 op=UNLOAD Jan 28 01:18:39.179000 audit[2898]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2877 pid=2898 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:18:39.179000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6563353936303935636264353239376265336163656564383932666238 Jan 28 01:18:39.179000 audit: BPF prog-id=92 op=LOAD Jan 28 01:18:39.179000 audit[2898]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00021e488 a2=98 a3=0 items=0 ppid=2877 pid=2898 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:18:39.179000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6563353936303935636264353239376265336163656564383932666238 Jan 28 01:18:39.179000 audit: BPF prog-id=93 op=LOAD Jan 28 01:18:39.179000 audit[2898]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00021e218 a2=98 a3=0 items=0 ppid=2877 pid=2898 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:18:39.179000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6563353936303935636264353239376265336163656564383932666238 Jan 28 01:18:39.179000 audit: BPF prog-id=93 op=UNLOAD Jan 28 01:18:39.179000 audit[2898]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2877 pid=2898 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:18:39.179000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6563353936303935636264353239376265336163656564383932666238 Jan 28 01:18:39.179000 audit: BPF prog-id=92 op=UNLOAD Jan 28 01:18:39.179000 audit[2898]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2877 pid=2898 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:18:39.179000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6563353936303935636264353239376265336163656564383932666238 Jan 28 01:18:39.179000 audit: BPF prog-id=94 op=LOAD Jan 28 01:18:39.179000 audit[2898]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00021e6e8 a2=98 a3=0 items=0 ppid=2877 pid=2898 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:18:39.179000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6563353936303935636264353239376265336163656564383932666238 Jan 28 01:18:39.188000 audit: BPF prog-id=95 op=LOAD Jan 28 01:18:39.190000 audit: BPF prog-id=96 op=LOAD Jan 28 01:18:39.190000 audit[2911]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2873 pid=2911 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:18:39.190000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6265393038663363643162303366343832636666636633303932633930 Jan 28 01:18:39.190000 audit: BPF prog-id=96 op=UNLOAD Jan 28 01:18:39.190000 audit[2911]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2873 pid=2911 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:18:39.190000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6265393038663363643162303366343832636666636633303932633930 Jan 28 01:18:39.191000 audit: BPF prog-id=97 op=LOAD Jan 28 01:18:39.191000 audit[2911]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2873 pid=2911 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:18:39.191000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6265393038663363643162303366343832636666636633303932633930 Jan 28 01:18:39.191000 audit: BPF prog-id=98 op=LOAD Jan 28 01:18:39.191000 audit[2911]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=2873 pid=2911 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:18:39.191000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6265393038663363643162303366343832636666636633303932633930 Jan 28 01:18:39.191000 audit: BPF prog-id=98 op=UNLOAD Jan 28 01:18:39.191000 audit[2911]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2873 pid=2911 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:18:39.191000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6265393038663363643162303366343832636666636633303932633930 Jan 28 01:18:39.191000 audit: BPF prog-id=97 op=UNLOAD Jan 28 01:18:39.191000 audit[2911]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2873 pid=2911 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:18:39.191000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6265393038663363643162303366343832636666636633303932633930 Jan 28 01:18:39.193820 kubelet[2814]: I0128 01:18:39.193490 2814 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-31-26" Jan 28 01:18:39.193000 audit: BPF prog-id=99 op=LOAD Jan 28 01:18:39.194577 kubelet[2814]: E0128 01:18:39.194546 2814 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.31.26:6443/api/v1/nodes\": dial tcp 172.31.31.26:6443: connect: connection refused" node="ip-172-31-31-26" Jan 28 01:18:39.193000 audit[2911]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=2873 pid=2911 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:18:39.193000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6265393038663363643162303366343832636666636633303932633930 Jan 28 01:18:39.213000 audit: BPF prog-id=100 op=LOAD Jan 28 01:18:39.214000 audit: BPF prog-id=101 op=LOAD Jan 28 01:18:39.214000 audit[2909]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8238 a2=98 a3=0 items=0 ppid=2869 pid=2909 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:18:39.214000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3563363937653766633731303031386365636230376364363539346364 Jan 28 01:18:39.214000 audit: BPF prog-id=101 op=UNLOAD Jan 28 01:18:39.214000 audit[2909]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2869 pid=2909 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:18:39.214000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3563363937653766633731303031386365636230376364363539346364 Jan 28 01:18:39.215000 audit: BPF prog-id=102 op=LOAD Jan 28 01:18:39.215000 audit[2909]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=2869 pid=2909 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:18:39.215000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3563363937653766633731303031386365636230376364363539346364 Jan 28 01:18:39.215000 audit: BPF prog-id=103 op=LOAD Jan 28 01:18:39.215000 audit[2909]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=2869 pid=2909 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:18:39.215000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3563363937653766633731303031386365636230376364363539346364 Jan 28 01:18:39.215000 audit: BPF prog-id=103 op=UNLOAD Jan 28 01:18:39.215000 audit[2909]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2869 pid=2909 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:18:39.215000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3563363937653766633731303031386365636230376364363539346364 Jan 28 01:18:39.215000 audit: BPF prog-id=102 op=UNLOAD Jan 28 01:18:39.215000 audit[2909]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2869 pid=2909 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:18:39.215000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3563363937653766633731303031386365636230376364363539346364 Jan 28 01:18:39.216000 audit: BPF prog-id=104 op=LOAD Jan 28 01:18:39.216000 audit[2909]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 ppid=2869 pid=2909 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:18:39.216000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3563363937653766633731303031386365636230376364363539346364 Jan 28 01:18:39.242867 kubelet[2814]: E0128 01:18:39.242803 2814 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.31.26:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.31.26:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 28 01:18:39.266505 containerd[1849]: time="2026-01-28T01:18:39.266425786Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-31-26,Uid:3704ffb520fb951fa414a30bee049c5c,Namespace:kube-system,Attempt:0,} returns sandbox id \"ec596095cbd5297be3aceed892fb82066e94a4a02b4ff0d6d5d88608bbc8b86c\"" Jan 28 01:18:39.280203 containerd[1849]: time="2026-01-28T01:18:39.280156848Z" level=info msg="CreateContainer within sandbox \"ec596095cbd5297be3aceed892fb82066e94a4a02b4ff0d6d5d88608bbc8b86c\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 28 01:18:39.289841 containerd[1849]: time="2026-01-28T01:18:39.289701804Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-31-26,Uid:34df33dec0dc066dc3751b0a759f882f,Namespace:kube-system,Attempt:0,} returns sandbox id \"be908f3cd1b03f482cffcf3092c90d5054ebe13ef117b1dbc6e66ca5d848c588\"" Jan 28 01:18:39.300117 containerd[1849]: time="2026-01-28T01:18:39.300049335Z" level=info msg="CreateContainer within sandbox \"be908f3cd1b03f482cffcf3092c90d5054ebe13ef117b1dbc6e66ca5d848c588\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 28 01:18:39.306367 containerd[1849]: time="2026-01-28T01:18:39.306243599Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-31-26,Uid:985f6ee425c680a0de440ff2e74a684c,Namespace:kube-system,Attempt:0,} returns sandbox id \"5c697e7fc710018cecb07cd6594cde0b67adc367f10f885baf1728295be6bb8f\"" Jan 28 01:18:39.313314 containerd[1849]: time="2026-01-28T01:18:39.313280809Z" level=info msg="CreateContainer within sandbox \"5c697e7fc710018cecb07cd6594cde0b67adc367f10f885baf1728295be6bb8f\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 28 01:18:39.340178 containerd[1849]: time="2026-01-28T01:18:39.339263554Z" level=info msg="Container a613b8551af9b0fc562aaf9f77513d2176c42591cc38b6675f7e94c55d8a98de: CDI devices from CRI Config.CDIDevices: []" Jan 28 01:18:39.340178 containerd[1849]: time="2026-01-28T01:18:39.339311936Z" level=info msg="Container da3f668fae05c09b96289d31e7028e7e41438746f3a57bdaa1806aee6f3a02e5: CDI devices from CRI Config.CDIDevices: 
[]" Jan 28 01:18:39.340178 containerd[1849]: time="2026-01-28T01:18:39.339844884Z" level=info msg="Container 75ae5586186a29acb3fa5face99767761926623b2b06f45e8818c6fd2915b38c: CDI devices from CRI Config.CDIDevices: []" Jan 28 01:18:39.350444 containerd[1849]: time="2026-01-28T01:18:39.350393212Z" level=info msg="CreateContainer within sandbox \"be908f3cd1b03f482cffcf3092c90d5054ebe13ef117b1dbc6e66ca5d848c588\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"75ae5586186a29acb3fa5face99767761926623b2b06f45e8818c6fd2915b38c\"" Jan 28 01:18:39.351213 containerd[1849]: time="2026-01-28T01:18:39.351186133Z" level=info msg="StartContainer for \"75ae5586186a29acb3fa5face99767761926623b2b06f45e8818c6fd2915b38c\"" Jan 28 01:18:39.356606 containerd[1849]: time="2026-01-28T01:18:39.356541159Z" level=info msg="connecting to shim 75ae5586186a29acb3fa5face99767761926623b2b06f45e8818c6fd2915b38c" address="unix:///run/containerd/s/78f097f1733ab74090530c028c3792fcf76c86e2718ec9bb0c9e00ce327e11f8" protocol=ttrpc version=3 Jan 28 01:18:39.366656 containerd[1849]: time="2026-01-28T01:18:39.366606738Z" level=info msg="CreateContainer within sandbox \"ec596095cbd5297be3aceed892fb82066e94a4a02b4ff0d6d5d88608bbc8b86c\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"da3f668fae05c09b96289d31e7028e7e41438746f3a57bdaa1806aee6f3a02e5\"" Jan 28 01:18:39.367310 containerd[1849]: time="2026-01-28T01:18:39.367287957Z" level=info msg="StartContainer for \"da3f668fae05c09b96289d31e7028e7e41438746f3a57bdaa1806aee6f3a02e5\"" Jan 28 01:18:39.368776 containerd[1849]: time="2026-01-28T01:18:39.368391992Z" level=info msg="connecting to shim da3f668fae05c09b96289d31e7028e7e41438746f3a57bdaa1806aee6f3a02e5" address="unix:///run/containerd/s/17182626333e48eb67c6b2daf47ac01a06ea409ca118a79d9c19be5a3f636c5f" protocol=ttrpc version=3 Jan 28 01:18:39.371741 containerd[1849]: time="2026-01-28T01:18:39.370580569Z" level=info msg="CreateContainer within sandbox \"5c697e7fc710018cecb07cd6594cde0b67adc367f10f885baf1728295be6bb8f\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"a613b8551af9b0fc562aaf9f77513d2176c42591cc38b6675f7e94c55d8a98de\"" Jan 28 01:18:39.374531 containerd[1849]: time="2026-01-28T01:18:39.373628967Z" level=info msg="StartContainer for \"a613b8551af9b0fc562aaf9f77513d2176c42591cc38b6675f7e94c55d8a98de\"" Jan 28 01:18:39.375950 containerd[1849]: time="2026-01-28T01:18:39.375860985Z" level=info msg="connecting to shim a613b8551af9b0fc562aaf9f77513d2176c42591cc38b6675f7e94c55d8a98de" address="unix:///run/containerd/s/78dc605dbf2df4b37b9dbd034e5985dc3fa3a0b1bbc6a9938b52f9f988d1c149" protocol=ttrpc version=3 Jan 28 01:18:39.380941 systemd[1]: Started cri-containerd-75ae5586186a29acb3fa5face99767761926623b2b06f45e8818c6fd2915b38c.scope - libcontainer container 75ae5586186a29acb3fa5face99767761926623b2b06f45e8818c6fd2915b38c. Jan 28 01:18:39.400131 systemd[1]: Started cri-containerd-da3f668fae05c09b96289d31e7028e7e41438746f3a57bdaa1806aee6f3a02e5.scope - libcontainer container da3f668fae05c09b96289d31e7028e7e41438746f3a57bdaa1806aee6f3a02e5. Jan 28 01:18:39.408974 systemd[1]: Started cri-containerd-a613b8551af9b0fc562aaf9f77513d2176c42591cc38b6675f7e94c55d8a98de.scope - libcontainer container a613b8551af9b0fc562aaf9f77513d2176c42591cc38b6675f7e94c55d8a98de. 
Jan 28 01:18:39.413000 audit: BPF prog-id=105 op=LOAD Jan 28 01:18:39.415000 audit: BPF prog-id=106 op=LOAD Jan 28 01:18:39.415000 audit[2992]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=2873 pid=2992 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:18:39.415000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3735616535353836313836613239616362336661356661636539393736 Jan 28 01:18:39.416000 audit: BPF prog-id=106 op=UNLOAD Jan 28 01:18:39.416000 audit[2992]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2873 pid=2992 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:18:39.416000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3735616535353836313836613239616362336661356661636539393736 Jan 28 01:18:39.416000 audit: BPF prog-id=107 op=LOAD Jan 28 01:18:39.416000 audit[2992]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=2873 pid=2992 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:18:39.416000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3735616535353836313836613239616362336661356661636539393736 Jan 28 01:18:39.417000 audit: BPF prog-id=108 op=LOAD Jan 28 01:18:39.417000 audit[2992]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=2873 pid=2992 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:18:39.417000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3735616535353836313836613239616362336661356661636539393736 Jan 28 01:18:39.417000 audit: BPF prog-id=108 op=UNLOAD Jan 28 01:18:39.417000 audit[2992]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2873 pid=2992 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:18:39.417000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3735616535353836313836613239616362336661356661636539393736 Jan 28 01:18:39.417000 audit: BPF prog-id=107 op=UNLOAD Jan 28 01:18:39.417000 audit[2992]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2873 pid=2992 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:18:39.417000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3735616535353836313836613239616362336661356661636539393736 Jan 28 01:18:39.418000 audit: BPF prog-id=109 op=LOAD Jan 28 01:18:39.417000 audit: BPF prog-id=110 op=LOAD Jan 28 01:18:39.417000 audit[2992]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=2873 pid=2992 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:18:39.417000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3735616535353836313836613239616362336661356661636539393736 Jan 28 01:18:39.419000 audit: BPF prog-id=111 op=LOAD Jan 28 01:18:39.419000 audit[3003]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=2877 pid=3003 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:18:39.419000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6461336636363866616530356330396239363238396433316537303238 Jan 28 01:18:39.419000 audit: BPF prog-id=111 op=UNLOAD Jan 28 01:18:39.419000 audit[3003]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2877 pid=3003 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:18:39.419000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6461336636363866616530356330396239363238396433316537303238 Jan 28 01:18:39.419000 audit: BPF prog-id=112 op=LOAD Jan 28 01:18:39.419000 audit[3003]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=2877 pid=3003 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:18:39.419000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6461336636363866616530356330396239363238396433316537303238 Jan 28 01:18:39.419000 audit: BPF prog-id=113 op=LOAD Jan 28 01:18:39.419000 audit[3003]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=2877 pid=3003 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:18:39.419000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6461336636363866616530356330396239363238396433316537303238 Jan 28 01:18:39.419000 audit: BPF prog-id=113 op=UNLOAD Jan 28 01:18:39.419000 audit[3003]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2877 pid=3003 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:18:39.419000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6461336636363866616530356330396239363238396433316537303238 Jan 28 01:18:39.419000 audit: BPF prog-id=112 op=UNLOAD Jan 28 01:18:39.419000 audit[3003]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2877 pid=3003 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:18:39.419000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6461336636363866616530356330396239363238396433316537303238 Jan 28 01:18:39.419000 audit: BPF prog-id=114 op=LOAD Jan 28 01:18:39.419000 audit[3003]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=2877 pid=3003 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:18:39.419000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6461336636363866616530356330396239363238396433316537303238 Jan 28 01:18:39.427000 audit: BPF prog-id=115 op=LOAD Jan 28 01:18:39.429000 audit: BPF prog-id=116 op=LOAD Jan 28 01:18:39.429000 audit[3004]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2869 pid=3004 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:18:39.429000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136313362383535316166396230666335363261616639663737353133 Jan 28 01:18:39.429000 audit: BPF prog-id=116 op=UNLOAD Jan 28 01:18:39.429000 audit[3004]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2869 pid=3004 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" 
exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:18:39.429000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136313362383535316166396230666335363261616639663737353133 Jan 28 01:18:39.430000 audit: BPF prog-id=117 op=LOAD Jan 28 01:18:39.430000 audit[3004]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2869 pid=3004 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:18:39.430000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136313362383535316166396230666335363261616639663737353133 Jan 28 01:18:39.430000 audit: BPF prog-id=118 op=LOAD Jan 28 01:18:39.430000 audit[3004]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2869 pid=3004 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:18:39.430000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136313362383535316166396230666335363261616639663737353133 Jan 28 01:18:39.430000 audit: BPF prog-id=118 op=UNLOAD Jan 28 01:18:39.430000 audit[3004]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2869 pid=3004 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:18:39.430000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136313362383535316166396230666335363261616639663737353133 Jan 28 01:18:39.430000 audit: BPF prog-id=117 op=UNLOAD Jan 28 01:18:39.430000 audit[3004]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2869 pid=3004 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:18:39.430000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136313362383535316166396230666335363261616639663737353133 Jan 28 01:18:39.430000 audit: BPF prog-id=119 op=LOAD Jan 28 01:18:39.430000 audit[3004]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2869 pid=3004 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:18:39.430000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136313362383535316166396230666335363261616639663737353133 Jan 28 01:18:39.495718 containerd[1849]: time="2026-01-28T01:18:39.495635341Z" level=info msg="StartContainer for \"75ae5586186a29acb3fa5face99767761926623b2b06f45e8818c6fd2915b38c\" returns successfully" Jan 28 01:18:39.496430 containerd[1849]: time="2026-01-28T01:18:39.495792093Z" level=info msg="StartContainer for \"da3f668fae05c09b96289d31e7028e7e41438746f3a57bdaa1806aee6f3a02e5\" returns successfully" Jan 28 01:18:39.513594 containerd[1849]: time="2026-01-28T01:18:39.513554651Z" level=info msg="StartContainer for \"a613b8551af9b0fc562aaf9f77513d2176c42591cc38b6675f7e94c55d8a98de\" returns successfully" Jan 28 01:18:39.727972 kubelet[2814]: E0128 01:18:39.727926 2814 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.31.26:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-31-26&limit=500&resourceVersion=0\": dial tcp 172.31.31.26:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 28 01:18:39.795849 kubelet[2814]: E0128 01:18:39.793396 2814 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.31.26:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.31.26:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 28 01:18:39.800038 kubelet[2814]: E0128 01:18:39.799993 2814 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.31.26:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-31-26?timeout=10s\": dial tcp 172.31.31.26:6443: connect: connection refused" interval="1.6s" Jan 28 01:18:39.979216 kubelet[2814]: E0128 01:18:39.978860 2814 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.31.26:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.31.26:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 28 01:18:39.997269 kubelet[2814]: I0128 01:18:39.997241 2814 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-31-26" Jan 28 01:18:39.997609 kubelet[2814]: E0128 01:18:39.997581 2814 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.31.26:6443/api/v1/nodes\": dial tcp 172.31.31.26:6443: connect: connection refused" node="ip-172-31-31-26" Jan 28 01:18:40.097998 kubelet[2814]: E0128 01:18:40.097870 2814 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.31.26:6443/api/v1/namespaces/default/events\": dial tcp 172.31.31.26:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-31-26.188ec047d5a684db default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-31-26,UID:ip-172-31-31-26,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-31-26,},FirstTimestamp:2026-01-28 01:18:38.362854619 +0000 UTC m=+0.666802309,LastTimestamp:2026-01-28 
01:18:38.362854619 +0000 UTC m=+0.666802309,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-31-26,}" Jan 28 01:18:40.373983 kubelet[2814]: E0128 01:18:40.373872 2814 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.31.26:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.31.26:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 28 01:18:40.460311 kubelet[2814]: E0128 01:18:40.460281 2814 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-31-26\" not found" node="ip-172-31-31-26" Jan 28 01:18:40.462530 kubelet[2814]: E0128 01:18:40.462500 2814 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-31-26\" not found" node="ip-172-31-31-26" Jan 28 01:18:40.468586 kubelet[2814]: E0128 01:18:40.468552 2814 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-31-26\" not found" node="ip-172-31-31-26" Jan 28 01:18:41.472905 kubelet[2814]: E0128 01:18:41.472700 2814 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-31-26\" not found" node="ip-172-31-31-26" Jan 28 01:18:41.475179 kubelet[2814]: E0128 01:18:41.474931 2814 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-31-26\" not found" node="ip-172-31-31-26" Jan 28 01:18:41.476425 kubelet[2814]: E0128 01:18:41.476185 2814 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-31-26\" not found" node="ip-172-31-31-26" Jan 28 01:18:41.600333 kubelet[2814]: I0128 01:18:41.599499 2814 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-31-26" Jan 28 01:18:42.475043 kubelet[2814]: E0128 01:18:42.475017 2814 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-31-26\" not found" node="ip-172-31-31-26" Jan 28 01:18:42.476031 kubelet[2814]: E0128 01:18:42.475695 2814 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-31-26\" not found" node="ip-172-31-31-26" Jan 28 01:18:42.824552 kubelet[2814]: E0128 01:18:42.824438 2814 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-31-26\" not found" node="ip-172-31-31-26" Jan 28 01:18:43.070013 kubelet[2814]: I0128 01:18:43.069941 2814 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-31-26" Jan 28 01:18:43.070013 kubelet[2814]: E0128 01:18:43.069984 2814 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ip-172-31-31-26\": node \"ip-172-31-31-26\" not found" Jan 28 01:18:43.072608 kubelet[2814]: I0128 01:18:43.072572 2814 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-31-26" Jan 28 01:18:43.106305 kubelet[2814]: E0128 01:18:43.106167 2814 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-31-26\" is forbidden: no PriorityClass with name system-node-critical 
was found" pod="kube-system/kube-scheduler-ip-172-31-31-26" Jan 28 01:18:43.106305 kubelet[2814]: I0128 01:18:43.106204 2814 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-31-26" Jan 28 01:18:43.109444 kubelet[2814]: E0128 01:18:43.109305 2814 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-31-26\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-31-26" Jan 28 01:18:43.109444 kubelet[2814]: I0128 01:18:43.109335 2814 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-31-26" Jan 28 01:18:43.115669 kubelet[2814]: E0128 01:18:43.115591 2814 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-31-26\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-31-26" Jan 28 01:18:43.360197 kubelet[2814]: I0128 01:18:43.359906 2814 apiserver.go:52] "Watching apiserver" Jan 28 01:18:43.383262 kubelet[2814]: I0128 01:18:43.383216 2814 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 28 01:18:45.165372 systemd[1]: Reload requested from client PID 3095 ('systemctl') (unit session-8.scope)... Jan 28 01:18:45.165393 systemd[1]: Reloading... Jan 28 01:18:45.304860 zram_generator::config[3143]: No configuration found. Jan 28 01:18:45.575406 systemd[1]: Reloading finished in 409 ms. Jan 28 01:18:45.603998 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 01:18:45.617237 systemd[1]: kubelet.service: Deactivated successfully. Jan 28 01:18:45.617648 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 01:18:45.620857 kernel: kauditd_printk_skb: 202 callbacks suppressed Jan 28 01:18:45.620933 kernel: audit: type=1131 audit(1769563125.617:411): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:18:45.617000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:18:45.618825 systemd[1]: kubelet.service: Consumed 1.079s CPU time, 129M memory peak. Jan 28 01:18:45.622642 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 28 01:18:45.624803 kernel: audit: type=1334 audit(1769563125.622:412): prog-id=120 op=LOAD Jan 28 01:18:45.622000 audit: BPF prog-id=120 op=LOAD Jan 28 01:18:45.622000 audit: BPF prog-id=70 op=UNLOAD Jan 28 01:18:45.627865 kernel: audit: type=1334 audit(1769563125.622:413): prog-id=70 op=UNLOAD Jan 28 01:18:45.627933 kernel: audit: type=1334 audit(1769563125.622:414): prog-id=121 op=LOAD Jan 28 01:18:45.622000 audit: BPF prog-id=121 op=LOAD Jan 28 01:18:45.622000 audit: BPF prog-id=122 op=LOAD Jan 28 01:18:45.628928 kernel: audit: type=1334 audit(1769563125.622:415): prog-id=122 op=LOAD Jan 28 01:18:45.622000 audit: BPF prog-id=71 op=UNLOAD Jan 28 01:18:45.630108 kernel: audit: type=1334 audit(1769563125.622:416): prog-id=71 op=UNLOAD Jan 28 01:18:45.631200 kernel: audit: type=1334 audit(1769563125.622:417): prog-id=72 op=UNLOAD Jan 28 01:18:45.622000 audit: BPF prog-id=72 op=UNLOAD Jan 28 01:18:45.624000 audit: BPF prog-id=123 op=LOAD Jan 28 01:18:45.633783 kernel: audit: type=1334 audit(1769563125.624:418): prog-id=123 op=LOAD Jan 28 01:18:45.624000 audit: BPF prog-id=76 op=UNLOAD Jan 28 01:18:45.638643 kernel: audit: type=1334 audit(1769563125.624:419): prog-id=76 op=UNLOAD Jan 28 01:18:45.638724 kernel: audit: type=1334 audit(1769563125.633:420): prog-id=124 op=LOAD Jan 28 01:18:45.633000 audit: BPF prog-id=124 op=LOAD Jan 28 01:18:45.633000 audit: BPF prog-id=77 op=UNLOAD Jan 28 01:18:45.634000 audit: BPF prog-id=125 op=LOAD Jan 28 01:18:45.634000 audit: BPF prog-id=126 op=LOAD Jan 28 01:18:45.634000 audit: BPF prog-id=78 op=UNLOAD Jan 28 01:18:45.634000 audit: BPF prog-id=79 op=UNLOAD Jan 28 01:18:45.636000 audit: BPF prog-id=127 op=LOAD Jan 28 01:18:45.636000 audit: BPF prog-id=86 op=UNLOAD Jan 28 01:18:45.637000 audit: BPF prog-id=128 op=LOAD Jan 28 01:18:45.637000 audit: BPF prog-id=129 op=LOAD Jan 28 01:18:45.637000 audit: BPF prog-id=87 op=UNLOAD Jan 28 01:18:45.637000 audit: BPF prog-id=88 op=UNLOAD Jan 28 01:18:45.638000 audit: BPF prog-id=130 op=LOAD Jan 28 01:18:45.638000 audit: BPF prog-id=89 op=UNLOAD Jan 28 01:18:45.640000 audit: BPF prog-id=131 op=LOAD Jan 28 01:18:45.640000 audit: BPF prog-id=73 op=UNLOAD Jan 28 01:18:45.643000 audit: BPF prog-id=132 op=LOAD Jan 28 01:18:45.643000 audit: BPF prog-id=133 op=LOAD Jan 28 01:18:45.643000 audit: BPF prog-id=74 op=UNLOAD Jan 28 01:18:45.643000 audit: BPF prog-id=75 op=UNLOAD Jan 28 01:18:45.644000 audit: BPF prog-id=134 op=LOAD Jan 28 01:18:45.644000 audit: BPF prog-id=83 op=UNLOAD Jan 28 01:18:45.644000 audit: BPF prog-id=135 op=LOAD Jan 28 01:18:45.644000 audit: BPF prog-id=136 op=LOAD Jan 28 01:18:45.644000 audit: BPF prog-id=84 op=UNLOAD Jan 28 01:18:45.644000 audit: BPF prog-id=85 op=UNLOAD Jan 28 01:18:45.646000 audit: BPF prog-id=137 op=LOAD Jan 28 01:18:45.646000 audit: BPF prog-id=80 op=UNLOAD Jan 28 01:18:45.646000 audit: BPF prog-id=138 op=LOAD Jan 28 01:18:45.646000 audit: BPF prog-id=139 op=LOAD Jan 28 01:18:45.646000 audit: BPF prog-id=81 op=UNLOAD Jan 28 01:18:45.646000 audit: BPF prog-id=82 op=UNLOAD Jan 28 01:18:45.951354 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 01:18:45.950000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:18:45.965132 (kubelet)[3202]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 28 01:18:46.043782 kubelet[3202]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 28 01:18:46.043782 kubelet[3202]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 28 01:18:46.043782 kubelet[3202]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 28 01:18:46.044341 kubelet[3202]: I0128 01:18:46.044293 3202 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 28 01:18:46.053731 kubelet[3202]: I0128 01:18:46.053693 3202 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jan 28 01:18:46.053731 kubelet[3202]: I0128 01:18:46.053723 3202 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 28 01:18:46.054045 kubelet[3202]: I0128 01:18:46.054020 3202 server.go:956] "Client rotation is on, will bootstrap in background" Jan 28 01:18:46.055558 kubelet[3202]: I0128 01:18:46.055529 3202 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jan 28 01:18:46.058408 kubelet[3202]: I0128 01:18:46.058378 3202 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 28 01:18:46.086072 kubelet[3202]: I0128 01:18:46.086025 3202 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 28 01:18:46.096181 kubelet[3202]: I0128 01:18:46.096149 3202 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 28 01:18:46.098211 kubelet[3202]: I0128 01:18:46.097697 3202 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 28 01:18:46.098363 kubelet[3202]: I0128 01:18:46.097943 3202 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-31-26","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 28 01:18:46.098363 kubelet[3202]: I0128 01:18:46.098305 3202 topology_manager.go:138] "Creating topology manager with none policy" Jan 28 01:18:46.098363 kubelet[3202]: I0128 01:18:46.098321 3202 container_manager_linux.go:303] "Creating device plugin manager" Jan 28 01:18:46.098569 kubelet[3202]: I0128 01:18:46.098377 3202 state_mem.go:36] "Initialized new in-memory state store" Jan 28 01:18:46.098569 kubelet[3202]: I0128 01:18:46.098564 3202 kubelet.go:480] "Attempting to sync node with API server" Jan 28 01:18:46.098644 kubelet[3202]: I0128 01:18:46.098580 3202 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 28 01:18:46.098644 kubelet[3202]: I0128 01:18:46.098610 3202 kubelet.go:386] "Adding apiserver pod source" Jan 28 01:18:46.098644 kubelet[3202]: I0128 01:18:46.098626 3202 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 28 01:18:46.100845 kubelet[3202]: I0128 01:18:46.100236 3202 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 28 01:18:46.102568 kubelet[3202]: I0128 01:18:46.101943 3202 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 28 01:18:46.110925 kubelet[3202]: I0128 01:18:46.110795 3202 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 28 01:18:46.110925 kubelet[3202]: I0128 01:18:46.110858 3202 server.go:1289] "Started kubelet" Jan 28 01:18:46.116104 kubelet[3202]: I0128 01:18:46.116061 3202 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 28 01:18:46.126559 kubelet[3202]: I0128 
01:18:46.126363 3202 server.go:317] "Adding debug handlers to kubelet server" Jan 28 01:18:46.127786 kubelet[3202]: I0128 01:18:46.127454 3202 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 28 01:18:46.127891 kubelet[3202]: I0128 01:18:46.127858 3202 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 28 01:18:46.144071 kubelet[3202]: I0128 01:18:46.142210 3202 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 28 01:18:46.148226 kubelet[3202]: I0128 01:18:46.148188 3202 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 28 01:18:46.157959 kubelet[3202]: I0128 01:18:46.157924 3202 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 28 01:18:46.160869 kubelet[3202]: I0128 01:18:46.160022 3202 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 28 01:18:46.161254 kubelet[3202]: I0128 01:18:46.161227 3202 reconciler.go:26] "Reconciler: start to sync state" Jan 28 01:18:46.165493 kubelet[3202]: I0128 01:18:46.165451 3202 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jan 28 01:18:46.171327 kubelet[3202]: E0128 01:18:46.171247 3202 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 28 01:18:46.174997 kubelet[3202]: I0128 01:18:46.174972 3202 factory.go:223] Registration of the systemd container factory successfully Jan 28 01:18:46.175101 kubelet[3202]: I0128 01:18:46.175082 3202 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 28 01:18:46.189792 kubelet[3202]: I0128 01:18:46.187888 3202 factory.go:223] Registration of the containerd container factory successfully Jan 28 01:18:46.203277 kubelet[3202]: I0128 01:18:46.200828 3202 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jan 28 01:18:46.203277 kubelet[3202]: I0128 01:18:46.200860 3202 status_manager.go:230] "Starting to sync pod status with apiserver" Jan 28 01:18:46.203277 kubelet[3202]: I0128 01:18:46.200886 3202 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jan 28 01:18:46.203277 kubelet[3202]: I0128 01:18:46.200896 3202 kubelet.go:2436] "Starting kubelet main sync loop" Jan 28 01:18:46.203277 kubelet[3202]: E0128 01:18:46.200960 3202 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 28 01:18:46.265054 kubelet[3202]: I0128 01:18:46.265034 3202 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 28 01:18:46.265299 kubelet[3202]: I0128 01:18:46.265283 3202 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 28 01:18:46.265403 kubelet[3202]: I0128 01:18:46.265394 3202 state_mem.go:36] "Initialized new in-memory state store" Jan 28 01:18:46.265631 kubelet[3202]: I0128 01:18:46.265619 3202 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 28 01:18:46.265719 kubelet[3202]: I0128 01:18:46.265696 3202 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 28 01:18:46.265811 kubelet[3202]: I0128 01:18:46.265803 3202 policy_none.go:49] "None policy: Start" Jan 28 01:18:46.265882 kubelet[3202]: I0128 01:18:46.265875 3202 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 28 01:18:46.265944 kubelet[3202]: I0128 01:18:46.265937 3202 state_mem.go:35] "Initializing new in-memory state store" Jan 28 01:18:46.266125 kubelet[3202]: I0128 01:18:46.266115 3202 state_mem.go:75] "Updated machine memory state" Jan 28 01:18:46.271893 kubelet[3202]: E0128 01:18:46.271872 3202 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 28 01:18:46.272302 kubelet[3202]: I0128 01:18:46.272287 3202 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 28 01:18:46.272419 kubelet[3202]: I0128 01:18:46.272388 3202 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 28 01:18:46.273313 kubelet[3202]: I0128 01:18:46.273297 3202 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 28 01:18:46.279990 kubelet[3202]: E0128 01:18:46.279432 3202 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 28 01:18:46.308328 kubelet[3202]: I0128 01:18:46.308300 3202 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-31-26" Jan 28 01:18:46.309514 kubelet[3202]: I0128 01:18:46.308693 3202 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-31-26" Jan 28 01:18:46.314024 kubelet[3202]: I0128 01:18:46.313722 3202 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-31-26" Jan 28 01:18:46.393341 kubelet[3202]: I0128 01:18:46.392785 3202 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-31-26" Jan 28 01:18:46.403234 kubelet[3202]: I0128 01:18:46.403061 3202 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-31-26" Jan 28 01:18:46.403435 kubelet[3202]: I0128 01:18:46.403398 3202 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-31-26" Jan 28 01:18:46.462720 kubelet[3202]: I0128 01:18:46.462603 3202 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3704ffb520fb951fa414a30bee049c5c-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-31-26\" (UID: \"3704ffb520fb951fa414a30bee049c5c\") " pod="kube-system/kube-apiserver-ip-172-31-31-26" Jan 28 01:18:46.462720 kubelet[3202]: I0128 01:18:46.462660 3202 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/34df33dec0dc066dc3751b0a759f882f-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-31-26\" (UID: \"34df33dec0dc066dc3751b0a759f882f\") " pod="kube-system/kube-controller-manager-ip-172-31-31-26" Jan 28 01:18:46.462720 kubelet[3202]: I0128 01:18:46.462688 3202 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/34df33dec0dc066dc3751b0a759f882f-k8s-certs\") pod \"kube-controller-manager-ip-172-31-31-26\" (UID: \"34df33dec0dc066dc3751b0a759f882f\") " pod="kube-system/kube-controller-manager-ip-172-31-31-26" Jan 28 01:18:46.462720 kubelet[3202]: I0128 01:18:46.462710 3202 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/985f6ee425c680a0de440ff2e74a684c-kubeconfig\") pod \"kube-scheduler-ip-172-31-31-26\" (UID: \"985f6ee425c680a0de440ff2e74a684c\") " pod="kube-system/kube-scheduler-ip-172-31-31-26" Jan 28 01:18:46.463724 kubelet[3202]: I0128 01:18:46.462734 3202 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3704ffb520fb951fa414a30bee049c5c-ca-certs\") pod \"kube-apiserver-ip-172-31-31-26\" (UID: \"3704ffb520fb951fa414a30bee049c5c\") " pod="kube-system/kube-apiserver-ip-172-31-31-26" Jan 28 01:18:46.463724 kubelet[3202]: I0128 01:18:46.462777 3202 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/34df33dec0dc066dc3751b0a759f882f-ca-certs\") pod \"kube-controller-manager-ip-172-31-31-26\" (UID: \"34df33dec0dc066dc3751b0a759f882f\") " pod="kube-system/kube-controller-manager-ip-172-31-31-26" Jan 28 01:18:46.463724 kubelet[3202]: I0128 01:18:46.462796 3202 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/34df33dec0dc066dc3751b0a759f882f-kubeconfig\") pod \"kube-controller-manager-ip-172-31-31-26\" (UID: \"34df33dec0dc066dc3751b0a759f882f\") " pod="kube-system/kube-controller-manager-ip-172-31-31-26" Jan 28 01:18:46.463724 kubelet[3202]: I0128 01:18:46.462818 3202 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/34df33dec0dc066dc3751b0a759f882f-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-31-26\" (UID: \"34df33dec0dc066dc3751b0a759f882f\") " pod="kube-system/kube-controller-manager-ip-172-31-31-26" Jan 28 01:18:46.463724 kubelet[3202]: I0128 01:18:46.463264 3202 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3704ffb520fb951fa414a30bee049c5c-k8s-certs\") pod \"kube-apiserver-ip-172-31-31-26\" (UID: \"3704ffb520fb951fa414a30bee049c5c\") " pod="kube-system/kube-apiserver-ip-172-31-31-26" Jan 28 01:18:47.111313 kubelet[3202]: I0128 01:18:47.110932 3202 apiserver.go:52] "Watching apiserver" Jan 28 01:18:47.161664 kubelet[3202]: I0128 01:18:47.161619 3202 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 28 01:18:47.244213 kubelet[3202]: I0128 01:18:47.244174 3202 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-31-26" Jan 28 01:18:47.246553 kubelet[3202]: I0128 01:18:47.246516 3202 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-31-26" Jan 28 01:18:47.252969 kubelet[3202]: E0128 01:18:47.252936 3202 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-31-26\" already exists" pod="kube-system/kube-scheduler-ip-172-31-31-26" Jan 28 01:18:47.259636 kubelet[3202]: E0128 01:18:47.259423 3202 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-31-26\" already exists" pod="kube-system/kube-apiserver-ip-172-31-31-26" Jan 28 01:18:47.302375 kubelet[3202]: I0128 01:18:47.302271 3202 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-31-26" podStartSLOduration=1.302181197 podStartE2EDuration="1.302181197s" podCreationTimestamp="2026-01-28 01:18:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 01:18:47.288815812 +0000 UTC m=+1.306702800" watchObservedRunningTime="2026-01-28 01:18:47.302181197 +0000 UTC m=+1.320068186" Jan 28 01:18:47.315768 kubelet[3202]: I0128 01:18:47.315570 3202 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-31-26" podStartSLOduration=1.3155492309999999 podStartE2EDuration="1.315549231s" podCreationTimestamp="2026-01-28 01:18:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 01:18:47.303097732 +0000 UTC m=+1.320984723" watchObservedRunningTime="2026-01-28 01:18:47.315549231 +0000 UTC m=+1.333436219" Jan 28 01:18:47.332382 kubelet[3202]: I0128 01:18:47.332019 3202 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-31-26" 
podStartSLOduration=1.33199789 podStartE2EDuration="1.33199789s" podCreationTimestamp="2026-01-28 01:18:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 01:18:47.316617952 +0000 UTC m=+1.334504943" watchObservedRunningTime="2026-01-28 01:18:47.33199789 +0000 UTC m=+1.349884881" Jan 28 01:18:48.763389 update_engine[1835]: I20260128 01:18:48.763298 1835 update_attempter.cc:509] Updating boot flags... Jan 28 01:18:51.803277 kubelet[3202]: I0128 01:18:51.803211 3202 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 28 01:18:51.821270 containerd[1849]: time="2026-01-28T01:18:51.821216342Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 28 01:18:51.821899 kubelet[3202]: I0128 01:18:51.821857 3202 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 28 01:18:52.513679 systemd[1]: Created slice kubepods-besteffort-pod4cce8295_21ac_427b_9e68_070492064630.slice - libcontainer container kubepods-besteffort-pod4cce8295_21ac_427b_9e68_070492064630.slice. Jan 28 01:18:52.601908 kubelet[3202]: I0128 01:18:52.601870 3202 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/4cce8295-21ac-427b-9e68-070492064630-kube-proxy\") pod \"kube-proxy-55h25\" (UID: \"4cce8295-21ac-427b-9e68-070492064630\") " pod="kube-system/kube-proxy-55h25" Jan 28 01:18:52.602303 kubelet[3202]: I0128 01:18:52.602269 3202 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4cce8295-21ac-427b-9e68-070492064630-lib-modules\") pod \"kube-proxy-55h25\" (UID: \"4cce8295-21ac-427b-9e68-070492064630\") " pod="kube-system/kube-proxy-55h25" Jan 28 01:18:52.602412 kubelet[3202]: I0128 01:18:52.602400 3202 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5tgtd\" (UniqueName: \"kubernetes.io/projected/4cce8295-21ac-427b-9e68-070492064630-kube-api-access-5tgtd\") pod \"kube-proxy-55h25\" (UID: \"4cce8295-21ac-427b-9e68-070492064630\") " pod="kube-system/kube-proxy-55h25" Jan 28 01:18:52.602553 kubelet[3202]: I0128 01:18:52.602542 3202 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4cce8295-21ac-427b-9e68-070492064630-xtables-lock\") pod \"kube-proxy-55h25\" (UID: \"4cce8295-21ac-427b-9e68-070492064630\") " pod="kube-system/kube-proxy-55h25" Jan 28 01:18:52.745883 systemd[1]: Created slice kubepods-besteffort-pod46f4cc39_4770_42f7_a369_5e43b14a884c.slice - libcontainer container kubepods-besteffort-pod46f4cc39_4770_42f7_a369_5e43b14a884c.slice. 
Jan 28 01:18:52.804126 kubelet[3202]: I0128 01:18:52.803966 3202 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rd7hd\" (UniqueName: \"kubernetes.io/projected/46f4cc39-4770-42f7-a369-5e43b14a884c-kube-api-access-rd7hd\") pod \"tigera-operator-7dcd859c48-t2k5j\" (UID: \"46f4cc39-4770-42f7-a369-5e43b14a884c\") " pod="tigera-operator/tigera-operator-7dcd859c48-t2k5j" Jan 28 01:18:52.804126 kubelet[3202]: I0128 01:18:52.804011 3202 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/46f4cc39-4770-42f7-a369-5e43b14a884c-var-lib-calico\") pod \"tigera-operator-7dcd859c48-t2k5j\" (UID: \"46f4cc39-4770-42f7-a369-5e43b14a884c\") " pod="tigera-operator/tigera-operator-7dcd859c48-t2k5j" Jan 28 01:18:52.820860 containerd[1849]: time="2026-01-28T01:18:52.820801773Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-55h25,Uid:4cce8295-21ac-427b-9e68-070492064630,Namespace:kube-system,Attempt:0,}" Jan 28 01:18:52.857766 containerd[1849]: time="2026-01-28T01:18:52.857643911Z" level=info msg="connecting to shim 5e74c8bad383b59a8b14bd53ed5c28f546a622c3643b27aa87242894ca3da7d8" address="unix:///run/containerd/s/2696c5fc224b014d242edaaae000d175373e13be77bc01389f73b37c256c987c" namespace=k8s.io protocol=ttrpc version=3 Jan 28 01:18:52.893041 systemd[1]: Started cri-containerd-5e74c8bad383b59a8b14bd53ed5c28f546a622c3643b27aa87242894ca3da7d8.scope - libcontainer container 5e74c8bad383b59a8b14bd53ed5c28f546a622c3643b27aa87242894ca3da7d8. Jan 28 01:18:52.907851 kernel: kauditd_printk_skb: 32 callbacks suppressed Jan 28 01:18:52.907953 kernel: audit: type=1334 audit(1769563132.904:453): prog-id=140 op=LOAD Jan 28 01:18:52.904000 audit: BPF prog-id=140 op=LOAD Jan 28 01:18:52.907000 audit: BPF prog-id=141 op=LOAD Jan 28 01:18:52.914528 kernel: audit: type=1334 audit(1769563132.907:454): prog-id=141 op=LOAD Jan 28 01:18:52.914701 kernel: audit: type=1300 audit(1769563132.907:454): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=3357 pid=3370 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:18:52.907000 audit[3370]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=3357 pid=3370 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:18:52.907000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3565373463386261643338336235396138623134626435336564356332 Jan 28 01:18:52.921787 kernel: audit: type=1327 audit(1769563132.907:454): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3565373463386261643338336235396138623134626435336564356332 Jan 28 01:18:52.907000 audit: BPF prog-id=141 op=UNLOAD Jan 28 01:18:52.929192 kernel: audit: type=1334 audit(1769563132.907:455): prog-id=141 op=UNLOAD Jan 28 01:18:52.929300 kernel: audit: type=1300 audit(1769563132.907:455): 
arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3357 pid=3370 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:18:52.907000 audit[3370]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3357 pid=3370 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:18:52.934259 kernel: audit: type=1327 audit(1769563132.907:455): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3565373463386261643338336235396138623134626435336564356332 Jan 28 01:18:52.907000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3565373463386261643338336235396138623134626435336564356332 Jan 28 01:18:52.907000 audit: BPF prog-id=142 op=LOAD Jan 28 01:18:52.936290 kernel: audit: type=1334 audit(1769563132.907:456): prog-id=142 op=LOAD Jan 28 01:18:52.936347 kernel: audit: type=1300 audit(1769563132.907:456): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3357 pid=3370 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:18:52.907000 audit[3370]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3357 pid=3370 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:18:52.907000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3565373463386261643338336235396138623134626435336564356332 Jan 28 01:18:52.941657 kernel: audit: type=1327 audit(1769563132.907:456): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3565373463386261643338336235396138623134626435336564356332 Jan 28 01:18:52.907000 audit: BPF prog-id=143 op=LOAD Jan 28 01:18:52.907000 audit[3370]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3357 pid=3370 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:18:52.907000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3565373463386261643338336235396138623134626435336564356332 Jan 28 01:18:52.907000 audit: BPF prog-id=143 op=UNLOAD Jan 28 01:18:52.907000 audit[3370]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 
items=0 ppid=3357 pid=3370 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:18:52.907000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3565373463386261643338336235396138623134626435336564356332 Jan 28 01:18:52.907000 audit: BPF prog-id=142 op=UNLOAD Jan 28 01:18:52.907000 audit[3370]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3357 pid=3370 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:18:52.907000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3565373463386261643338336235396138623134626435336564356332 Jan 28 01:18:52.907000 audit: BPF prog-id=144 op=LOAD Jan 28 01:18:52.907000 audit[3370]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3357 pid=3370 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:18:52.907000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3565373463386261643338336235396138623134626435336564356332 Jan 28 01:18:52.946965 containerd[1849]: time="2026-01-28T01:18:52.946788867Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-55h25,Uid:4cce8295-21ac-427b-9e68-070492064630,Namespace:kube-system,Attempt:0,} returns sandbox id \"5e74c8bad383b59a8b14bd53ed5c28f546a622c3643b27aa87242894ca3da7d8\"" Jan 28 01:18:52.957361 containerd[1849]: time="2026-01-28T01:18:52.957302607Z" level=info msg="CreateContainer within sandbox \"5e74c8bad383b59a8b14bd53ed5c28f546a622c3643b27aa87242894ca3da7d8\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 28 01:18:53.062338 containerd[1849]: time="2026-01-28T01:18:53.061868862Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-t2k5j,Uid:46f4cc39-4770-42f7-a369-5e43b14a884c,Namespace:tigera-operator,Attempt:0,}" Jan 28 01:18:53.089118 containerd[1849]: time="2026-01-28T01:18:53.089059303Z" level=info msg="Container 2f52677cdd9b3c9da7df9b37e3e87d1eb75a97a08bf5c27ed4c0775de06bda7e: CDI devices from CRI Config.CDIDevices: []" Jan 28 01:18:53.091986 containerd[1849]: time="2026-01-28T01:18:53.091912187Z" level=info msg="connecting to shim bc3c4103fc03007385005856119110a170a4caa740f65784f0a2408de35c46a0" address="unix:///run/containerd/s/7cf614ac5997902b155d896bdd3877b873629e33278dec004eb366bf590293f8" namespace=k8s.io protocol=ttrpc version=3 Jan 28 01:18:53.134300 systemd[1]: Started cri-containerd-bc3c4103fc03007385005856119110a170a4caa740f65784f0a2408de35c46a0.scope - libcontainer container bc3c4103fc03007385005856119110a170a4caa740f65784f0a2408de35c46a0. 
Jan 28 01:18:53.148240 containerd[1849]: time="2026-01-28T01:18:53.148039362Z" level=info msg="CreateContainer within sandbox \"5e74c8bad383b59a8b14bd53ed5c28f546a622c3643b27aa87242894ca3da7d8\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"2f52677cdd9b3c9da7df9b37e3e87d1eb75a97a08bf5c27ed4c0775de06bda7e\"" Jan 28 01:18:53.150998 containerd[1849]: time="2026-01-28T01:18:53.150578896Z" level=info msg="StartContainer for \"2f52677cdd9b3c9da7df9b37e3e87d1eb75a97a08bf5c27ed4c0775de06bda7e\"" Jan 28 01:18:53.155294 containerd[1849]: time="2026-01-28T01:18:53.155233833Z" level=info msg="connecting to shim 2f52677cdd9b3c9da7df9b37e3e87d1eb75a97a08bf5c27ed4c0775de06bda7e" address="unix:///run/containerd/s/2696c5fc224b014d242edaaae000d175373e13be77bc01389f73b37c256c987c" protocol=ttrpc version=3 Jan 28 01:18:53.160000 audit: BPF prog-id=145 op=LOAD Jan 28 01:18:53.162000 audit: BPF prog-id=146 op=LOAD Jan 28 01:18:53.162000 audit[3418]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=3407 pid=3418 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:18:53.162000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6263336334313033666330333030373338353030353835363131393131 Jan 28 01:18:53.162000 audit: BPF prog-id=146 op=UNLOAD Jan 28 01:18:53.162000 audit[3418]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3407 pid=3418 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:18:53.162000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6263336334313033666330333030373338353030353835363131393131 Jan 28 01:18:53.162000 audit: BPF prog-id=147 op=LOAD Jan 28 01:18:53.162000 audit[3418]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3407 pid=3418 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:18:53.162000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6263336334313033666330333030373338353030353835363131393131 Jan 28 01:18:53.163000 audit: BPF prog-id=148 op=LOAD Jan 28 01:18:53.163000 audit[3418]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3407 pid=3418 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:18:53.163000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6263336334313033666330333030373338353030353835363131393131 Jan 28 01:18:53.163000 audit: BPF prog-id=148 op=UNLOAD Jan 28 01:18:53.163000 audit[3418]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3407 pid=3418 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:18:53.163000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6263336334313033666330333030373338353030353835363131393131 Jan 28 01:18:53.163000 audit: BPF prog-id=147 op=UNLOAD Jan 28 01:18:53.163000 audit[3418]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3407 pid=3418 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:18:53.163000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6263336334313033666330333030373338353030353835363131393131 Jan 28 01:18:53.163000 audit: BPF prog-id=149 op=LOAD Jan 28 01:18:53.163000 audit[3418]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3407 pid=3418 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:18:53.163000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6263336334313033666330333030373338353030353835363131393131 Jan 28 01:18:53.193147 systemd[1]: Started cri-containerd-2f52677cdd9b3c9da7df9b37e3e87d1eb75a97a08bf5c27ed4c0775de06bda7e.scope - libcontainer container 2f52677cdd9b3c9da7df9b37e3e87d1eb75a97a08bf5c27ed4c0775de06bda7e. 
Jan 28 01:18:53.234435 containerd[1849]: time="2026-01-28T01:18:53.234296239Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-t2k5j,Uid:46f4cc39-4770-42f7-a369-5e43b14a884c,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"bc3c4103fc03007385005856119110a170a4caa740f65784f0a2408de35c46a0\"" Jan 28 01:18:53.240810 containerd[1849]: time="2026-01-28T01:18:53.240294396Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Jan 28 01:18:53.268000 audit: BPF prog-id=150 op=LOAD Jan 28 01:18:53.268000 audit[3438]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=3357 pid=3438 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:18:53.268000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3266353236373763646439623363396461376466396233376533653837 Jan 28 01:18:53.268000 audit: BPF prog-id=151 op=LOAD Jan 28 01:18:53.268000 audit[3438]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=3357 pid=3438 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:18:53.268000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3266353236373763646439623363396461376466396233376533653837 Jan 28 01:18:53.268000 audit: BPF prog-id=151 op=UNLOAD Jan 28 01:18:53.268000 audit[3438]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3357 pid=3438 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:18:53.268000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3266353236373763646439623363396461376466396233376533653837 Jan 28 01:18:53.268000 audit: BPF prog-id=150 op=UNLOAD Jan 28 01:18:53.268000 audit[3438]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3357 pid=3438 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:18:53.268000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3266353236373763646439623363396461376466396233376533653837 Jan 28 01:18:53.268000 audit: BPF prog-id=152 op=LOAD Jan 28 01:18:53.268000 audit[3438]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=3357 pid=3438 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:18:53.268000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3266353236373763646439623363396461376466396233376533653837 Jan 28 01:18:53.307456 containerd[1849]: time="2026-01-28T01:18:53.307388006Z" level=info msg="StartContainer for \"2f52677cdd9b3c9da7df9b37e3e87d1eb75a97a08bf5c27ed4c0775de06bda7e\" returns successfully" Jan 28 01:18:53.719672 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1342982755.mount: Deactivated successfully. Jan 28 01:18:54.589000 audit[3510]: NETFILTER_CFG table=mangle:54 family=2 entries=1 op=nft_register_chain pid=3510 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:18:54.589000 audit[3510]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffff0d47290 a2=0 a3=7ffff0d4727c items=0 ppid=3451 pid=3510 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:18:54.589000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 28 01:18:54.591000 audit[3511]: NETFILTER_CFG table=nat:55 family=2 entries=1 op=nft_register_chain pid=3511 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:18:54.591000 audit[3511]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff46f0cc10 a2=0 a3=7fff46f0cbfc items=0 ppid=3451 pid=3511 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:18:54.591000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 28 01:18:54.593000 audit[3513]: NETFILTER_CFG table=filter:56 family=2 entries=1 op=nft_register_chain pid=3513 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:18:54.593000 audit[3513]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fffd46e4000 a2=0 a3=7fffd46e3fec items=0 ppid=3451 pid=3513 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:18:54.593000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 28 01:18:54.594000 audit[3514]: NETFILTER_CFG table=mangle:57 family=10 entries=1 op=nft_register_chain pid=3514 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:18:54.594000 audit[3514]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe4607d2b0 a2=0 a3=7ffe4607d29c items=0 ppid=3451 pid=3514 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:18:54.594000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 28 01:18:54.595000 audit[3515]: NETFILTER_CFG table=nat:58 family=10 entries=1 op=nft_register_chain pid=3515 subj=system_u:system_r:kernel_t:s0 
comm="ip6tables" Jan 28 01:18:54.595000 audit[3515]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc53cd10c0 a2=0 a3=7ffc53cd10ac items=0 ppid=3451 pid=3515 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:18:54.595000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 28 01:18:54.599000 audit[3517]: NETFILTER_CFG table=filter:59 family=10 entries=1 op=nft_register_chain pid=3517 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:18:54.599000 audit[3517]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffed6265fe0 a2=0 a3=7ffed6265fcc items=0 ppid=3451 pid=3517 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:18:54.599000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 28 01:18:54.643657 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2902443407.mount: Deactivated successfully. Jan 28 01:18:54.702000 audit[3523]: NETFILTER_CFG table=filter:60 family=2 entries=1 op=nft_register_chain pid=3523 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:18:54.702000 audit[3523]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffc5c5c61b0 a2=0 a3=7ffc5c5c619c items=0 ppid=3451 pid=3523 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:18:54.702000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 28 01:18:54.707000 audit[3525]: NETFILTER_CFG table=filter:61 family=2 entries=1 op=nft_register_rule pid=3525 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:18:54.707000 audit[3525]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffccc3c8c80 a2=0 a3=7ffccc3c8c6c items=0 ppid=3451 pid=3525 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:18:54.707000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Jan 28 01:18:54.711000 audit[3528]: NETFILTER_CFG table=filter:62 family=2 entries=1 op=nft_register_rule pid=3528 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:18:54.711000 audit[3528]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffecbd28400 a2=0 a3=7ffecbd283ec items=0 ppid=3451 pid=3528 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:18:54.711000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Jan 28 01:18:54.712000 audit[3529]: NETFILTER_CFG table=filter:63 family=2 entries=1 op=nft_register_chain pid=3529 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:18:54.712000 audit[3529]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff819cbc20 a2=0 a3=7fff819cbc0c items=0 ppid=3451 pid=3529 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:18:54.712000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 28 01:18:54.715000 audit[3531]: NETFILTER_CFG table=filter:64 family=2 entries=1 op=nft_register_rule pid=3531 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:18:54.715000 audit[3531]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffe42b8a0e0 a2=0 a3=7ffe42b8a0cc items=0 ppid=3451 pid=3531 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:18:54.715000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 28 01:18:54.717000 audit[3532]: NETFILTER_CFG table=filter:65 family=2 entries=1 op=nft_register_chain pid=3532 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:18:54.717000 audit[3532]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffc06e1320 a2=0 a3=7fffc06e130c items=0 ppid=3451 pid=3532 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:18:54.717000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jan 28 01:18:54.720000 audit[3534]: NETFILTER_CFG table=filter:66 family=2 entries=1 op=nft_register_rule pid=3534 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:18:54.720000 audit[3534]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffea6e8d460 a2=0 a3=7ffea6e8d44c items=0 ppid=3451 pid=3534 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:18:54.720000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jan 28 01:18:54.724000 audit[3537]: NETFILTER_CFG table=filter:67 family=2 entries=1 op=nft_register_rule pid=3537 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:18:54.724000 audit[3537]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffe6c549f80 a2=0 a3=7ffe6c549f6c items=0 
ppid=3451 pid=3537 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:18:54.724000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Jan 28 01:18:54.726000 audit[3538]: NETFILTER_CFG table=filter:68 family=2 entries=1 op=nft_register_chain pid=3538 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:18:54.726000 audit[3538]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe8e453680 a2=0 a3=7ffe8e45366c items=0 ppid=3451 pid=3538 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:18:54.726000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jan 28 01:18:54.729000 audit[3540]: NETFILTER_CFG table=filter:69 family=2 entries=1 op=nft_register_rule pid=3540 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:18:54.729000 audit[3540]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fffa00da2e0 a2=0 a3=7fffa00da2cc items=0 ppid=3451 pid=3540 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:18:54.729000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 28 01:18:54.730000 audit[3541]: NETFILTER_CFG table=filter:70 family=2 entries=1 op=nft_register_chain pid=3541 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:18:54.730000 audit[3541]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffca7c82730 a2=0 a3=7ffca7c8271c items=0 ppid=3451 pid=3541 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:18:54.730000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 28 01:18:54.733000 audit[3543]: NETFILTER_CFG table=filter:71 family=2 entries=1 op=nft_register_rule pid=3543 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:18:54.733000 audit[3543]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffea44c4700 a2=0 a3=7ffea44c46ec items=0 ppid=3451 pid=3543 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:18:54.733000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 28 01:18:54.737000 audit[3546]: NETFILTER_CFG table=filter:72 
family=2 entries=1 op=nft_register_rule pid=3546 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:18:54.737000 audit[3546]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe6d64c9e0 a2=0 a3=7ffe6d64c9cc items=0 ppid=3451 pid=3546 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:18:54.737000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 28 01:18:54.742000 audit[3549]: NETFILTER_CFG table=filter:73 family=2 entries=1 op=nft_register_rule pid=3549 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:18:54.742000 audit[3549]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe341647a0 a2=0 a3=7ffe3416478c items=0 ppid=3451 pid=3549 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:18:54.742000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jan 28 01:18:54.743000 audit[3550]: NETFILTER_CFG table=nat:74 family=2 entries=1 op=nft_register_chain pid=3550 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:18:54.743000 audit[3550]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffd11ff2880 a2=0 a3=7ffd11ff286c items=0 ppid=3451 pid=3550 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:18:54.743000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jan 28 01:18:54.748000 audit[3552]: NETFILTER_CFG table=nat:75 family=2 entries=1 op=nft_register_rule pid=3552 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:18:54.748000 audit[3552]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7fffe29230c0 a2=0 a3=7fffe29230ac items=0 ppid=3451 pid=3552 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:18:54.748000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 28 01:18:54.752000 audit[3555]: NETFILTER_CFG table=nat:76 family=2 entries=1 op=nft_register_rule pid=3555 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:18:54.752000 audit[3555]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffde0737160 a2=0 a3=7ffde073714c items=0 ppid=3451 pid=3555 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 
01:18:54.752000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 28 01:18:54.753000 audit[3556]: NETFILTER_CFG table=nat:77 family=2 entries=1 op=nft_register_chain pid=3556 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:18:54.753000 audit[3556]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffeff5ca4b0 a2=0 a3=7ffeff5ca49c items=0 ppid=3451 pid=3556 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:18:54.753000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 28 01:18:54.755000 audit[3558]: NETFILTER_CFG table=nat:78 family=2 entries=1 op=nft_register_rule pid=3558 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:18:54.755000 audit[3558]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7ffddd385a10 a2=0 a3=7ffddd3859fc items=0 ppid=3451 pid=3558 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:18:54.755000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 28 01:18:54.844000 audit[3564]: NETFILTER_CFG table=filter:79 family=2 entries=8 op=nft_register_rule pid=3564 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:18:54.844000 audit[3564]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffc1ca53a20 a2=0 a3=7ffc1ca53a0c items=0 ppid=3451 pid=3564 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:18:54.844000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:18:54.852000 audit[3564]: NETFILTER_CFG table=nat:80 family=2 entries=14 op=nft_register_chain pid=3564 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:18:54.852000 audit[3564]: SYSCALL arch=c000003e syscall=46 success=yes exit=5508 a0=3 a1=7ffc1ca53a20 a2=0 a3=7ffc1ca53a0c items=0 ppid=3451 pid=3564 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:18:54.852000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:18:54.854000 audit[3569]: NETFILTER_CFG table=filter:81 family=10 entries=1 op=nft_register_chain pid=3569 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:18:54.854000 audit[3569]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7fff4f155360 a2=0 a3=7fff4f15534c items=0 ppid=3451 pid=3569 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:18:54.854000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 28 01:18:54.857000 audit[3571]: NETFILTER_CFG table=filter:82 family=10 entries=2 op=nft_register_chain pid=3571 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:18:54.857000 audit[3571]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffe48bc95d0 a2=0 a3=7ffe48bc95bc items=0 ppid=3451 pid=3571 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:18:54.857000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Jan 28 01:18:54.861000 audit[3574]: NETFILTER_CFG table=filter:83 family=10 entries=1 op=nft_register_rule pid=3574 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:18:54.861000 audit[3574]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7fff9d4a07b0 a2=0 a3=7fff9d4a079c items=0 ppid=3451 pid=3574 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:18:54.861000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Jan 28 01:18:54.862000 audit[3575]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=3575 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:18:54.862000 audit[3575]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd494dce90 a2=0 a3=7ffd494dce7c items=0 ppid=3451 pid=3575 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:18:54.862000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 28 01:18:54.865000 audit[3577]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=3577 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:18:54.865000 audit[3577]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fffa13d4020 a2=0 a3=7fffa13d400c items=0 ppid=3451 pid=3577 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:18:54.865000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 28 01:18:54.867000 audit[3578]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_chain pid=3578 subj=system_u:system_r:kernel_t:s0 
comm="ip6tables" Jan 28 01:18:54.867000 audit[3578]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffb22d1b80 a2=0 a3=7fffb22d1b6c items=0 ppid=3451 pid=3578 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:18:54.867000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jan 28 01:18:54.870000 audit[3580]: NETFILTER_CFG table=filter:87 family=10 entries=1 op=nft_register_rule pid=3580 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:18:54.870000 audit[3580]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffcf47a2900 a2=0 a3=7ffcf47a28ec items=0 ppid=3451 pid=3580 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:18:54.870000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Jan 28 01:18:54.874000 audit[3583]: NETFILTER_CFG table=filter:88 family=10 entries=2 op=nft_register_chain pid=3583 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:18:54.874000 audit[3583]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7ffe4786c160 a2=0 a3=7ffe4786c14c items=0 ppid=3451 pid=3583 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:18:54.874000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jan 28 01:18:54.875000 audit[3584]: NETFILTER_CFG table=filter:89 family=10 entries=1 op=nft_register_chain pid=3584 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:18:54.875000 audit[3584]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff2b1f6700 a2=0 a3=7fff2b1f66ec items=0 ppid=3451 pid=3584 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:18:54.875000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jan 28 01:18:54.878000 audit[3586]: NETFILTER_CFG table=filter:90 family=10 entries=1 op=nft_register_rule pid=3586 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:18:54.878000 audit[3586]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffc23609e80 a2=0 a3=7ffc23609e6c items=0 ppid=3451 pid=3586 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:18:54.878000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 28 01:18:54.880000 audit[3587]: NETFILTER_CFG table=filter:91 family=10 entries=1 op=nft_register_chain pid=3587 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:18:54.880000 audit[3587]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd6e1f2d10 a2=0 a3=7ffd6e1f2cfc items=0 ppid=3451 pid=3587 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:18:54.880000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 28 01:18:54.883000 audit[3589]: NETFILTER_CFG table=filter:92 family=10 entries=1 op=nft_register_rule pid=3589 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:18:54.883000 audit[3589]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffc91a3b9c0 a2=0 a3=7ffc91a3b9ac items=0 ppid=3451 pid=3589 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:18:54.883000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 28 01:18:54.888000 audit[3592]: NETFILTER_CFG table=filter:93 family=10 entries=1 op=nft_register_rule pid=3592 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:18:54.888000 audit[3592]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fffa4afbc40 a2=0 a3=7fffa4afbc2c items=0 ppid=3451 pid=3592 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:18:54.888000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jan 28 01:18:54.893000 audit[3595]: NETFILTER_CFG table=filter:94 family=10 entries=1 op=nft_register_rule pid=3595 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:18:54.893000 audit[3595]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffc4510eeb0 a2=0 a3=7ffc4510ee9c items=0 ppid=3451 pid=3595 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:18:54.893000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Jan 28 01:18:54.895000 audit[3596]: NETFILTER_CFG table=nat:95 family=10 entries=1 op=nft_register_chain pid=3596 subj=system_u:system_r:kernel_t:s0 
comm="ip6tables" Jan 28 01:18:54.895000 audit[3596]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffc8bacf7e0 a2=0 a3=7ffc8bacf7cc items=0 ppid=3451 pid=3596 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:18:54.895000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jan 28 01:18:54.899000 audit[3598]: NETFILTER_CFG table=nat:96 family=10 entries=1 op=nft_register_rule pid=3598 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:18:54.899000 audit[3598]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffddd3f79c0 a2=0 a3=7ffddd3f79ac items=0 ppid=3451 pid=3598 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:18:54.899000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 28 01:18:54.905000 audit[3601]: NETFILTER_CFG table=nat:97 family=10 entries=1 op=nft_register_rule pid=3601 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:18:54.905000 audit[3601]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fffb66fb4d0 a2=0 a3=7fffb66fb4bc items=0 ppid=3451 pid=3601 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:18:54.905000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 28 01:18:54.908000 audit[3602]: NETFILTER_CFG table=nat:98 family=10 entries=1 op=nft_register_chain pid=3602 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:18:54.908000 audit[3602]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc24128620 a2=0 a3=7ffc2412860c items=0 ppid=3451 pid=3602 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:18:54.908000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 28 01:18:54.912000 audit[3604]: NETFILTER_CFG table=nat:99 family=10 entries=2 op=nft_register_chain pid=3604 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:18:54.912000 audit[3604]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7fffcee808d0 a2=0 a3=7fffcee808bc items=0 ppid=3451 pid=3604 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:18:54.912000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 28 01:18:54.914000 audit[3605]: NETFILTER_CFG table=filter:100 family=10 entries=1 op=nft_register_chain pid=3605 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:18:54.914000 audit[3605]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffecad10c40 a2=0 a3=7ffecad10c2c items=0 ppid=3451 pid=3605 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:18:54.914000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 28 01:18:54.918000 audit[3607]: NETFILTER_CFG table=filter:101 family=10 entries=1 op=nft_register_rule pid=3607 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:18:54.918000 audit[3607]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffd65d13030 a2=0 a3=7ffd65d1301c items=0 ppid=3451 pid=3607 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:18:54.918000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 28 01:18:54.923000 audit[3610]: NETFILTER_CFG table=filter:102 family=10 entries=1 op=nft_register_rule pid=3610 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:18:54.923000 audit[3610]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffe94507320 a2=0 a3=7ffe9450730c items=0 ppid=3451 pid=3610 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:18:54.923000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 28 01:18:54.927000 audit[3612]: NETFILTER_CFG table=filter:103 family=10 entries=3 op=nft_register_rule pid=3612 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 28 01:18:54.927000 audit[3612]: SYSCALL arch=c000003e syscall=46 success=yes exit=2088 a0=3 a1=7ffc1e20ba00 a2=0 a3=7ffc1e20b9ec items=0 ppid=3451 pid=3612 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:18:54.927000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:18:54.927000 audit[3612]: NETFILTER_CFG table=nat:104 family=10 entries=7 op=nft_register_chain pid=3612 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 28 01:18:54.927000 audit[3612]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7ffc1e20ba00 a2=0 a3=7ffc1e20b9ec items=0 ppid=3451 pid=3612 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 
01:18:54.927000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:18:57.228109 containerd[1849]: time="2026-01-28T01:18:57.228040740Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:18:57.230343 containerd[1849]: time="2026-01-28T01:18:57.230013117Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=23558205" Jan 28 01:18:57.232602 containerd[1849]: time="2026-01-28T01:18:57.232570571Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:18:57.235799 containerd[1849]: time="2026-01-28T01:18:57.235768238Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:18:57.236266 containerd[1849]: time="2026-01-28T01:18:57.236234469Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 3.995892684s" Jan 28 01:18:57.236266 containerd[1849]: time="2026-01-28T01:18:57.236267362Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Jan 28 01:18:57.242630 containerd[1849]: time="2026-01-28T01:18:57.242596951Z" level=info msg="CreateContainer within sandbox \"bc3c4103fc03007385005856119110a170a4caa740f65784f0a2408de35c46a0\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 28 01:18:57.258522 containerd[1849]: time="2026-01-28T01:18:57.257913565Z" level=info msg="Container ea9363c3d913de59a074bb70af6d08931c92bdabf737955310fb0d2a3ddd64f7: CDI devices from CRI Config.CDIDevices: []" Jan 28 01:18:57.269771 containerd[1849]: time="2026-01-28T01:18:57.268668849Z" level=info msg="CreateContainer within sandbox \"bc3c4103fc03007385005856119110a170a4caa740f65784f0a2408de35c46a0\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"ea9363c3d913de59a074bb70af6d08931c92bdabf737955310fb0d2a3ddd64f7\"" Jan 28 01:18:57.270593 containerd[1849]: time="2026-01-28T01:18:57.270556991Z" level=info msg="StartContainer for \"ea9363c3d913de59a074bb70af6d08931c92bdabf737955310fb0d2a3ddd64f7\"" Jan 28 01:18:57.271371 containerd[1849]: time="2026-01-28T01:18:57.271329861Z" level=info msg="connecting to shim ea9363c3d913de59a074bb70af6d08931c92bdabf737955310fb0d2a3ddd64f7" address="unix:///run/containerd/s/7cf614ac5997902b155d896bdd3877b873629e33278dec004eb366bf590293f8" protocol=ttrpc version=3 Jan 28 01:18:57.295975 systemd[1]: Started cri-containerd-ea9363c3d913de59a074bb70af6d08931c92bdabf737955310fb0d2a3ddd64f7.scope - libcontainer container ea9363c3d913de59a074bb70af6d08931c92bdabf737955310fb0d2a3ddd64f7. 
Jan 28 01:18:57.307000 audit: BPF prog-id=153 op=LOAD Jan 28 01:18:57.308000 audit: BPF prog-id=154 op=LOAD Jan 28 01:18:57.308000 audit[3617]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=3407 pid=3617 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:18:57.308000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6561393336336333643931336465353961303734626237306166366430 Jan 28 01:18:57.308000 audit: BPF prog-id=154 op=UNLOAD Jan 28 01:18:57.308000 audit[3617]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3407 pid=3617 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:18:57.308000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6561393336336333643931336465353961303734626237306166366430 Jan 28 01:18:57.308000 audit: BPF prog-id=155 op=LOAD Jan 28 01:18:57.308000 audit[3617]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=3407 pid=3617 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:18:57.308000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6561393336336333643931336465353961303734626237306166366430 Jan 28 01:18:57.308000 audit: BPF prog-id=156 op=LOAD Jan 28 01:18:57.308000 audit[3617]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=3407 pid=3617 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:18:57.308000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6561393336336333643931336465353961303734626237306166366430 Jan 28 01:18:57.308000 audit: BPF prog-id=156 op=UNLOAD Jan 28 01:18:57.308000 audit[3617]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3407 pid=3617 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:18:57.308000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6561393336336333643931336465353961303734626237306166366430 Jan 28 01:18:57.308000 audit: BPF prog-id=155 op=UNLOAD Jan 28 01:18:57.308000 audit[3617]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3407 pid=3617 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:18:57.308000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6561393336336333643931336465353961303734626237306166366430 Jan 28 01:18:57.308000 audit: BPF prog-id=157 op=LOAD Jan 28 01:18:57.308000 audit[3617]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=3407 pid=3617 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:18:57.308000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6561393336336333643931336465353961303734626237306166366430 Jan 28 01:18:57.331199 containerd[1849]: time="2026-01-28T01:18:57.331163232Z" level=info msg="StartContainer for \"ea9363c3d913de59a074bb70af6d08931c92bdabf737955310fb0d2a3ddd64f7\" returns successfully" Jan 28 01:18:57.641918 kubelet[3202]: I0128 01:18:57.641736 3202 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-55h25" podStartSLOduration=5.641666909 podStartE2EDuration="5.641666909s" podCreationTimestamp="2026-01-28 01:18:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 01:18:54.283841844 +0000 UTC m=+8.301728832" watchObservedRunningTime="2026-01-28 01:18:57.641666909 +0000 UTC m=+11.659553896" Jan 28 01:18:58.978085 kubelet[3202]: I0128 01:18:58.977905 3202 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-t2k5j" podStartSLOduration=2.979672446 podStartE2EDuration="6.97788602s" podCreationTimestamp="2026-01-28 01:18:52 +0000 UTC" firstStartedPulling="2026-01-28 01:18:53.239071975 +0000 UTC m=+7.256958942" lastFinishedPulling="2026-01-28 01:18:57.237285547 +0000 UTC m=+11.255172516" observedRunningTime="2026-01-28 01:18:58.299043213 +0000 UTC m=+12.316930202" watchObservedRunningTime="2026-01-28 01:18:58.97788602 +0000 UTC m=+12.995773022" Jan 28 01:19:02.722579 sudo[2229]: pam_unix(sudo:session): session closed for user root Jan 28 01:19:02.730146 kernel: kauditd_printk_skb: 224 callbacks suppressed Jan 28 01:19:02.730291 kernel: audit: type=1106 audit(1769563142.722:533): pid=2229 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 28 01:19:02.722000 audit[2229]: USER_END pid=2229 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 28 01:19:02.722000 audit[2229]: CRED_DISP pid=2229 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? 
addr=? terminal=? res=success' Jan 28 01:19:02.738783 kernel: audit: type=1104 audit(1769563142.722:534): pid=2229 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 28 01:19:02.807453 sshd[2228]: Connection closed by 68.220.241.50 port 44954 Jan 28 01:19:02.808345 sshd-session[2224]: pam_unix(sshd:session): session closed for user core Jan 28 01:19:02.811000 audit[2224]: USER_END pid=2224 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 28 01:19:02.819774 kernel: audit: type=1106 audit(1769563142.811:535): pid=2224 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 28 01:19:02.811000 audit[2224]: CRED_DISP pid=2224 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 28 01:19:02.825781 kernel: audit: type=1104 audit(1769563142.811:536): pid=2224 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 28 01:19:02.829552 systemd[1]: sshd@6-172.31.31.26:22-68.220.241.50:44954.service: Deactivated successfully. Jan 28 01:19:02.829000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-172.31.31.26:22-68.220.241.50:44954 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:19:02.835770 kernel: audit: type=1131 audit(1769563142.829:537): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-172.31.31.26:22-68.220.241.50:44954 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:19:02.840255 systemd[1]: session-8.scope: Deactivated successfully. Jan 28 01:19:02.840557 systemd[1]: session-8.scope: Consumed 5.481s CPU time, 152.4M memory peak. Jan 28 01:19:02.845956 systemd-logind[1833]: Session 8 logged out. Waiting for processes to exit. Jan 28 01:19:02.851031 systemd-logind[1833]: Removed session 8. 
Jan 28 01:19:03.790000 audit[3697]: NETFILTER_CFG table=filter:105 family=2 entries=15 op=nft_register_rule pid=3697 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:19:03.802441 kernel: audit: type=1325 audit(1769563143.790:538): table=filter:105 family=2 entries=15 op=nft_register_rule pid=3697 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:19:03.828058 kernel: audit: type=1300 audit(1769563143.790:538): arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffdd37df4a0 a2=0 a3=7ffdd37df48c items=0 ppid=3451 pid=3697 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:03.828130 kernel: audit: type=1327 audit(1769563143.790:538): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:19:03.828169 kernel: audit: type=1325 audit(1769563143.803:539): table=nat:106 family=2 entries=12 op=nft_register_rule pid=3697 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:19:03.828198 kernel: audit: type=1300 audit(1769563143.803:539): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffdd37df4a0 a2=0 a3=0 items=0 ppid=3451 pid=3697 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:03.790000 audit[3697]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffdd37df4a0 a2=0 a3=7ffdd37df48c items=0 ppid=3451 pid=3697 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:03.790000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:19:03.803000 audit[3697]: NETFILTER_CFG table=nat:106 family=2 entries=12 op=nft_register_rule pid=3697 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:19:03.803000 audit[3697]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffdd37df4a0 a2=0 a3=0 items=0 ppid=3451 pid=3697 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:03.803000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:19:03.836000 audit[3699]: NETFILTER_CFG table=filter:107 family=2 entries=16 op=nft_register_rule pid=3699 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:19:03.836000 audit[3699]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffd6fddf1f0 a2=0 a3=7ffd6fddf1dc items=0 ppid=3451 pid=3699 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:03.836000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:19:03.843000 audit[3699]: NETFILTER_CFG table=nat:108 family=2 entries=12 op=nft_register_rule pid=3699 
subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:19:03.843000 audit[3699]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd6fddf1f0 a2=0 a3=0 items=0 ppid=3451 pid=3699 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:03.843000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:19:07.848667 kernel: kauditd_printk_skb: 7 callbacks suppressed Jan 28 01:19:07.848861 kernel: audit: type=1325 audit(1769563147.842:542): table=filter:109 family=2 entries=17 op=nft_register_rule pid=3702 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:19:07.842000 audit[3702]: NETFILTER_CFG table=filter:109 family=2 entries=17 op=nft_register_rule pid=3702 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:19:07.842000 audit[3702]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffdd1ee1700 a2=0 a3=7ffdd1ee16ec items=0 ppid=3451 pid=3702 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:07.858160 kernel: audit: type=1300 audit(1769563147.842:542): arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffdd1ee1700 a2=0 a3=7ffdd1ee16ec items=0 ppid=3451 pid=3702 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:07.842000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:19:07.865771 kernel: audit: type=1327 audit(1769563147.842:542): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:19:07.867000 audit[3702]: NETFILTER_CFG table=nat:110 family=2 entries=12 op=nft_register_rule pid=3702 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:19:07.871889 kernel: audit: type=1325 audit(1769563147.867:543): table=nat:110 family=2 entries=12 op=nft_register_rule pid=3702 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:19:07.867000 audit[3702]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffdd1ee1700 a2=0 a3=0 items=0 ppid=3451 pid=3702 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:07.880771 kernel: audit: type=1300 audit(1769563147.867:543): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffdd1ee1700 a2=0 a3=0 items=0 ppid=3451 pid=3702 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:07.867000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:19:07.888787 kernel: audit: type=1327 audit(1769563147.867:543): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 
Jan 28 01:19:07.952000 audit[3704]: NETFILTER_CFG table=filter:111 family=2 entries=18 op=nft_register_rule pid=3704 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:19:07.956798 kernel: audit: type=1325 audit(1769563147.952:544): table=filter:111 family=2 entries=18 op=nft_register_rule pid=3704 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:19:07.952000 audit[3704]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffecd2690f0 a2=0 a3=7ffecd2690dc items=0 ppid=3451 pid=3704 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:07.966462 kernel: audit: type=1300 audit(1769563147.952:544): arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffecd2690f0 a2=0 a3=7ffecd2690dc items=0 ppid=3451 pid=3704 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:07.952000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:19:07.971831 kernel: audit: type=1327 audit(1769563147.952:544): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:19:07.965000 audit[3704]: NETFILTER_CFG table=nat:112 family=2 entries=12 op=nft_register_rule pid=3704 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:19:07.975802 kernel: audit: type=1325 audit(1769563147.965:545): table=nat:112 family=2 entries=12 op=nft_register_rule pid=3704 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:19:07.965000 audit[3704]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffecd2690f0 a2=0 a3=0 items=0 ppid=3451 pid=3704 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:07.965000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:19:09.018000 audit[3706]: NETFILTER_CFG table=filter:113 family=2 entries=19 op=nft_register_rule pid=3706 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:19:09.018000 audit[3706]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffe7a08f2e0 a2=0 a3=7ffe7a08f2cc items=0 ppid=3451 pid=3706 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:09.018000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:19:09.023000 audit[3706]: NETFILTER_CFG table=nat:114 family=2 entries=12 op=nft_register_rule pid=3706 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:19:09.023000 audit[3706]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe7a08f2e0 a2=0 a3=0 items=0 ppid=3451 pid=3706 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 28 01:19:09.023000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:19:09.908000 audit[3710]: NETFILTER_CFG table=filter:115 family=2 entries=21 op=nft_register_rule pid=3710 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:19:09.908000 audit[3710]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffdc2541fe0 a2=0 a3=7ffdc2541fcc items=0 ppid=3451 pid=3710 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:09.908000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:19:09.911000 audit[3710]: NETFILTER_CFG table=nat:116 family=2 entries=12 op=nft_register_rule pid=3710 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:19:09.911000 audit[3710]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffdc2541fe0 a2=0 a3=0 items=0 ppid=3451 pid=3710 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:09.911000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:19:09.955789 systemd[1]: Created slice kubepods-besteffort-pod4f8be7f5_f430_4d5b_bfd8_17a8886e07c4.slice - libcontainer container kubepods-besteffort-pod4f8be7f5_f430_4d5b_bfd8_17a8886e07c4.slice. Jan 28 01:19:10.132533 systemd[1]: Created slice kubepods-besteffort-pod06859e4a_9f17_461b_8ad9_92a2252a593d.slice - libcontainer container kubepods-besteffort-pod06859e4a_9f17_461b_8ad9_92a2252a593d.slice. 
Jan 28 01:19:10.142395 kubelet[3202]: I0128 01:19:10.142352 3202 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/06859e4a-9f17-461b-8ad9-92a2252a593d-var-lib-calico\") pod \"calico-node-vfzjm\" (UID: \"06859e4a-9f17-461b-8ad9-92a2252a593d\") " pod="calico-system/calico-node-vfzjm" Jan 28 01:19:10.142395 kubelet[3202]: I0128 01:19:10.142390 3202 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/06859e4a-9f17-461b-8ad9-92a2252a593d-node-certs\") pod \"calico-node-vfzjm\" (UID: \"06859e4a-9f17-461b-8ad9-92a2252a593d\") " pod="calico-system/calico-node-vfzjm" Jan 28 01:19:10.142847 kubelet[3202]: I0128 01:19:10.142408 3202 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4f8be7f5-f430-4d5b-bfd8-17a8886e07c4-tigera-ca-bundle\") pod \"calico-typha-54cb8b4c94-2phzc\" (UID: \"4f8be7f5-f430-4d5b-bfd8-17a8886e07c4\") " pod="calico-system/calico-typha-54cb8b4c94-2phzc" Jan 28 01:19:10.142847 kubelet[3202]: I0128 01:19:10.142435 3202 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/4f8be7f5-f430-4d5b-bfd8-17a8886e07c4-typha-certs\") pod \"calico-typha-54cb8b4c94-2phzc\" (UID: \"4f8be7f5-f430-4d5b-bfd8-17a8886e07c4\") " pod="calico-system/calico-typha-54cb8b4c94-2phzc" Jan 28 01:19:10.142847 kubelet[3202]: I0128 01:19:10.142449 3202 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/06859e4a-9f17-461b-8ad9-92a2252a593d-tigera-ca-bundle\") pod \"calico-node-vfzjm\" (UID: \"06859e4a-9f17-461b-8ad9-92a2252a593d\") " pod="calico-system/calico-node-vfzjm" Jan 28 01:19:10.142847 kubelet[3202]: I0128 01:19:10.142463 3202 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/06859e4a-9f17-461b-8ad9-92a2252a593d-cni-log-dir\") pod \"calico-node-vfzjm\" (UID: \"06859e4a-9f17-461b-8ad9-92a2252a593d\") " pod="calico-system/calico-node-vfzjm" Jan 28 01:19:10.142847 kubelet[3202]: I0128 01:19:10.142481 3202 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/06859e4a-9f17-461b-8ad9-92a2252a593d-xtables-lock\") pod \"calico-node-vfzjm\" (UID: \"06859e4a-9f17-461b-8ad9-92a2252a593d\") " pod="calico-system/calico-node-vfzjm" Jan 28 01:19:10.142970 kubelet[3202]: I0128 01:19:10.142495 3202 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/06859e4a-9f17-461b-8ad9-92a2252a593d-cni-net-dir\") pod \"calico-node-vfzjm\" (UID: \"06859e4a-9f17-461b-8ad9-92a2252a593d\") " pod="calico-system/calico-node-vfzjm" Jan 28 01:19:10.142970 kubelet[3202]: I0128 01:19:10.142509 3202 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/06859e4a-9f17-461b-8ad9-92a2252a593d-var-run-calico\") pod \"calico-node-vfzjm\" (UID: \"06859e4a-9f17-461b-8ad9-92a2252a593d\") " pod="calico-system/calico-node-vfzjm" Jan 28 01:19:10.142970 kubelet[3202]: I0128 
01:19:10.142523 3202 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cgflt\" (UniqueName: \"kubernetes.io/projected/06859e4a-9f17-461b-8ad9-92a2252a593d-kube-api-access-cgflt\") pod \"calico-node-vfzjm\" (UID: \"06859e4a-9f17-461b-8ad9-92a2252a593d\") " pod="calico-system/calico-node-vfzjm" Jan 28 01:19:10.142970 kubelet[3202]: I0128 01:19:10.142540 3202 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7jz2\" (UniqueName: \"kubernetes.io/projected/4f8be7f5-f430-4d5b-bfd8-17a8886e07c4-kube-api-access-p7jz2\") pod \"calico-typha-54cb8b4c94-2phzc\" (UID: \"4f8be7f5-f430-4d5b-bfd8-17a8886e07c4\") " pod="calico-system/calico-typha-54cb8b4c94-2phzc" Jan 28 01:19:10.142970 kubelet[3202]: I0128 01:19:10.142554 3202 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/06859e4a-9f17-461b-8ad9-92a2252a593d-cni-bin-dir\") pod \"calico-node-vfzjm\" (UID: \"06859e4a-9f17-461b-8ad9-92a2252a593d\") " pod="calico-system/calico-node-vfzjm" Jan 28 01:19:10.143146 kubelet[3202]: I0128 01:19:10.142569 3202 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/06859e4a-9f17-461b-8ad9-92a2252a593d-flexvol-driver-host\") pod \"calico-node-vfzjm\" (UID: \"06859e4a-9f17-461b-8ad9-92a2252a593d\") " pod="calico-system/calico-node-vfzjm" Jan 28 01:19:10.143146 kubelet[3202]: I0128 01:19:10.142583 3202 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/06859e4a-9f17-461b-8ad9-92a2252a593d-policysync\") pod \"calico-node-vfzjm\" (UID: \"06859e4a-9f17-461b-8ad9-92a2252a593d\") " pod="calico-system/calico-node-vfzjm" Jan 28 01:19:10.143146 kubelet[3202]: I0128 01:19:10.142599 3202 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/06859e4a-9f17-461b-8ad9-92a2252a593d-lib-modules\") pod \"calico-node-vfzjm\" (UID: \"06859e4a-9f17-461b-8ad9-92a2252a593d\") " pod="calico-system/calico-node-vfzjm" Jan 28 01:19:10.252130 kubelet[3202]: E0128 01:19:10.249971 3202 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:19:10.252130 kubelet[3202]: W0128 01:19:10.250003 3202 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:19:10.252130 kubelet[3202]: E0128 01:19:10.250042 3202 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:19:10.272328 kubelet[3202]: E0128 01:19:10.272000 3202 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:19:10.272328 kubelet[3202]: W0128 01:19:10.272019 3202 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:19:10.272328 kubelet[3202]: E0128 01:19:10.272038 3202 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:19:10.272328 kubelet[3202]: E0128 01:19:10.272212 3202 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:19:10.272328 kubelet[3202]: W0128 01:19:10.272219 3202 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:19:10.272328 kubelet[3202]: E0128 01:19:10.272228 3202 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:19:10.283923 kubelet[3202]: E0128 01:19:10.283838 3202 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:19:10.283923 kubelet[3202]: W0128 01:19:10.283861 3202 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:19:10.283923 kubelet[3202]: E0128 01:19:10.283881 3202 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:19:10.286842 kubelet[3202]: E0128 01:19:10.286814 3202 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:19:10.286842 kubelet[3202]: W0128 01:19:10.286838 3202 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:19:10.286951 kubelet[3202]: E0128 01:19:10.286859 3202 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:19:10.330666 kubelet[3202]: E0128 01:19:10.330576 3202 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5fjx2" podUID="6c6224c1-45a4-4e67-9483-34412dd5913e" Jan 28 01:19:10.343578 kubelet[3202]: E0128 01:19:10.343540 3202 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:19:10.343821 kubelet[3202]: W0128 01:19:10.343646 3202 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:19:10.343821 kubelet[3202]: E0128 01:19:10.343669 3202 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:19:10.344099 kubelet[3202]: E0128 01:19:10.344086 3202 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:19:10.344147 kubelet[3202]: W0128 01:19:10.344115 3202 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:19:10.344186 kubelet[3202]: E0128 01:19:10.344170 3202 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:19:10.344484 kubelet[3202]: E0128 01:19:10.344461 3202 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:19:10.344484 kubelet[3202]: W0128 01:19:10.344475 3202 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:19:10.344484 kubelet[3202]: E0128 01:19:10.344486 3202 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:19:10.346915 kubelet[3202]: E0128 01:19:10.346816 3202 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:19:10.346915 kubelet[3202]: W0128 01:19:10.346829 3202 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:19:10.346915 kubelet[3202]: E0128 01:19:10.346841 3202 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:19:10.347263 kubelet[3202]: E0128 01:19:10.347234 3202 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:19:10.347263 kubelet[3202]: W0128 01:19:10.347242 3202 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:19:10.347263 kubelet[3202]: E0128 01:19:10.347251 3202 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:19:10.347551 kubelet[3202]: E0128 01:19:10.347525 3202 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:19:10.347551 kubelet[3202]: W0128 01:19:10.347539 3202 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:19:10.347551 kubelet[3202]: E0128 01:19:10.347548 3202 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:19:10.348030 kubelet[3202]: E0128 01:19:10.348009 3202 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:19:10.348030 kubelet[3202]: W0128 01:19:10.348023 3202 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:19:10.348030 kubelet[3202]: E0128 01:19:10.348033 3202 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:19:10.348264 kubelet[3202]: E0128 01:19:10.348240 3202 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:19:10.348264 kubelet[3202]: W0128 01:19:10.348247 3202 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:19:10.348264 kubelet[3202]: E0128 01:19:10.348256 3202 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:19:10.348862 kubelet[3202]: E0128 01:19:10.348844 3202 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:19:10.348862 kubelet[3202]: W0128 01:19:10.348859 3202 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:19:10.349001 kubelet[3202]: E0128 01:19:10.348869 3202 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:19:10.349765 kubelet[3202]: E0128 01:19:10.349647 3202 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:19:10.349765 kubelet[3202]: W0128 01:19:10.349662 3202 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:19:10.349765 kubelet[3202]: E0128 01:19:10.349672 3202 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:19:10.349924 kubelet[3202]: E0128 01:19:10.349842 3202 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:19:10.349924 kubelet[3202]: W0128 01:19:10.349848 3202 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:19:10.349924 kubelet[3202]: E0128 01:19:10.349855 3202 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:19:10.350158 kubelet[3202]: E0128 01:19:10.349976 3202 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:19:10.350158 kubelet[3202]: W0128 01:19:10.349989 3202 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:19:10.350158 kubelet[3202]: E0128 01:19:10.349996 3202 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:19:10.350396 kubelet[3202]: E0128 01:19:10.350359 3202 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:19:10.350396 kubelet[3202]: W0128 01:19:10.350367 3202 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:19:10.350396 kubelet[3202]: E0128 01:19:10.350376 3202 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:19:10.350861 kubelet[3202]: E0128 01:19:10.350826 3202 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:19:10.350861 kubelet[3202]: W0128 01:19:10.350841 3202 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:19:10.350861 kubelet[3202]: E0128 01:19:10.350850 3202 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:19:10.351218 kubelet[3202]: E0128 01:19:10.351193 3202 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:19:10.351218 kubelet[3202]: W0128 01:19:10.351208 3202 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:19:10.351218 kubelet[3202]: E0128 01:19:10.351219 3202 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:19:10.351682 kubelet[3202]: E0128 01:19:10.351666 3202 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:19:10.351682 kubelet[3202]: W0128 01:19:10.351680 3202 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:19:10.351902 kubelet[3202]: E0128 01:19:10.351691 3202 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:19:10.352782 kubelet[3202]: E0128 01:19:10.352734 3202 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:19:10.352782 kubelet[3202]: W0128 01:19:10.352775 3202 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:19:10.352887 kubelet[3202]: E0128 01:19:10.352786 3202 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:19:10.353183 kubelet[3202]: E0128 01:19:10.353169 3202 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:19:10.353183 kubelet[3202]: W0128 01:19:10.353181 3202 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:19:10.353295 kubelet[3202]: E0128 01:19:10.353191 3202 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:19:10.353559 kubelet[3202]: E0128 01:19:10.353539 3202 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:19:10.353666 kubelet[3202]: W0128 01:19:10.353648 3202 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:19:10.353816 kubelet[3202]: E0128 01:19:10.353727 3202 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:19:10.354160 kubelet[3202]: E0128 01:19:10.354142 3202 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:19:10.354160 kubelet[3202]: W0128 01:19:10.354154 3202 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:19:10.354248 kubelet[3202]: E0128 01:19:10.354164 3202 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:19:10.354467 kubelet[3202]: E0128 01:19:10.354418 3202 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:19:10.354467 kubelet[3202]: W0128 01:19:10.354431 3202 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:19:10.354467 kubelet[3202]: E0128 01:19:10.354439 3202 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:19:10.354467 kubelet[3202]: I0128 01:19:10.354461 3202 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6c6224c1-45a4-4e67-9483-34412dd5913e-kubelet-dir\") pod \"csi-node-driver-5fjx2\" (UID: \"6c6224c1-45a4-4e67-9483-34412dd5913e\") " pod="calico-system/csi-node-driver-5fjx2" Jan 28 01:19:10.354704 kubelet[3202]: E0128 01:19:10.354679 3202 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:19:10.354704 kubelet[3202]: W0128 01:19:10.354697 3202 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:19:10.354927 kubelet[3202]: E0128 01:19:10.354709 3202 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:19:10.354927 kubelet[3202]: I0128 01:19:10.354736 3202 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/6c6224c1-45a4-4e67-9483-34412dd5913e-registration-dir\") pod \"csi-node-driver-5fjx2\" (UID: \"6c6224c1-45a4-4e67-9483-34412dd5913e\") " pod="calico-system/csi-node-driver-5fjx2" Jan 28 01:19:10.354996 kubelet[3202]: E0128 01:19:10.354946 3202 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:19:10.354996 kubelet[3202]: W0128 01:19:10.354954 3202 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:19:10.354996 kubelet[3202]: E0128 01:19:10.354962 3202 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:19:10.354996 kubelet[3202]: I0128 01:19:10.354981 3202 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/6c6224c1-45a4-4e67-9483-34412dd5913e-varrun\") pod \"csi-node-driver-5fjx2\" (UID: \"6c6224c1-45a4-4e67-9483-34412dd5913e\") " pod="calico-system/csi-node-driver-5fjx2" Jan 28 01:19:10.355267 kubelet[3202]: E0128 01:19:10.355243 3202 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:19:10.355301 kubelet[3202]: W0128 01:19:10.355267 3202 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:19:10.355301 kubelet[3202]: E0128 01:19:10.355279 3202 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:19:10.355348 kubelet[3202]: I0128 01:19:10.355306 3202 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4bxdn\" (UniqueName: \"kubernetes.io/projected/6c6224c1-45a4-4e67-9483-34412dd5913e-kube-api-access-4bxdn\") pod \"csi-node-driver-5fjx2\" (UID: \"6c6224c1-45a4-4e67-9483-34412dd5913e\") " pod="calico-system/csi-node-driver-5fjx2" Jan 28 01:19:10.355545 kubelet[3202]: E0128 01:19:10.355525 3202 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:19:10.355545 kubelet[3202]: W0128 01:19:10.355541 3202 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:19:10.355673 kubelet[3202]: E0128 01:19:10.355552 3202 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:19:10.355673 kubelet[3202]: I0128 01:19:10.355575 3202 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/6c6224c1-45a4-4e67-9483-34412dd5913e-socket-dir\") pod \"csi-node-driver-5fjx2\" (UID: \"6c6224c1-45a4-4e67-9483-34412dd5913e\") " pod="calico-system/csi-node-driver-5fjx2" Jan 28 01:19:10.355790 kubelet[3202]: E0128 01:19:10.355768 3202 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:19:10.355790 kubelet[3202]: W0128 01:19:10.355787 3202 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:19:10.355847 kubelet[3202]: E0128 01:19:10.355794 3202 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:19:10.355988 kubelet[3202]: E0128 01:19:10.355966 3202 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:19:10.355988 kubelet[3202]: W0128 01:19:10.355986 3202 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:19:10.356045 kubelet[3202]: E0128 01:19:10.355994 3202 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:19:10.356172 kubelet[3202]: E0128 01:19:10.356158 3202 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:19:10.356172 kubelet[3202]: W0128 01:19:10.356167 3202 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:19:10.356235 kubelet[3202]: E0128 01:19:10.356174 3202 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:19:10.356380 kubelet[3202]: E0128 01:19:10.356359 3202 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:19:10.356380 kubelet[3202]: W0128 01:19:10.356374 3202 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:19:10.356542 kubelet[3202]: E0128 01:19:10.356386 3202 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:19:10.356575 kubelet[3202]: E0128 01:19:10.356567 3202 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:19:10.356598 kubelet[3202]: W0128 01:19:10.356574 3202 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:19:10.356598 kubelet[3202]: E0128 01:19:10.356581 3202 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:19:10.356818 kubelet[3202]: E0128 01:19:10.356743 3202 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:19:10.356818 kubelet[3202]: W0128 01:19:10.356815 3202 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:19:10.356883 kubelet[3202]: E0128 01:19:10.356823 3202 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:19:10.357027 kubelet[3202]: E0128 01:19:10.357012 3202 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:19:10.357027 kubelet[3202]: W0128 01:19:10.357022 3202 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:19:10.357083 kubelet[3202]: E0128 01:19:10.357030 3202 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:19:10.357227 kubelet[3202]: E0128 01:19:10.357208 3202 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:19:10.357227 kubelet[3202]: W0128 01:19:10.357220 3202 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:19:10.357311 kubelet[3202]: E0128 01:19:10.357231 3202 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:19:10.357477 kubelet[3202]: E0128 01:19:10.357455 3202 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:19:10.357477 kubelet[3202]: W0128 01:19:10.357472 3202 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:19:10.357559 kubelet[3202]: E0128 01:19:10.357484 3202 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:19:10.357680 kubelet[3202]: E0128 01:19:10.357666 3202 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:19:10.357680 kubelet[3202]: W0128 01:19:10.357677 3202 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:19:10.357738 kubelet[3202]: E0128 01:19:10.357684 3202 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:19:10.436533 containerd[1849]: time="2026-01-28T01:19:10.436457171Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-vfzjm,Uid:06859e4a-9f17-461b-8ad9-92a2252a593d,Namespace:calico-system,Attempt:0,}" Jan 28 01:19:10.456392 kubelet[3202]: E0128 01:19:10.456307 3202 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:19:10.456392 kubelet[3202]: W0128 01:19:10.456331 3202 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:19:10.456392 kubelet[3202]: E0128 01:19:10.456349 3202 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:19:10.456764 kubelet[3202]: E0128 01:19:10.456558 3202 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:19:10.456764 kubelet[3202]: W0128 01:19:10.456565 3202 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:19:10.456764 kubelet[3202]: E0128 01:19:10.456574 3202 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:19:10.457077 kubelet[3202]: E0128 01:19:10.456836 3202 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:19:10.457077 kubelet[3202]: W0128 01:19:10.456852 3202 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:19:10.457077 kubelet[3202]: E0128 01:19:10.456866 3202 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:19:10.457223 kubelet[3202]: E0128 01:19:10.457203 3202 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:19:10.457223 kubelet[3202]: W0128 01:19:10.457215 3202 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:19:10.457223 kubelet[3202]: E0128 01:19:10.457225 3202 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:19:10.457409 kubelet[3202]: E0128 01:19:10.457394 3202 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:19:10.457409 kubelet[3202]: W0128 01:19:10.457404 3202 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:19:10.457475 kubelet[3202]: E0128 01:19:10.457412 3202 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:19:10.457593 kubelet[3202]: E0128 01:19:10.457567 3202 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:19:10.457593 kubelet[3202]: W0128 01:19:10.457584 3202 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:19:10.457593 kubelet[3202]: E0128 01:19:10.457595 3202 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:19:10.457856 kubelet[3202]: E0128 01:19:10.457841 3202 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:19:10.457856 kubelet[3202]: W0128 01:19:10.457852 3202 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:19:10.457968 kubelet[3202]: E0128 01:19:10.457862 3202 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:19:10.458125 kubelet[3202]: E0128 01:19:10.458107 3202 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:19:10.458125 kubelet[3202]: W0128 01:19:10.458121 3202 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:19:10.458212 kubelet[3202]: E0128 01:19:10.458130 3202 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:19:10.458367 kubelet[3202]: E0128 01:19:10.458330 3202 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:19:10.458367 kubelet[3202]: W0128 01:19:10.458347 3202 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:19:10.458367 kubelet[3202]: E0128 01:19:10.458355 3202 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:19:10.458568 kubelet[3202]: E0128 01:19:10.458525 3202 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:19:10.458568 kubelet[3202]: W0128 01:19:10.458531 3202 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:19:10.458568 kubelet[3202]: E0128 01:19:10.458538 3202 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:19:10.458765 kubelet[3202]: E0128 01:19:10.458707 3202 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:19:10.458765 kubelet[3202]: W0128 01:19:10.458713 3202 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:19:10.458765 kubelet[3202]: E0128 01:19:10.458720 3202 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:19:10.458911 kubelet[3202]: E0128 01:19:10.458897 3202 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:19:10.458911 kubelet[3202]: W0128 01:19:10.458907 3202 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:19:10.458984 kubelet[3202]: E0128 01:19:10.458913 3202 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:19:10.459113 kubelet[3202]: E0128 01:19:10.459091 3202 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:19:10.459113 kubelet[3202]: W0128 01:19:10.459106 3202 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:19:10.459198 kubelet[3202]: E0128 01:19:10.459120 3202 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:19:10.459575 kubelet[3202]: E0128 01:19:10.459504 3202 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:19:10.459575 kubelet[3202]: W0128 01:19:10.459515 3202 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:19:10.459575 kubelet[3202]: E0128 01:19:10.459523 3202 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:19:10.459879 kubelet[3202]: E0128 01:19:10.459860 3202 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:19:10.459879 kubelet[3202]: W0128 01:19:10.459873 3202 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:19:10.459879 kubelet[3202]: E0128 01:19:10.459882 3202 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:19:10.460271 kubelet[3202]: E0128 01:19:10.460093 3202 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:19:10.460271 kubelet[3202]: W0128 01:19:10.460102 3202 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:19:10.460271 kubelet[3202]: E0128 01:19:10.460110 3202 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:19:10.460359 kubelet[3202]: E0128 01:19:10.460314 3202 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:19:10.460359 kubelet[3202]: W0128 01:19:10.460321 3202 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:19:10.460359 kubelet[3202]: E0128 01:19:10.460328 3202 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:19:10.460516 kubelet[3202]: E0128 01:19:10.460498 3202 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:19:10.460516 kubelet[3202]: W0128 01:19:10.460509 3202 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:19:10.460516 kubelet[3202]: E0128 01:19:10.460516 3202 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:19:10.460708 kubelet[3202]: E0128 01:19:10.460696 3202 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:19:10.460708 kubelet[3202]: W0128 01:19:10.460707 3202 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:19:10.460795 kubelet[3202]: E0128 01:19:10.460715 3202 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:19:10.460982 kubelet[3202]: E0128 01:19:10.460963 3202 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:19:10.460982 kubelet[3202]: W0128 01:19:10.460975 3202 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:19:10.488122 kubelet[3202]: E0128 01:19:10.460983 3202 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:19:10.488122 kubelet[3202]: E0128 01:19:10.461178 3202 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:19:10.488122 kubelet[3202]: W0128 01:19:10.461185 3202 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:19:10.488122 kubelet[3202]: E0128 01:19:10.461192 3202 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:19:10.488122 kubelet[3202]: E0128 01:19:10.461382 3202 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:19:10.488122 kubelet[3202]: W0128 01:19:10.461388 3202 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:19:10.488122 kubelet[3202]: E0128 01:19:10.461403 3202 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:19:10.488122 kubelet[3202]: E0128 01:19:10.461623 3202 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:19:10.488122 kubelet[3202]: W0128 01:19:10.461635 3202 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:19:10.488122 kubelet[3202]: E0128 01:19:10.461647 3202 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:19:10.488362 kubelet[3202]: E0128 01:19:10.462054 3202 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:19:10.488362 kubelet[3202]: W0128 01:19:10.462065 3202 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:19:10.488362 kubelet[3202]: E0128 01:19:10.462078 3202 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:19:10.488362 kubelet[3202]: E0128 01:19:10.463851 3202 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:19:10.488362 kubelet[3202]: W0128 01:19:10.463866 3202 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:19:10.488362 kubelet[3202]: E0128 01:19:10.463878 3202 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:19:10.488362 kubelet[3202]: E0128 01:19:10.471822 3202 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:19:10.488362 kubelet[3202]: W0128 01:19:10.471837 3202 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:19:10.488362 kubelet[3202]: E0128 01:19:10.471854 3202 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:19:10.502835 containerd[1849]: time="2026-01-28T01:19:10.502350942Z" level=info msg="connecting to shim 70dc0d9d5f990c4d834aa375b60cd40beda92a1e5afefe3b155257e740eac679" address="unix:///run/containerd/s/6691a1b60f1fff6e47140d25c711b7e0799ab7effedc660b5f01a265b72cdd62" namespace=k8s.io protocol=ttrpc version=3 Jan 28 01:19:10.530097 systemd[1]: Started cri-containerd-70dc0d9d5f990c4d834aa375b60cd40beda92a1e5afefe3b155257e740eac679.scope - libcontainer container 70dc0d9d5f990c4d834aa375b60cd40beda92a1e5afefe3b155257e740eac679. 
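The repeated driver-call failures above come from the kubelet's FlexVolume plugin probe: it executes the driver binary (here the missing /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds) with the "init" argument and decodes whatever the process writes to stdout as JSON, so an absent executable produces empty output and the "unexpected end of JSON input" decode error. A minimal sketch of the call/response contract these entries refer to, written in Python purely for illustration (the real uds driver is not reproduced here), might look like:

#!/usr/bin/env python3
"""Hypothetical minimal FlexVolume driver, for illustration only.

The kubelet runs the driver executable with a subcommand ("init" while
probing plugins) and parses its stdout as JSON; a missing or silent
executable yields the "unexpected end of JSON input" errors seen above.
"""
import json
import sys


def main() -> int:
    command = sys.argv[1] if len(sys.argv) > 1 else ""
    if command == "init":
        # init reports success and whether the driver handles attach/detach.
        print(json.dumps({"status": "Success",
                          "capabilities": {"attach": False}}))
        return 0
    # Calls this sketch does not implement (mount, unmount, ...).
    print(json.dumps({"status": "Not supported",
                      "message": "call %r not implemented" % (command or "<none>")}))
    return 1


if __name__ == "__main__":
    sys.exit(main())

This is only meant to illustrate why an empty stdout fails the probe, not as a suggested fix for the nodeagent~uds warnings.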
Jan 28 01:19:10.541000 audit: BPF prog-id=158 op=LOAD Jan 28 01:19:10.541000 audit: BPF prog-id=159 op=LOAD Jan 28 01:19:10.541000 audit[3813]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=3802 pid=3813 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:10.541000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3730646330643964356639393063346438333461613337356236306364 Jan 28 01:19:10.542000 audit: BPF prog-id=159 op=UNLOAD Jan 28 01:19:10.542000 audit[3813]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3802 pid=3813 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:10.542000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3730646330643964356639393063346438333461613337356236306364 Jan 28 01:19:10.542000 audit: BPF prog-id=160 op=LOAD Jan 28 01:19:10.542000 audit[3813]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3802 pid=3813 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:10.542000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3730646330643964356639393063346438333461613337356236306364 Jan 28 01:19:10.542000 audit: BPF prog-id=161 op=LOAD Jan 28 01:19:10.542000 audit[3813]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3802 pid=3813 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:10.542000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3730646330643964356639393063346438333461613337356236306364 Jan 28 01:19:10.542000 audit: BPF prog-id=161 op=UNLOAD Jan 28 01:19:10.542000 audit[3813]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3802 pid=3813 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:10.542000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3730646330643964356639393063346438333461613337356236306364 Jan 28 01:19:10.542000 audit: BPF prog-id=160 op=UNLOAD Jan 28 01:19:10.542000 audit[3813]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3802 pid=3813 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:10.542000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3730646330643964356639393063346438333461613337356236306364 Jan 28 01:19:10.543000 audit: BPF prog-id=162 op=LOAD Jan 28 01:19:10.543000 audit[3813]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3802 pid=3813 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:10.543000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3730646330643964356639393063346438333461613337356236306364 Jan 28 01:19:10.561660 containerd[1849]: time="2026-01-28T01:19:10.561594067Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-vfzjm,Uid:06859e4a-9f17-461b-8ad9-92a2252a593d,Namespace:calico-system,Attempt:0,} returns sandbox id \"70dc0d9d5f990c4d834aa375b60cd40beda92a1e5afefe3b155257e740eac679\"" Jan 28 01:19:10.563620 containerd[1849]: time="2026-01-28T01:19:10.563580919Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Jan 28 01:19:10.572072 containerd[1849]: time="2026-01-28T01:19:10.572034840Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-54cb8b4c94-2phzc,Uid:4f8be7f5-f430-4d5b-bfd8-17a8886e07c4,Namespace:calico-system,Attempt:0,}" Jan 28 01:19:10.610472 containerd[1849]: time="2026-01-28T01:19:10.610392081Z" level=info msg="connecting to shim 109171bc269b85335d4c8fe9a96f719545f88d3c25ed4a406381d24eec338552" address="unix:///run/containerd/s/51e0f9eebcd958f0834c8b5106cd1e09f054d703d5414771a37a3b6ca4c38fe6" namespace=k8s.io protocol=ttrpc version=3 Jan 28 01:19:10.632087 systemd[1]: Started cri-containerd-109171bc269b85335d4c8fe9a96f719545f88d3c25ed4a406381d24eec338552.scope - libcontainer container 109171bc269b85335d4c8fe9a96f719545f88d3c25ed4a406381d24eec338552. 
Jan 28 01:19:10.643000 audit: BPF prog-id=163 op=LOAD Jan 28 01:19:10.644000 audit: BPF prog-id=164 op=LOAD Jan 28 01:19:10.644000 audit[3861]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=3849 pid=3861 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:10.644000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3130393137316263323639623835333335643463386665396139366637 Jan 28 01:19:10.644000 audit: BPF prog-id=164 op=UNLOAD Jan 28 01:19:10.644000 audit[3861]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3849 pid=3861 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:10.644000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3130393137316263323639623835333335643463386665396139366637 Jan 28 01:19:10.644000 audit: BPF prog-id=165 op=LOAD Jan 28 01:19:10.644000 audit[3861]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3849 pid=3861 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:10.644000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3130393137316263323639623835333335643463386665396139366637 Jan 28 01:19:10.644000 audit: BPF prog-id=166 op=LOAD Jan 28 01:19:10.644000 audit[3861]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3849 pid=3861 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:10.644000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3130393137316263323639623835333335643463386665396139366637 Jan 28 01:19:10.644000 audit: BPF prog-id=166 op=UNLOAD Jan 28 01:19:10.644000 audit[3861]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3849 pid=3861 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:10.644000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3130393137316263323639623835333335643463386665396139366637 Jan 28 01:19:10.644000 audit: BPF prog-id=165 op=UNLOAD Jan 28 01:19:10.644000 audit[3861]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3849 pid=3861 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:10.644000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3130393137316263323639623835333335643463386665396139366637 Jan 28 01:19:10.644000 audit: BPF prog-id=167 op=LOAD Jan 28 01:19:10.644000 audit[3861]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3849 pid=3861 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:10.644000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3130393137316263323639623835333335643463386665396139366637 Jan 28 01:19:10.693546 containerd[1849]: time="2026-01-28T01:19:10.693511935Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-54cb8b4c94-2phzc,Uid:4f8be7f5-f430-4d5b-bfd8-17a8886e07c4,Namespace:calico-system,Attempt:0,} returns sandbox id \"109171bc269b85335d4c8fe9a96f719545f88d3c25ed4a406381d24eec338552\"" Jan 28 01:19:10.927000 audit[3888]: NETFILTER_CFG table=filter:117 family=2 entries=22 op=nft_register_rule pid=3888 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:19:10.927000 audit[3888]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffcc5ae6370 a2=0 a3=7ffcc5ae635c items=0 ppid=3451 pid=3888 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:10.927000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:19:10.931000 audit[3888]: NETFILTER_CFG table=nat:118 family=2 entries=12 op=nft_register_rule pid=3888 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:19:10.931000 audit[3888]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffcc5ae6370 a2=0 a3=0 items=0 ppid=3451 pid=3888 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:10.931000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:19:12.002547 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3569349453.mount: Deactivated successfully. 
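The audit records interleaved with the container start-up carry the runc command line in the PROCTITLE field as one hex string with NUL-separated arguments. A small helper (hypothetical, just for reading these entries) makes them legible:

def decode_proctitle(hex_string: str) -> str:
    """Decode an audit PROCTITLE value: hex-encoded, NUL-separated argv."""
    raw = bytes.fromhex(hex_string)
    return " ".join(part.decode("utf-8", errors="replace")
                    for part in raw.split(b"\x00") if part)


# Prefix of one of the proctitle values logged above; decodes to
# "runc --root /run/containerd/runc/k8s.io".
print(decode_proctitle(
    "72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E63"
    "2F6B38732E696F"))

The full values here decode to runc invocations of the form "runc --root /run/containerd/runc/k8s.io --log /run/containerd/io.containerd.runtime.v2.task/k8s.io/<container-id>...", cut short in the record itself.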
Jan 28 01:19:12.137032 containerd[1849]: time="2026-01-28T01:19:12.136971319Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:19:12.140956 containerd[1849]: time="2026-01-28T01:19:12.140763030Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=0" Jan 28 01:19:12.145922 containerd[1849]: time="2026-01-28T01:19:12.145859094Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:19:12.150408 containerd[1849]: time="2026-01-28T01:19:12.150335583Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:19:12.150892 containerd[1849]: time="2026-01-28T01:19:12.150777487Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.587162207s" Jan 28 01:19:12.150892 containerd[1849]: time="2026-01-28T01:19:12.150816896Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Jan 28 01:19:12.159215 containerd[1849]: time="2026-01-28T01:19:12.159147618Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Jan 28 01:19:12.164772 containerd[1849]: time="2026-01-28T01:19:12.163669827Z" level=info msg="CreateContainer within sandbox \"70dc0d9d5f990c4d834aa375b60cd40beda92a1e5afefe3b155257e740eac679\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 28 01:19:12.179601 containerd[1849]: time="2026-01-28T01:19:12.179563884Z" level=info msg="Container e5d38a1f3299a93e8a839a9a7bfce8eefcf65f462ce466f681cb86f9b39005c2: CDI devices from CRI Config.CDIDevices: []" Jan 28 01:19:12.205070 kubelet[3202]: E0128 01:19:12.205016 3202 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5fjx2" podUID="6c6224c1-45a4-4e67-9483-34412dd5913e" Jan 28 01:19:12.213461 containerd[1849]: time="2026-01-28T01:19:12.213412806Z" level=info msg="CreateContainer within sandbox \"70dc0d9d5f990c4d834aa375b60cd40beda92a1e5afefe3b155257e740eac679\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"e5d38a1f3299a93e8a839a9a7bfce8eefcf65f462ce466f681cb86f9b39005c2\"" Jan 28 01:19:12.214218 containerd[1849]: time="2026-01-28T01:19:12.214168750Z" level=info msg="StartContainer for \"e5d38a1f3299a93e8a839a9a7bfce8eefcf65f462ce466f681cb86f9b39005c2\"" Jan 28 01:19:12.215844 containerd[1849]: time="2026-01-28T01:19:12.215822087Z" level=info msg="connecting to shim e5d38a1f3299a93e8a839a9a7bfce8eefcf65f462ce466f681cb86f9b39005c2" address="unix:///run/containerd/s/6691a1b60f1fff6e47140d25c711b7e0799ab7effedc660b5f01a265b72cdd62" protocol=ttrpc 
version=3 Jan 28 01:19:12.242057 systemd[1]: Started cri-containerd-e5d38a1f3299a93e8a839a9a7bfce8eefcf65f462ce466f681cb86f9b39005c2.scope - libcontainer container e5d38a1f3299a93e8a839a9a7bfce8eefcf65f462ce466f681cb86f9b39005c2. Jan 28 01:19:12.298000 audit: BPF prog-id=168 op=LOAD Jan 28 01:19:12.298000 audit[3897]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000138488 a2=98 a3=0 items=0 ppid=3802 pid=3897 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:12.298000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6535643338613166333239396139336538613833396139613762666365 Jan 28 01:19:12.298000 audit: BPF prog-id=169 op=LOAD Jan 28 01:19:12.298000 audit[3897]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000138218 a2=98 a3=0 items=0 ppid=3802 pid=3897 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:12.298000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6535643338613166333239396139336538613833396139613762666365 Jan 28 01:19:12.298000 audit: BPF prog-id=169 op=UNLOAD Jan 28 01:19:12.298000 audit[3897]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3802 pid=3897 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:12.298000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6535643338613166333239396139336538613833396139613762666365 Jan 28 01:19:12.298000 audit: BPF prog-id=168 op=UNLOAD Jan 28 01:19:12.298000 audit[3897]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3802 pid=3897 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:12.298000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6535643338613166333239396139336538613833396139613762666365 Jan 28 01:19:12.299000 audit: BPF prog-id=170 op=LOAD Jan 28 01:19:12.299000 audit[3897]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001386e8 a2=98 a3=0 items=0 ppid=3802 pid=3897 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:12.299000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6535643338613166333239396139336538613833396139613762666365 Jan 28 01:19:12.328975 containerd[1849]: time="2026-01-28T01:19:12.328929702Z" level=info msg="StartContainer for \"e5d38a1f3299a93e8a839a9a7bfce8eefcf65f462ce466f681cb86f9b39005c2\" returns successfully" Jan 28 01:19:12.354860 systemd[1]: cri-containerd-e5d38a1f3299a93e8a839a9a7bfce8eefcf65f462ce466f681cb86f9b39005c2.scope: Deactivated successfully. Jan 28 01:19:12.355260 systemd[1]: cri-containerd-e5d38a1f3299a93e8a839a9a7bfce8eefcf65f462ce466f681cb86f9b39005c2.scope: Consumed 41ms CPU time, 6.1M memory peak, 4.5M written to disk. Jan 28 01:19:12.357000 audit: BPF prog-id=170 op=UNLOAD Jan 28 01:19:12.407377 containerd[1849]: time="2026-01-28T01:19:12.407312263Z" level=info msg="received container exit event container_id:\"e5d38a1f3299a93e8a839a9a7bfce8eefcf65f462ce466f681cb86f9b39005c2\" id:\"e5d38a1f3299a93e8a839a9a7bfce8eefcf65f462ce466f681cb86f9b39005c2\" pid:3911 exited_at:{seconds:1769563152 nanos:366182086}" Jan 28 01:19:12.459687 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e5d38a1f3299a93e8a839a9a7bfce8eefcf65f462ce466f681cb86f9b39005c2-rootfs.mount: Deactivated successfully. Jan 28 01:19:14.203530 kubelet[3202]: E0128 01:19:14.202207 3202 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5fjx2" podUID="6c6224c1-45a4-4e67-9483-34412dd5913e" Jan 28 01:19:15.334982 containerd[1849]: time="2026-01-28T01:19:15.334928727Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:19:15.337069 containerd[1849]: time="2026-01-28T01:19:15.336919741Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33735893" Jan 28 01:19:15.339217 containerd[1849]: time="2026-01-28T01:19:15.339186761Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:19:15.347873 containerd[1849]: time="2026-01-28T01:19:15.347808520Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:19:15.348459 containerd[1849]: time="2026-01-28T01:19:15.348297445Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 3.189099407s" Jan 28 01:19:15.348459 containerd[1849]: time="2026-01-28T01:19:15.348329676Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Jan 28 01:19:15.349644 containerd[1849]: time="2026-01-28T01:19:15.349617546Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/cni:v3.30.4\"" Jan 28 01:19:15.370375 containerd[1849]: time="2026-01-28T01:19:15.370336466Z" level=info msg="CreateContainer within sandbox \"109171bc269b85335d4c8fe9a96f719545f88d3c25ed4a406381d24eec338552\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 28 01:19:15.406150 containerd[1849]: time="2026-01-28T01:19:15.406107227Z" level=info msg="Container a830a74aa6f2312f033ddefcab95418498b72277ffa88f3e75bc44a25904bc80: CDI devices from CRI Config.CDIDevices: []" Jan 28 01:19:15.443504 containerd[1849]: time="2026-01-28T01:19:15.443451224Z" level=info msg="CreateContainer within sandbox \"109171bc269b85335d4c8fe9a96f719545f88d3c25ed4a406381d24eec338552\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"a830a74aa6f2312f033ddefcab95418498b72277ffa88f3e75bc44a25904bc80\"" Jan 28 01:19:15.444410 containerd[1849]: time="2026-01-28T01:19:15.444375586Z" level=info msg="StartContainer for \"a830a74aa6f2312f033ddefcab95418498b72277ffa88f3e75bc44a25904bc80\"" Jan 28 01:19:15.447327 containerd[1849]: time="2026-01-28T01:19:15.447279346Z" level=info msg="connecting to shim a830a74aa6f2312f033ddefcab95418498b72277ffa88f3e75bc44a25904bc80" address="unix:///run/containerd/s/51e0f9eebcd958f0834c8b5106cd1e09f054d703d5414771a37a3b6ca4c38fe6" protocol=ttrpc version=3 Jan 28 01:19:15.498060 systemd[1]: Started cri-containerd-a830a74aa6f2312f033ddefcab95418498b72277ffa88f3e75bc44a25904bc80.scope - libcontainer container a830a74aa6f2312f033ddefcab95418498b72277ffa88f3e75bc44a25904bc80. Jan 28 01:19:15.512263 kernel: kauditd_printk_skb: 80 callbacks suppressed Jan 28 01:19:15.512353 kernel: audit: type=1334 audit(1769563155.510:574): prog-id=171 op=LOAD Jan 28 01:19:15.510000 audit: BPF prog-id=171 op=LOAD Jan 28 01:19:15.514000 audit: BPF prog-id=172 op=LOAD Jan 28 01:19:15.516915 kernel: audit: type=1334 audit(1769563155.514:575): prog-id=172 op=LOAD Jan 28 01:19:15.516984 kernel: audit: type=1300 audit(1769563155.514:575): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=3849 pid=3954 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:15.514000 audit[3954]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=3849 pid=3954 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:15.514000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6138333061373461613666323331326630333364646566636162393534 Jan 28 01:19:15.529871 kernel: audit: type=1327 audit(1769563155.514:575): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6138333061373461613666323331326630333364646566636162393534 Jan 28 01:19:15.514000 audit: BPF prog-id=172 op=UNLOAD Jan 28 01:19:15.539564 kernel: audit: type=1334 audit(1769563155.514:576): prog-id=172 op=UNLOAD Jan 28 01:19:15.539685 kernel: audit: type=1300 audit(1769563155.514:576): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 
items=0 ppid=3849 pid=3954 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:15.514000 audit[3954]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3849 pid=3954 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:15.514000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6138333061373461613666323331326630333364646566636162393534 Jan 28 01:19:15.548082 kernel: audit: type=1327 audit(1769563155.514:576): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6138333061373461613666323331326630333364646566636162393534 Jan 28 01:19:15.548193 kernel: audit: type=1334 audit(1769563155.514:577): prog-id=173 op=LOAD Jan 28 01:19:15.514000 audit: BPF prog-id=173 op=LOAD Jan 28 01:19:15.554589 kernel: audit: type=1300 audit(1769563155.514:577): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3849 pid=3954 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:15.514000 audit[3954]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3849 pid=3954 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:15.514000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6138333061373461613666323331326630333364646566636162393534 Jan 28 01:19:15.514000 audit: BPF prog-id=174 op=LOAD Jan 28 01:19:15.514000 audit[3954]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3849 pid=3954 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:15.514000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6138333061373461613666323331326630333364646566636162393534 Jan 28 01:19:15.514000 audit: BPF prog-id=174 op=UNLOAD Jan 28 01:19:15.514000 audit[3954]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3849 pid=3954 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:15.561807 kernel: audit: type=1327 audit(1769563155.514:577): 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6138333061373461613666323331326630333364646566636162393534 Jan 28 01:19:15.514000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6138333061373461613666323331326630333364646566636162393534 Jan 28 01:19:15.514000 audit: BPF prog-id=173 op=UNLOAD Jan 28 01:19:15.514000 audit[3954]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3849 pid=3954 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:15.514000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6138333061373461613666323331326630333364646566636162393534 Jan 28 01:19:15.514000 audit: BPF prog-id=175 op=LOAD Jan 28 01:19:15.514000 audit[3954]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3849 pid=3954 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:15.514000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6138333061373461613666323331326630333364646566636162393534 Jan 28 01:19:15.607145 containerd[1849]: time="2026-01-28T01:19:15.607035476Z" level=info msg="StartContainer for \"a830a74aa6f2312f033ddefcab95418498b72277ffa88f3e75bc44a25904bc80\" returns successfully" Jan 28 01:19:16.201556 kubelet[3202]: E0128 01:19:16.201496 3202 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5fjx2" podUID="6c6224c1-45a4-4e67-9483-34412dd5913e" Jan 28 01:19:17.369366 kubelet[3202]: I0128 01:19:17.369320 3202 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 28 01:19:18.202308 kubelet[3202]: E0128 01:19:18.202257 3202 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5fjx2" podUID="6c6224c1-45a4-4e67-9483-34412dd5913e" Jan 28 01:19:20.201720 kubelet[3202]: E0128 01:19:20.201345 3202 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5fjx2" podUID="6c6224c1-45a4-4e67-9483-34412dd5913e" Jan 28 01:19:20.786175 kubelet[3202]: I0128 01:19:20.786143 3202 prober_manager.go:312] "Failed to trigger a manual run" 
probe="Readiness" Jan 28 01:19:20.814158 kubelet[3202]: I0128 01:19:20.813987 3202 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-54cb8b4c94-2phzc" podStartSLOduration=7.159319183 podStartE2EDuration="11.813965191s" podCreationTimestamp="2026-01-28 01:19:09 +0000 UTC" firstStartedPulling="2026-01-28 01:19:10.694623433 +0000 UTC m=+24.712510400" lastFinishedPulling="2026-01-28 01:19:15.349269429 +0000 UTC m=+29.367156408" observedRunningTime="2026-01-28 01:19:16.383431475 +0000 UTC m=+30.401318463" watchObservedRunningTime="2026-01-28 01:19:20.813965191 +0000 UTC m=+34.831852181" Jan 28 01:19:20.866000 audit[3995]: NETFILTER_CFG table=filter:119 family=2 entries=21 op=nft_register_rule pid=3995 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:19:20.868004 kernel: kauditd_printk_skb: 12 callbacks suppressed Jan 28 01:19:20.868680 kernel: audit: type=1325 audit(1769563160.866:582): table=filter:119 family=2 entries=21 op=nft_register_rule pid=3995 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:19:20.866000 audit[3995]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffd649f6a60 a2=0 a3=7ffd649f6a4c items=0 ppid=3451 pid=3995 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:20.878843 kernel: audit: type=1300 audit(1769563160.866:582): arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffd649f6a60 a2=0 a3=7ffd649f6a4c items=0 ppid=3451 pid=3995 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:20.878946 kernel: audit: type=1327 audit(1769563160.866:582): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:19:20.866000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:19:20.877000 audit[3995]: NETFILTER_CFG table=nat:120 family=2 entries=19 op=nft_register_chain pid=3995 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:19:20.877000 audit[3995]: SYSCALL arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffd649f6a60 a2=0 a3=7ffd649f6a4c items=0 ppid=3451 pid=3995 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:20.892457 kernel: audit: type=1325 audit(1769563160.877:583): table=nat:120 family=2 entries=19 op=nft_register_chain pid=3995 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:19:20.892591 kernel: audit: type=1300 audit(1769563160.877:583): arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffd649f6a60 a2=0 a3=7ffd649f6a4c items=0 ppid=3451 pid=3995 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:20.877000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:19:20.896768 kernel: audit: type=1327 audit(1769563160.877:583): 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:19:21.659276 containerd[1849]: time="2026-01-28T01:19:21.658970447Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:19:21.661098 containerd[1849]: time="2026-01-28T01:19:21.660923855Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70442291" Jan 28 01:19:21.663207 containerd[1849]: time="2026-01-28T01:19:21.663177606Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:19:21.666684 containerd[1849]: time="2026-01-28T01:19:21.666649236Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:19:21.667422 containerd[1849]: time="2026-01-28T01:19:21.667225330Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 6.317577648s" Jan 28 01:19:21.667422 containerd[1849]: time="2026-01-28T01:19:21.667251142Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Jan 28 01:19:21.672850 containerd[1849]: time="2026-01-28T01:19:21.672809212Z" level=info msg="CreateContainer within sandbox \"70dc0d9d5f990c4d834aa375b60cd40beda92a1e5afefe3b155257e740eac679\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 28 01:19:21.691694 containerd[1849]: time="2026-01-28T01:19:21.688432716Z" level=info msg="Container e9a486d53b58e1ea48f7e018714b0e87d8a91c323ad85ff46f2d1152bbd89696: CDI devices from CRI Config.CDIDevices: []" Jan 28 01:19:21.718766 containerd[1849]: time="2026-01-28T01:19:21.718338099Z" level=info msg="CreateContainer within sandbox \"70dc0d9d5f990c4d834aa375b60cd40beda92a1e5afefe3b155257e740eac679\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"e9a486d53b58e1ea48f7e018714b0e87d8a91c323ad85ff46f2d1152bbd89696\"" Jan 28 01:19:21.723342 containerd[1849]: time="2026-01-28T01:19:21.722948701Z" level=info msg="StartContainer for \"e9a486d53b58e1ea48f7e018714b0e87d8a91c323ad85ff46f2d1152bbd89696\"" Jan 28 01:19:21.725259 containerd[1849]: time="2026-01-28T01:19:21.725214968Z" level=info msg="connecting to shim e9a486d53b58e1ea48f7e018714b0e87d8a91c323ad85ff46f2d1152bbd89696" address="unix:///run/containerd/s/6691a1b60f1fff6e47140d25c711b7e0799ab7effedc660b5f01a265b72cdd62" protocol=ttrpc version=3 Jan 28 01:19:21.750640 systemd[1]: Started cri-containerd-e9a486d53b58e1ea48f7e018714b0e87d8a91c323ad85ff46f2d1152bbd89696.scope - libcontainer container e9a486d53b58e1ea48f7e018714b0e87d8a91c323ad85ff46f2d1152bbd89696. 
Jan 28 01:19:21.825000 audit: BPF prog-id=176 op=LOAD Jan 28 01:19:21.825000 audit[4001]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=3802 pid=4001 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:21.829138 kernel: audit: type=1334 audit(1769563161.825:584): prog-id=176 op=LOAD Jan 28 01:19:21.833460 kernel: audit: type=1300 audit(1769563161.825:584): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=3802 pid=4001 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:21.833502 kernel: audit: type=1327 audit(1769563161.825:584): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6539613438366435336235386531656134386637653031383731346230 Jan 28 01:19:21.825000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6539613438366435336235386531656134386637653031383731346230 Jan 28 01:19:21.825000 audit: BPF prog-id=177 op=LOAD Jan 28 01:19:21.841012 kernel: audit: type=1334 audit(1769563161.825:585): prog-id=177 op=LOAD Jan 28 01:19:21.825000 audit[4001]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=3802 pid=4001 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:21.825000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6539613438366435336235386531656134386637653031383731346230 Jan 28 01:19:21.825000 audit: BPF prog-id=177 op=UNLOAD Jan 28 01:19:21.825000 audit[4001]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3802 pid=4001 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:21.825000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6539613438366435336235386531656134386637653031383731346230 Jan 28 01:19:21.825000 audit: BPF prog-id=176 op=UNLOAD Jan 28 01:19:21.825000 audit[4001]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3802 pid=4001 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:21.825000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6539613438366435336235386531656134386637653031383731346230 Jan 28 01:19:21.825000 audit: BPF prog-id=178 op=LOAD Jan 28 01:19:21.825000 audit[4001]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=3802 pid=4001 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:21.825000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6539613438366435336235386531656134386637653031383731346230 Jan 28 01:19:21.890874 containerd[1849]: time="2026-01-28T01:19:21.890818448Z" level=info msg="StartContainer for \"e9a486d53b58e1ea48f7e018714b0e87d8a91c323ad85ff46f2d1152bbd89696\" returns successfully" Jan 28 01:19:22.224784 kubelet[3202]: E0128 01:19:22.224718 3202 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5fjx2" podUID="6c6224c1-45a4-4e67-9483-34412dd5913e" Jan 28 01:19:22.856089 systemd[1]: cri-containerd-e9a486d53b58e1ea48f7e018714b0e87d8a91c323ad85ff46f2d1152bbd89696.scope: Deactivated successfully. Jan 28 01:19:22.860000 audit: BPF prog-id=178 op=UNLOAD Jan 28 01:19:22.856960 systemd[1]: cri-containerd-e9a486d53b58e1ea48f7e018714b0e87d8a91c323ad85ff46f2d1152bbd89696.scope: Consumed 520ms CPU time, 163.6M memory peak, 4.1M read from disk, 171.3M written to disk. Jan 28 01:19:22.905353 containerd[1849]: time="2026-01-28T01:19:22.905304321Z" level=info msg="received container exit event container_id:\"e9a486d53b58e1ea48f7e018714b0e87d8a91c323ad85ff46f2d1152bbd89696\" id:\"e9a486d53b58e1ea48f7e018714b0e87d8a91c323ad85ff46f2d1152bbd89696\" pid:4014 exited_at:{seconds:1769563162 nanos:894694733}" Jan 28 01:19:22.977257 kubelet[3202]: I0128 01:19:22.977171 3202 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 28 01:19:22.993461 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e9a486d53b58e1ea48f7e018714b0e87d8a91c323ad85ff46f2d1152bbd89696-rootfs.mount: Deactivated successfully. Jan 28 01:19:23.047041 systemd[1]: Created slice kubepods-burstable-podff239d8a_c337_42b4_8142_43d1fa64b8e0.slice - libcontainer container kubepods-burstable-podff239d8a_c337_42b4_8142_43d1fa64b8e0.slice. Jan 28 01:19:23.067068 systemd[1]: Created slice kubepods-besteffort-pod325cd625_25dd_4d22_8523_67e469e6d0e9.slice - libcontainer container kubepods-besteffort-pod325cd625_25dd_4d22_8523_67e469e6d0e9.slice. Jan 28 01:19:23.092364 systemd[1]: Created slice kubepods-burstable-pod4f85ac2a_2c17_47d1_bfad_b39e838e4361.slice - libcontainer container kubepods-burstable-pod4f85ac2a_2c17_47d1_bfad_b39e838e4361.slice. Jan 28 01:19:23.114769 systemd[1]: Created slice kubepods-besteffort-pod47f54516_c523_4eae_a71c_70c7983aeee2.slice - libcontainer container kubepods-besteffort-pod47f54516_c523_4eae_a71c_70c7983aeee2.slice. 
Jan 28 01:19:23.128000 systemd[1]: Created slice kubepods-besteffort-pod77c5b13e_b6c7_4da4_8d69_cf95701836c8.slice - libcontainer container kubepods-besteffort-pod77c5b13e_b6c7_4da4_8d69_cf95701836c8.slice. Jan 28 01:19:23.143395 systemd[1]: Created slice kubepods-besteffort-podc988f11c_6f16_4100_9308_ea1983457126.slice - libcontainer container kubepods-besteffort-podc988f11c_6f16_4100_9308_ea1983457126.slice. Jan 28 01:19:23.160590 systemd[1]: Created slice kubepods-besteffort-podd6acbfd4_88d6_4133_9434_3bfec2c327d4.slice - libcontainer container kubepods-besteffort-podd6acbfd4_88d6_4133_9434_3bfec2c327d4.slice. Jan 28 01:19:23.173190 systemd[1]: Created slice kubepods-besteffort-podcbd1002c_7263_43fc_8b17_789e41b44261.slice - libcontainer container kubepods-besteffort-podcbd1002c_7263_43fc_8b17_789e41b44261.slice. Jan 28 01:19:23.187557 kubelet[3202]: I0128 01:19:23.175227 3202 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vv22\" (UniqueName: \"kubernetes.io/projected/325cd625-25dd-4d22-8523-67e469e6d0e9-kube-api-access-9vv22\") pod \"calico-apiserver-74bc45499-4hxx4\" (UID: \"325cd625-25dd-4d22-8523-67e469e6d0e9\") " pod="calico-apiserver/calico-apiserver-74bc45499-4hxx4" Jan 28 01:19:23.187557 kubelet[3202]: I0128 01:19:23.175815 3202 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r774l\" (UniqueName: \"kubernetes.io/projected/ff239d8a-c337-42b4-8142-43d1fa64b8e0-kube-api-access-r774l\") pod \"coredns-674b8bbfcf-xwxpv\" (UID: \"ff239d8a-c337-42b4-8142-43d1fa64b8e0\") " pod="kube-system/coredns-674b8bbfcf-xwxpv" Jan 28 01:19:23.187557 kubelet[3202]: I0128 01:19:23.175865 3202 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/77c5b13e-b6c7-4da4-8d69-cf95701836c8-calico-apiserver-certs\") pod \"calico-apiserver-597546f57d-dtd92\" (UID: \"77c5b13e-b6c7-4da4-8d69-cf95701836c8\") " pod="calico-apiserver/calico-apiserver-597546f57d-dtd92" Jan 28 01:19:23.187557 kubelet[3202]: I0128 01:19:23.175890 3202 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cbd1002c-7263-43fc-8b17-789e41b44261-config\") pod \"goldmane-666569f655-pcsbp\" (UID: \"cbd1002c-7263-43fc-8b17-789e41b44261\") " pod="calico-system/goldmane-666569f655-pcsbp" Jan 28 01:19:23.187557 kubelet[3202]: I0128 01:19:23.175918 3202 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmdlc\" (UniqueName: \"kubernetes.io/projected/4f85ac2a-2c17-47d1-bfad-b39e838e4361-kube-api-access-lmdlc\") pod \"coredns-674b8bbfcf-ggpx4\" (UID: \"4f85ac2a-2c17-47d1-bfad-b39e838e4361\") " pod="kube-system/coredns-674b8bbfcf-ggpx4" Jan 28 01:19:23.187895 kubelet[3202]: I0128 01:19:23.175952 3202 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/47f54516-c523-4eae-a71c-70c7983aeee2-whisker-backend-key-pair\") pod \"whisker-548f4d67b5-b26lw\" (UID: \"47f54516-c523-4eae-a71c-70c7983aeee2\") " pod="calico-system/whisker-548f4d67b5-b26lw" Jan 28 01:19:23.187895 kubelet[3202]: I0128 01:19:23.175987 3202 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/47f54516-c523-4eae-a71c-70c7983aeee2-whisker-ca-bundle\") pod \"whisker-548f4d67b5-b26lw\" (UID: \"47f54516-c523-4eae-a71c-70c7983aeee2\") " pod="calico-system/whisker-548f4d67b5-b26lw" Jan 28 01:19:23.187895 kubelet[3202]: I0128 01:19:23.176036 3202 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ff239d8a-c337-42b4-8142-43d1fa64b8e0-config-volume\") pod \"coredns-674b8bbfcf-xwxpv\" (UID: \"ff239d8a-c337-42b4-8142-43d1fa64b8e0\") " pod="kube-system/coredns-674b8bbfcf-xwxpv" Jan 28 01:19:23.187895 kubelet[3202]: I0128 01:19:23.176065 3202 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2875x\" (UniqueName: \"kubernetes.io/projected/c988f11c-6f16-4100-9308-ea1983457126-kube-api-access-2875x\") pod \"calico-kube-controllers-dbbbff994-sqgh4\" (UID: \"c988f11c-6f16-4100-9308-ea1983457126\") " pod="calico-system/calico-kube-controllers-dbbbff994-sqgh4" Jan 28 01:19:23.187895 kubelet[3202]: I0128 01:19:23.176094 3202 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cbd1002c-7263-43fc-8b17-789e41b44261-goldmane-ca-bundle\") pod \"goldmane-666569f655-pcsbp\" (UID: \"cbd1002c-7263-43fc-8b17-789e41b44261\") " pod="calico-system/goldmane-666569f655-pcsbp" Jan 28 01:19:23.188022 kubelet[3202]: I0128 01:19:23.176130 3202 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvmng\" (UniqueName: \"kubernetes.io/projected/47f54516-c523-4eae-a71c-70c7983aeee2-kube-api-access-zvmng\") pod \"whisker-548f4d67b5-b26lw\" (UID: \"47f54516-c523-4eae-a71c-70c7983aeee2\") " pod="calico-system/whisker-548f4d67b5-b26lw" Jan 28 01:19:23.188022 kubelet[3202]: I0128 01:19:23.176152 3202 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/cbd1002c-7263-43fc-8b17-789e41b44261-goldmane-key-pair\") pod \"goldmane-666569f655-pcsbp\" (UID: \"cbd1002c-7263-43fc-8b17-789e41b44261\") " pod="calico-system/goldmane-666569f655-pcsbp" Jan 28 01:19:23.188022 kubelet[3202]: I0128 01:19:23.176180 3202 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c988f11c-6f16-4100-9308-ea1983457126-tigera-ca-bundle\") pod \"calico-kube-controllers-dbbbff994-sqgh4\" (UID: \"c988f11c-6f16-4100-9308-ea1983457126\") " pod="calico-system/calico-kube-controllers-dbbbff994-sqgh4" Jan 28 01:19:23.188022 kubelet[3202]: I0128 01:19:23.176204 3202 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-544bz\" (UniqueName: \"kubernetes.io/projected/cbd1002c-7263-43fc-8b17-789e41b44261-kube-api-access-544bz\") pod \"goldmane-666569f655-pcsbp\" (UID: \"cbd1002c-7263-43fc-8b17-789e41b44261\") " pod="calico-system/goldmane-666569f655-pcsbp" Jan 28 01:19:23.188022 kubelet[3202]: I0128 01:19:23.176230 3202 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/325cd625-25dd-4d22-8523-67e469e6d0e9-calico-apiserver-certs\") pod \"calico-apiserver-74bc45499-4hxx4\" (UID: \"325cd625-25dd-4d22-8523-67e469e6d0e9\") " 
pod="calico-apiserver/calico-apiserver-74bc45499-4hxx4" Jan 28 01:19:23.188146 kubelet[3202]: I0128 01:19:23.176261 3202 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5pfgn\" (UniqueName: \"kubernetes.io/projected/77c5b13e-b6c7-4da4-8d69-cf95701836c8-kube-api-access-5pfgn\") pod \"calico-apiserver-597546f57d-dtd92\" (UID: \"77c5b13e-b6c7-4da4-8d69-cf95701836c8\") " pod="calico-apiserver/calico-apiserver-597546f57d-dtd92" Jan 28 01:19:23.188146 kubelet[3202]: I0128 01:19:23.176283 3202 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4f85ac2a-2c17-47d1-bfad-b39e838e4361-config-volume\") pod \"coredns-674b8bbfcf-ggpx4\" (UID: \"4f85ac2a-2c17-47d1-bfad-b39e838e4361\") " pod="kube-system/coredns-674b8bbfcf-ggpx4" Jan 28 01:19:23.277437 kubelet[3202]: I0128 01:19:23.277389 3202 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/d6acbfd4-88d6-4133-9434-3bfec2c327d4-calico-apiserver-certs\") pod \"calico-apiserver-74bc45499-2znnx\" (UID: \"d6acbfd4-88d6-4133-9434-3bfec2c327d4\") " pod="calico-apiserver/calico-apiserver-74bc45499-2znnx" Jan 28 01:19:23.278002 kubelet[3202]: I0128 01:19:23.277475 3202 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqxhp\" (UniqueName: \"kubernetes.io/projected/d6acbfd4-88d6-4133-9434-3bfec2c327d4-kube-api-access-pqxhp\") pod \"calico-apiserver-74bc45499-2znnx\" (UID: \"d6acbfd4-88d6-4133-9434-3bfec2c327d4\") " pod="calico-apiserver/calico-apiserver-74bc45499-2znnx" Jan 28 01:19:23.356992 containerd[1849]: time="2026-01-28T01:19:23.356016175Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-xwxpv,Uid:ff239d8a-c337-42b4-8142-43d1fa64b8e0,Namespace:kube-system,Attempt:0,}" Jan 28 01:19:23.382740 containerd[1849]: time="2026-01-28T01:19:23.381445614Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74bc45499-4hxx4,Uid:325cd625-25dd-4d22-8523-67e469e6d0e9,Namespace:calico-apiserver,Attempt:0,}" Jan 28 01:19:23.410618 containerd[1849]: time="2026-01-28T01:19:23.410578776Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-ggpx4,Uid:4f85ac2a-2c17-47d1-bfad-b39e838e4361,Namespace:kube-system,Attempt:0,}" Jan 28 01:19:23.428740 containerd[1849]: time="2026-01-28T01:19:23.428697094Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-548f4d67b5-b26lw,Uid:47f54516-c523-4eae-a71c-70c7983aeee2,Namespace:calico-system,Attempt:0,}" Jan 28 01:19:23.437192 containerd[1849]: time="2026-01-28T01:19:23.437149791Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-597546f57d-dtd92,Uid:77c5b13e-b6c7-4da4-8d69-cf95701836c8,Namespace:calico-apiserver,Attempt:0,}" Jan 28 01:19:23.458046 containerd[1849]: time="2026-01-28T01:19:23.457991562Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-dbbbff994-sqgh4,Uid:c988f11c-6f16-4100-9308-ea1983457126,Namespace:calico-system,Attempt:0,}" Jan 28 01:19:23.463671 containerd[1849]: time="2026-01-28T01:19:23.463581050Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 28 01:19:23.471581 containerd[1849]: time="2026-01-28T01:19:23.471238720Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-74bc45499-2znnx,Uid:d6acbfd4-88d6-4133-9434-3bfec2c327d4,Namespace:calico-apiserver,Attempt:0,}" Jan 28 01:19:23.481159 containerd[1849]: time="2026-01-28T01:19:23.479915194Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-pcsbp,Uid:cbd1002c-7263-43fc-8b17-789e41b44261,Namespace:calico-system,Attempt:0,}" Jan 28 01:19:23.835018 containerd[1849]: time="2026-01-28T01:19:23.834912879Z" level=error msg="Failed to destroy network for sandbox \"92640e06e341731c3700e0151b62671f20218de12ed322c3af72b1ea070a327d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:19:23.843044 containerd[1849]: time="2026-01-28T01:19:23.842843537Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-597546f57d-dtd92,Uid:77c5b13e-b6c7-4da4-8d69-cf95701836c8,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"92640e06e341731c3700e0151b62671f20218de12ed322c3af72b1ea070a327d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:19:23.844791 kubelet[3202]: E0128 01:19:23.843788 3202 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"92640e06e341731c3700e0151b62671f20218de12ed322c3af72b1ea070a327d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:19:23.844791 kubelet[3202]: E0128 01:19:23.843883 3202 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"92640e06e341731c3700e0151b62671f20218de12ed322c3af72b1ea070a327d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-597546f57d-dtd92" Jan 28 01:19:23.844791 kubelet[3202]: E0128 01:19:23.843917 3202 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"92640e06e341731c3700e0151b62671f20218de12ed322c3af72b1ea070a327d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-597546f57d-dtd92" Jan 28 01:19:23.845053 kubelet[3202]: E0128 01:19:23.843989 3202 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-597546f57d-dtd92_calico-apiserver(77c5b13e-b6c7-4da4-8d69-cf95701836c8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-597546f57d-dtd92_calico-apiserver(77c5b13e-b6c7-4da4-8d69-cf95701836c8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"92640e06e341731c3700e0151b62671f20218de12ed322c3af72b1ea070a327d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-apiserver/calico-apiserver-597546f57d-dtd92" podUID="77c5b13e-b6c7-4da4-8d69-cf95701836c8" Jan 28 01:19:23.860284 containerd[1849]: time="2026-01-28T01:19:23.859775861Z" level=error msg="Failed to destroy network for sandbox \"193a4b399ac094e61393f47013928646d747ae79c2b00f28c907cb773f0a7f76\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:19:23.866316 containerd[1849]: time="2026-01-28T01:19:23.865174684Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74bc45499-4hxx4,Uid:325cd625-25dd-4d22-8523-67e469e6d0e9,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"193a4b399ac094e61393f47013928646d747ae79c2b00f28c907cb773f0a7f76\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:19:23.871397 containerd[1849]: time="2026-01-28T01:19:23.871347058Z" level=error msg="Failed to destroy network for sandbox \"9a8a93e2993f6eafcd1a627ef7c8943ab99637dce4a0a6d64fba585780ee4e05\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:19:23.876058 containerd[1849]: time="2026-01-28T01:19:23.875807681Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74bc45499-2znnx,Uid:d6acbfd4-88d6-4133-9434-3bfec2c327d4,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a8a93e2993f6eafcd1a627ef7c8943ab99637dce4a0a6d64fba585780ee4e05\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:19:23.877248 containerd[1849]: time="2026-01-28T01:19:23.877213271Z" level=error msg="Failed to destroy network for sandbox \"66ee58078362da1e88a01c6f71e2bd79523f6591a14e03d90d0ac0db6dcb551e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:19:23.879358 kubelet[3202]: E0128 01:19:23.879309 3202 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"193a4b399ac094e61393f47013928646d747ae79c2b00f28c907cb773f0a7f76\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:19:23.880385 kubelet[3202]: E0128 01:19:23.879475 3202 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a8a93e2993f6eafcd1a627ef7c8943ab99637dce4a0a6d64fba585780ee4e05\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:19:23.880385 kubelet[3202]: E0128 01:19:23.880084 3202 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"9a8a93e2993f6eafcd1a627ef7c8943ab99637dce4a0a6d64fba585780ee4e05\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-74bc45499-2znnx" Jan 28 01:19:23.880385 kubelet[3202]: E0128 01:19:23.880116 3202 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a8a93e2993f6eafcd1a627ef7c8943ab99637dce4a0a6d64fba585780ee4e05\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-74bc45499-2znnx" Jan 28 01:19:23.880580 kubelet[3202]: E0128 01:19:23.880174 3202 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-74bc45499-2znnx_calico-apiserver(d6acbfd4-88d6-4133-9434-3bfec2c327d4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-74bc45499-2znnx_calico-apiserver(d6acbfd4-88d6-4133-9434-3bfec2c327d4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9a8a93e2993f6eafcd1a627ef7c8943ab99637dce4a0a6d64fba585780ee4e05\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-74bc45499-2znnx" podUID="d6acbfd4-88d6-4133-9434-3bfec2c327d4" Jan 28 01:19:23.882034 containerd[1849]: time="2026-01-28T01:19:23.881368366Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-dbbbff994-sqgh4,Uid:c988f11c-6f16-4100-9308-ea1983457126,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"66ee58078362da1e88a01c6f71e2bd79523f6591a14e03d90d0ac0db6dcb551e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:19:23.882161 kubelet[3202]: E0128 01:19:23.880045 3202 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"193a4b399ac094e61393f47013928646d747ae79c2b00f28c907cb773f0a7f76\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-74bc45499-4hxx4" Jan 28 01:19:23.882161 kubelet[3202]: E0128 01:19:23.881622 3202 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"193a4b399ac094e61393f47013928646d747ae79c2b00f28c907cb773f0a7f76\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-74bc45499-4hxx4" Jan 28 01:19:23.882161 kubelet[3202]: E0128 01:19:23.881678 3202 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-74bc45499-4hxx4_calico-apiserver(325cd625-25dd-4d22-8523-67e469e6d0e9)\" with CreatePodSandboxError: \"Failed to create sandbox for 
pod \\\"calico-apiserver-74bc45499-4hxx4_calico-apiserver(325cd625-25dd-4d22-8523-67e469e6d0e9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"193a4b399ac094e61393f47013928646d747ae79c2b00f28c907cb773f0a7f76\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-74bc45499-4hxx4" podUID="325cd625-25dd-4d22-8523-67e469e6d0e9" Jan 28 01:19:23.883250 kubelet[3202]: E0128 01:19:23.882500 3202 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"66ee58078362da1e88a01c6f71e2bd79523f6591a14e03d90d0ac0db6dcb551e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:19:23.885118 kubelet[3202]: E0128 01:19:23.884018 3202 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"66ee58078362da1e88a01c6f71e2bd79523f6591a14e03d90d0ac0db6dcb551e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-dbbbff994-sqgh4" Jan 28 01:19:23.885118 kubelet[3202]: E0128 01:19:23.884869 3202 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"66ee58078362da1e88a01c6f71e2bd79523f6591a14e03d90d0ac0db6dcb551e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-dbbbff994-sqgh4" Jan 28 01:19:23.885118 kubelet[3202]: E0128 01:19:23.885031 3202 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-dbbbff994-sqgh4_calico-system(c988f11c-6f16-4100-9308-ea1983457126)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-dbbbff994-sqgh4_calico-system(c988f11c-6f16-4100-9308-ea1983457126)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"66ee58078362da1e88a01c6f71e2bd79523f6591a14e03d90d0ac0db6dcb551e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-dbbbff994-sqgh4" podUID="c988f11c-6f16-4100-9308-ea1983457126" Jan 28 01:19:23.894452 containerd[1849]: time="2026-01-28T01:19:23.894406800Z" level=error msg="Failed to destroy network for sandbox \"b1f8ec103e34dc69f6a7ad7beb94d63225cffe9bc23104b9836bceac70a9b748\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:19:23.900965 containerd[1849]: time="2026-01-28T01:19:23.900838756Z" level=error msg="Failed to destroy network for sandbox \"1ce160c93b2fb498a0e4cdef76fb09835e9663cc111752892d567ad489cfbce8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:19:23.904782 containerd[1849]: time="2026-01-28T01:19:23.904661430Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-ggpx4,Uid:4f85ac2a-2c17-47d1-bfad-b39e838e4361,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b1f8ec103e34dc69f6a7ad7beb94d63225cffe9bc23104b9836bceac70a9b748\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:19:23.905602 kubelet[3202]: E0128 01:19:23.905533 3202 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b1f8ec103e34dc69f6a7ad7beb94d63225cffe9bc23104b9836bceac70a9b748\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:19:23.905602 kubelet[3202]: E0128 01:19:23.905595 3202 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b1f8ec103e34dc69f6a7ad7beb94d63225cffe9bc23104b9836bceac70a9b748\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-ggpx4" Jan 28 01:19:23.905954 kubelet[3202]: E0128 01:19:23.905620 3202 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b1f8ec103e34dc69f6a7ad7beb94d63225cffe9bc23104b9836bceac70a9b748\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-ggpx4" Jan 28 01:19:23.905954 kubelet[3202]: E0128 01:19:23.905690 3202 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-ggpx4_kube-system(4f85ac2a-2c17-47d1-bfad-b39e838e4361)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-ggpx4_kube-system(4f85ac2a-2c17-47d1-bfad-b39e838e4361)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b1f8ec103e34dc69f6a7ad7beb94d63225cffe9bc23104b9836bceac70a9b748\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-ggpx4" podUID="4f85ac2a-2c17-47d1-bfad-b39e838e4361" Jan 28 01:19:23.949274 kubelet[3202]: E0128 01:19:23.908888 3202 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1ce160c93b2fb498a0e4cdef76fb09835e9663cc111752892d567ad489cfbce8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:19:23.949274 kubelet[3202]: E0128 01:19:23.908939 3202 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"1ce160c93b2fb498a0e4cdef76fb09835e9663cc111752892d567ad489cfbce8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-pcsbp" Jan 28 01:19:23.949274 kubelet[3202]: E0128 01:19:23.908965 3202 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1ce160c93b2fb498a0e4cdef76fb09835e9663cc111752892d567ad489cfbce8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-pcsbp" Jan 28 01:19:23.949411 containerd[1849]: time="2026-01-28T01:19:23.908637262Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-pcsbp,Uid:cbd1002c-7263-43fc-8b17-789e41b44261,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1ce160c93b2fb498a0e4cdef76fb09835e9663cc111752892d567ad489cfbce8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:19:23.949411 containerd[1849]: time="2026-01-28T01:19:23.910979313Z" level=error msg="Failed to destroy network for sandbox \"e452339576fac2ab5a95972f0ff353de6564bcd0558285f937d97cea2d319e24\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:19:23.949411 containerd[1849]: time="2026-01-28T01:19:23.912716664Z" level=error msg="Failed to destroy network for sandbox \"a703cf4e52dfc6c40b3d3ece98d90f4acf6df30a696fb557db73606bc5b3a2bf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:19:23.949411 containerd[1849]: time="2026-01-28T01:19:23.915285902Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-548f4d67b5-b26lw,Uid:47f54516-c523-4eae-a71c-70c7983aeee2,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e452339576fac2ab5a95972f0ff353de6564bcd0558285f937d97cea2d319e24\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:19:23.949950 kubelet[3202]: E0128 01:19:23.909018 3202 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-pcsbp_calico-system(cbd1002c-7263-43fc-8b17-789e41b44261)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-pcsbp_calico-system(cbd1002c-7263-43fc-8b17-789e41b44261)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1ce160c93b2fb498a0e4cdef76fb09835e9663cc111752892d567ad489cfbce8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-pcsbp" podUID="cbd1002c-7263-43fc-8b17-789e41b44261" Jan 28 
01:19:23.949950 kubelet[3202]: E0128 01:19:23.915552 3202 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e452339576fac2ab5a95972f0ff353de6564bcd0558285f937d97cea2d319e24\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:19:23.949950 kubelet[3202]: E0128 01:19:23.915612 3202 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e452339576fac2ab5a95972f0ff353de6564bcd0558285f937d97cea2d319e24\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-548f4d67b5-b26lw" Jan 28 01:19:23.950085 containerd[1849]: time="2026-01-28T01:19:23.919286974Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-xwxpv,Uid:ff239d8a-c337-42b4-8142-43d1fa64b8e0,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a703cf4e52dfc6c40b3d3ece98d90f4acf6df30a696fb557db73606bc5b3a2bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:19:23.950129 kubelet[3202]: E0128 01:19:23.915641 3202 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e452339576fac2ab5a95972f0ff353de6564bcd0558285f937d97cea2d319e24\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-548f4d67b5-b26lw" Jan 28 01:19:23.950129 kubelet[3202]: E0128 01:19:23.915706 3202 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-548f4d67b5-b26lw_calico-system(47f54516-c523-4eae-a71c-70c7983aeee2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-548f4d67b5-b26lw_calico-system(47f54516-c523-4eae-a71c-70c7983aeee2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e452339576fac2ab5a95972f0ff353de6564bcd0558285f937d97cea2d319e24\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-548f4d67b5-b26lw" podUID="47f54516-c523-4eae-a71c-70c7983aeee2" Jan 28 01:19:23.950129 kubelet[3202]: E0128 01:19:23.919556 3202 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a703cf4e52dfc6c40b3d3ece98d90f4acf6df30a696fb557db73606bc5b3a2bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:19:23.950675 kubelet[3202]: E0128 01:19:23.919663 3202 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a703cf4e52dfc6c40b3d3ece98d90f4acf6df30a696fb557db73606bc5b3a2bf\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-xwxpv" Jan 28 01:19:23.950675 kubelet[3202]: E0128 01:19:23.919683 3202 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a703cf4e52dfc6c40b3d3ece98d90f4acf6df30a696fb557db73606bc5b3a2bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-xwxpv" Jan 28 01:19:23.950675 kubelet[3202]: E0128 01:19:23.919793 3202 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-xwxpv_kube-system(ff239d8a-c337-42b4-8142-43d1fa64b8e0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-xwxpv_kube-system(ff239d8a-c337-42b4-8142-43d1fa64b8e0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a703cf4e52dfc6c40b3d3ece98d90f4acf6df30a696fb557db73606bc5b3a2bf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-xwxpv" podUID="ff239d8a-c337-42b4-8142-43d1fa64b8e0" Jan 28 01:19:24.207514 systemd[1]: Created slice kubepods-besteffort-pod6c6224c1_45a4_4e67_9483_34412dd5913e.slice - libcontainer container kubepods-besteffort-pod6c6224c1_45a4_4e67_9483_34412dd5913e.slice. Jan 28 01:19:24.210128 containerd[1849]: time="2026-01-28T01:19:24.210091393Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5fjx2,Uid:6c6224c1-45a4-4e67-9483-34412dd5913e,Namespace:calico-system,Attempt:0,}" Jan 28 01:19:24.269558 containerd[1849]: time="2026-01-28T01:19:24.269498983Z" level=error msg="Failed to destroy network for sandbox \"59219b8308ba2f0cfa5b3e5607682a2045cef1e5929b39d0c838f033fe83eb75\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:19:24.273025 systemd[1]: run-netns-cni\x2dc6b2f4b7\x2dc49b\x2d7762\x2dd49d\x2d3eb1be01f9b6.mount: Deactivated successfully. 
Jan 28 01:19:24.280597 containerd[1849]: time="2026-01-28T01:19:24.280458296Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5fjx2,Uid:6c6224c1-45a4-4e67-9483-34412dd5913e,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"59219b8308ba2f0cfa5b3e5607682a2045cef1e5929b39d0c838f033fe83eb75\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:19:24.281121 kubelet[3202]: E0128 01:19:24.280733 3202 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"59219b8308ba2f0cfa5b3e5607682a2045cef1e5929b39d0c838f033fe83eb75\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:19:24.281121 kubelet[3202]: E0128 01:19:24.280836 3202 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"59219b8308ba2f0cfa5b3e5607682a2045cef1e5929b39d0c838f033fe83eb75\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-5fjx2" Jan 28 01:19:24.281121 kubelet[3202]: E0128 01:19:24.280864 3202 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"59219b8308ba2f0cfa5b3e5607682a2045cef1e5929b39d0c838f033fe83eb75\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-5fjx2" Jan 28 01:19:24.281586 kubelet[3202]: E0128 01:19:24.280956 3202 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-5fjx2_calico-system(6c6224c1-45a4-4e67-9483-34412dd5913e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-5fjx2_calico-system(6c6224c1-45a4-4e67-9483-34412dd5913e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"59219b8308ba2f0cfa5b3e5607682a2045cef1e5929b39d0c838f033fe83eb75\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-5fjx2" podUID="6c6224c1-45a4-4e67-9483-34412dd5913e" Jan 28 01:19:30.264478 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2635184382.mount: Deactivated successfully. 
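Every failed CNI add/delete above stops at the same point: the plugin stats /var/lib/calico/nodename, which does not exist yet because the calico/node container has not started and written it (the error text itself says to check that calico/node is running and has mounted /var/lib/calico/). Purely as an illustration of that readiness check, and not Calico's actual code, a standard-library Go sketch against the same path:

// nodename_check.go - illustrative only; mirrors the stat the CNI errors report.
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const path = "/var/lib/calico/nodename"
	data, err := os.ReadFile(path)
	if err != nil {
		// The condition the kubelet keeps reporting above: calico/node is not
		// running yet, or /var/lib/calico is not mounted where the CNI can see it.
		fmt.Fprintf(os.Stderr, "stat %s: %v\n", path, err)
		os.Exit(1)
	}
	fmt.Println("nodename:", strings.TrimSpace(string(data)))
}

The failures clear on their own a few entries later, once the calico-node container is pulled and started.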
Jan 28 01:19:30.345579 containerd[1849]: time="2026-01-28T01:19:30.330073723Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:19:30.351950 containerd[1849]: time="2026-01-28T01:19:30.351893522Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156880025" Jan 28 01:19:30.389083 containerd[1849]: time="2026-01-28T01:19:30.389028777Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:19:30.394069 containerd[1849]: time="2026-01-28T01:19:30.393030509Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:19:30.394330 containerd[1849]: time="2026-01-28T01:19:30.393693844Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 6.93005631s" Jan 28 01:19:30.405103 containerd[1849]: time="2026-01-28T01:19:30.405038598Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Jan 28 01:19:30.433909 containerd[1849]: time="2026-01-28T01:19:30.433711097Z" level=info msg="CreateContainer within sandbox \"70dc0d9d5f990c4d834aa375b60cd40beda92a1e5afefe3b155257e740eac679\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 28 01:19:30.543868 containerd[1849]: time="2026-01-28T01:19:30.542101852Z" level=info msg="Container 4d7f18e6103cf90a8760127bb416d3071b33e5d236c1670cc12dcb0d2406e212: CDI devices from CRI Config.CDIDevices: []" Jan 28 01:19:30.543953 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2857159401.mount: Deactivated successfully. Jan 28 01:19:30.611897 containerd[1849]: time="2026-01-28T01:19:30.611860942Z" level=info msg="CreateContainer within sandbox \"70dc0d9d5f990c4d834aa375b60cd40beda92a1e5afefe3b155257e740eac679\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"4d7f18e6103cf90a8760127bb416d3071b33e5d236c1670cc12dcb0d2406e212\"" Jan 28 01:19:30.613045 containerd[1849]: time="2026-01-28T01:19:30.612994986Z" level=info msg="StartContainer for \"4d7f18e6103cf90a8760127bb416d3071b33e5d236c1670cc12dcb0d2406e212\"" Jan 28 01:19:30.618056 containerd[1849]: time="2026-01-28T01:19:30.618008695Z" level=info msg="connecting to shim 4d7f18e6103cf90a8760127bb416d3071b33e5d236c1670cc12dcb0d2406e212" address="unix:///run/containerd/s/6691a1b60f1fff6e47140d25c711b7e0799ab7effedc660b5f01a265b72cdd62" protocol=ttrpc version=3 Jan 28 01:19:30.716241 systemd[1]: Started cri-containerd-4d7f18e6103cf90a8760127bb416d3071b33e5d236c1670cc12dcb0d2406e212.scope - libcontainer container 4d7f18e6103cf90a8760127bb416d3071b33e5d236c1670cc12dcb0d2406e212. 
Jan 28 01:19:30.776000 audit: BPF prog-id=179 op=LOAD Jan 28 01:19:30.778765 kernel: kauditd_printk_skb: 12 callbacks suppressed Jan 28 01:19:30.814651 kernel: audit: type=1334 audit(1769563170.776:590): prog-id=179 op=LOAD Jan 28 01:19:30.814734 kernel: audit: type=1300 audit(1769563170.776:590): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000198488 a2=98 a3=0 items=0 ppid=3802 pid=4298 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:30.814775 kernel: audit: type=1327 audit(1769563170.776:590): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464376631386536313033636639306138373630313237626234313664 Jan 28 01:19:30.816165 kernel: audit: type=1334 audit(1769563170.776:591): prog-id=180 op=LOAD Jan 28 01:19:30.816200 kernel: audit: type=1300 audit(1769563170.776:591): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000198218 a2=98 a3=0 items=0 ppid=3802 pid=4298 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:30.816220 kernel: audit: type=1327 audit(1769563170.776:591): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464376631386536313033636639306138373630313237626234313664 Jan 28 01:19:30.816246 kernel: audit: type=1334 audit(1769563170.776:592): prog-id=180 op=UNLOAD Jan 28 01:19:30.816601 kernel: audit: type=1300 audit(1769563170.776:592): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3802 pid=4298 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:30.816632 kernel: audit: type=1327 audit(1769563170.776:592): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464376631386536313033636639306138373630313237626234313664 Jan 28 01:19:30.816655 kernel: audit: type=1334 audit(1769563170.776:593): prog-id=179 op=UNLOAD Jan 28 01:19:30.776000 audit[4298]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000198488 a2=98 a3=0 items=0 ppid=3802 pid=4298 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:30.776000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464376631386536313033636639306138373630313237626234313664 Jan 28 01:19:30.776000 audit: BPF prog-id=180 op=LOAD Jan 28 01:19:30.776000 audit[4298]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000198218 a2=98 a3=0 items=0 ppid=3802 pid=4298 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:30.776000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464376631386536313033636639306138373630313237626234313664 Jan 28 01:19:30.776000 audit: BPF prog-id=180 op=UNLOAD Jan 28 01:19:30.776000 audit[4298]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3802 pid=4298 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:30.776000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464376631386536313033636639306138373630313237626234313664 Jan 28 01:19:30.776000 audit: BPF prog-id=179 op=UNLOAD Jan 28 01:19:30.776000 audit[4298]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3802 pid=4298 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:30.776000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464376631386536313033636639306138373630313237626234313664 Jan 28 01:19:30.776000 audit: BPF prog-id=181 op=LOAD Jan 28 01:19:30.776000 audit[4298]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001986e8 a2=98 a3=0 items=0 ppid=3802 pid=4298 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:30.776000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464376631386536313033636639306138373630313237626234313664 Jan 28 01:19:30.853247 containerd[1849]: time="2026-01-28T01:19:30.853199021Z" level=info msg="StartContainer for \"4d7f18e6103cf90a8760127bb416d3071b33e5d236c1670cc12dcb0d2406e212\" returns successfully" Jan 28 01:19:31.117109 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 28 01:19:31.117359 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Jan 28 01:19:31.537145 kubelet[3202]: I0128 01:19:31.534329 3202 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-vfzjm" podStartSLOduration=1.6907404339999998 podStartE2EDuration="21.533634896s" podCreationTimestamp="2026-01-28 01:19:10 +0000 UTC" firstStartedPulling="2026-01-28 01:19:10.563154911 +0000 UTC m=+24.581041889" lastFinishedPulling="2026-01-28 01:19:30.40604937 +0000 UTC m=+44.423936351" observedRunningTime="2026-01-28 01:19:31.532913532 +0000 UTC m=+45.550800520" watchObservedRunningTime="2026-01-28 01:19:31.533634896 +0000 UTC m=+45.551521883" Jan 28 01:19:31.539298 kubelet[3202]: I0128 01:19:31.539063 3202 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/47f54516-c523-4eae-a71c-70c7983aeee2-whisker-backend-key-pair\") pod \"47f54516-c523-4eae-a71c-70c7983aeee2\" (UID: \"47f54516-c523-4eae-a71c-70c7983aeee2\") " Jan 28 01:19:31.539298 kubelet[3202]: I0128 01:19:31.539124 3202 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zvmng\" (UniqueName: \"kubernetes.io/projected/47f54516-c523-4eae-a71c-70c7983aeee2-kube-api-access-zvmng\") pod \"47f54516-c523-4eae-a71c-70c7983aeee2\" (UID: \"47f54516-c523-4eae-a71c-70c7983aeee2\") " Jan 28 01:19:31.539298 kubelet[3202]: I0128 01:19:31.539157 3202 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/47f54516-c523-4eae-a71c-70c7983aeee2-whisker-ca-bundle\") pod \"47f54516-c523-4eae-a71c-70c7983aeee2\" (UID: \"47f54516-c523-4eae-a71c-70c7983aeee2\") " Jan 28 01:19:31.555847 kubelet[3202]: I0128 01:19:31.553450 3202 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47f54516-c523-4eae-a71c-70c7983aeee2-kube-api-access-zvmng" (OuterVolumeSpecName: "kube-api-access-zvmng") pod "47f54516-c523-4eae-a71c-70c7983aeee2" (UID: "47f54516-c523-4eae-a71c-70c7983aeee2"). InnerVolumeSpecName "kube-api-access-zvmng". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 28 01:19:31.555847 kubelet[3202]: I0128 01:19:31.553740 3202 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/47f54516-c523-4eae-a71c-70c7983aeee2-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "47f54516-c523-4eae-a71c-70c7983aeee2" (UID: "47f54516-c523-4eae-a71c-70c7983aeee2"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 28 01:19:31.555347 systemd[1]: var-lib-kubelet-pods-47f54516\x2dc523\x2d4eae\x2da71c\x2d70c7983aeee2-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dzvmng.mount: Deactivated successfully. Jan 28 01:19:31.557176 kubelet[3202]: I0128 01:19:31.556486 3202 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/47f54516-c523-4eae-a71c-70c7983aeee2-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "47f54516-c523-4eae-a71c-70c7983aeee2" (UID: "47f54516-c523-4eae-a71c-70c7983aeee2"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 28 01:19:31.555447 systemd[1]: var-lib-kubelet-pods-47f54516\x2dc523\x2d4eae\x2da71c\x2d70c7983aeee2-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
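In the pod_startup_latency_tracker entry above, podStartSLOduration is the end-to-end figure with the image-pull window removed: using the monotonic m=+ offsets, 44.423936351 - 24.581041889 = 19.842894462 s of pulling, and 21.533634896 - 19.842894462 = 1.690740434 s, which matches the logged value. A tiny Go check of that arithmetic (numbers copied from the entry; the subtraction rule is inferred from those numbers, not taken from kubelet source):

// slo_check.go - verifies the podStartSLOduration arithmetic from the entry above.
package main

import "fmt"

func main() {
	// Monotonic offsets (m=+...) and the E2E duration copied from the kubelet entry.
	firstStartedPulling := 24.581041889 // s
	lastFinishedPulling := 44.423936351 // s
	podStartE2E := 21.533634896         // s

	pullWindow := lastFinishedPulling - firstStartedPulling
	fmt.Printf("pull window:  %.9f s\n", pullWindow)              // ~19.842894462 s
	fmt.Printf("SLO duration: %.9f s\n", podStartE2E-pullWindow)  // ~1.690740434 s
}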
Jan 28 01:19:31.640281 kubelet[3202]: I0128 01:19:31.640235 3202 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/47f54516-c523-4eae-a71c-70c7983aeee2-whisker-ca-bundle\") on node \"ip-172-31-31-26\" DevicePath \"\"" Jan 28 01:19:31.640281 kubelet[3202]: I0128 01:19:31.640276 3202 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/47f54516-c523-4eae-a71c-70c7983aeee2-whisker-backend-key-pair\") on node \"ip-172-31-31-26\" DevicePath \"\"" Jan 28 01:19:31.640281 kubelet[3202]: I0128 01:19:31.640289 3202 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zvmng\" (UniqueName: \"kubernetes.io/projected/47f54516-c523-4eae-a71c-70c7983aeee2-kube-api-access-zvmng\") on node \"ip-172-31-31-26\" DevicePath \"\"" Jan 28 01:19:31.820857 systemd[1]: Removed slice kubepods-besteffort-pod47f54516_c523_4eae_a71c_70c7983aeee2.slice - libcontainer container kubepods-besteffort-pod47f54516_c523_4eae_a71c_70c7983aeee2.slice. Jan 28 01:19:31.944632 kubelet[3202]: I0128 01:19:31.944592 3202 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9accd949-c4fa-4ce7-b3a8-524deb448a06-whisker-ca-bundle\") pod \"whisker-799d7c8d49-2vrfp\" (UID: \"9accd949-c4fa-4ce7-b3a8-524deb448a06\") " pod="calico-system/whisker-799d7c8d49-2vrfp" Jan 28 01:19:31.944852 kubelet[3202]: I0128 01:19:31.944654 3202 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6pdxw\" (UniqueName: \"kubernetes.io/projected/9accd949-c4fa-4ce7-b3a8-524deb448a06-kube-api-access-6pdxw\") pod \"whisker-799d7c8d49-2vrfp\" (UID: \"9accd949-c4fa-4ce7-b3a8-524deb448a06\") " pod="calico-system/whisker-799d7c8d49-2vrfp" Jan 28 01:19:31.944852 kubelet[3202]: I0128 01:19:31.944702 3202 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/9accd949-c4fa-4ce7-b3a8-524deb448a06-whisker-backend-key-pair\") pod \"whisker-799d7c8d49-2vrfp\" (UID: \"9accd949-c4fa-4ce7-b3a8-524deb448a06\") " pod="calico-system/whisker-799d7c8d49-2vrfp" Jan 28 01:19:31.955944 systemd[1]: Created slice kubepods-besteffort-pod9accd949_c4fa_4ce7_b3a8_524deb448a06.slice - libcontainer container kubepods-besteffort-pod9accd949_c4fa_4ce7_b3a8_524deb448a06.slice. Jan 28 01:19:32.205018 kubelet[3202]: I0128 01:19:32.204950 3202 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="47f54516-c523-4eae-a71c-70c7983aeee2" path="/var/lib/kubelet/pods/47f54516-c523-4eae-a71c-70c7983aeee2/volumes" Jan 28 01:19:32.264934 containerd[1849]: time="2026-01-28T01:19:32.264883075Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-799d7c8d49-2vrfp,Uid:9accd949-c4fa-4ce7-b3a8-524deb448a06,Namespace:calico-system,Attempt:0,}" Jan 28 01:19:32.816201 (udev-worker)[4341]: Network interface NamePolicy= disabled on kernel command line. 
Jan 28 01:19:32.821387 systemd-networkd[1458]: cali3055b20ed95: Link UP Jan 28 01:19:32.821589 systemd-networkd[1458]: cali3055b20ed95: Gained carrier Jan 28 01:19:32.856350 containerd[1849]: 2026-01-28 01:19:32.310 [INFO][4426] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 28 01:19:32.856350 containerd[1849]: 2026-01-28 01:19:32.410 [INFO][4426] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--31--26-k8s-whisker--799d7c8d49--2vrfp-eth0 whisker-799d7c8d49- calico-system 9accd949-c4fa-4ce7-b3a8-524deb448a06 920 0 2026-01-28 01:19:31 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:799d7c8d49 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ip-172-31-31-26 whisker-799d7c8d49-2vrfp eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali3055b20ed95 [] [] }} ContainerID="4b6949aa5be439a00abcf492daf8b31d02190bf3bedce96f8f437e80dcfd140d" Namespace="calico-system" Pod="whisker-799d7c8d49-2vrfp" WorkloadEndpoint="ip--172--31--31--26-k8s-whisker--799d7c8d49--2vrfp-" Jan 28 01:19:32.856350 containerd[1849]: 2026-01-28 01:19:32.410 [INFO][4426] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4b6949aa5be439a00abcf492daf8b31d02190bf3bedce96f8f437e80dcfd140d" Namespace="calico-system" Pod="whisker-799d7c8d49-2vrfp" WorkloadEndpoint="ip--172--31--31--26-k8s-whisker--799d7c8d49--2vrfp-eth0" Jan 28 01:19:32.856350 containerd[1849]: 2026-01-28 01:19:32.727 [INFO][4434] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4b6949aa5be439a00abcf492daf8b31d02190bf3bedce96f8f437e80dcfd140d" HandleID="k8s-pod-network.4b6949aa5be439a00abcf492daf8b31d02190bf3bedce96f8f437e80dcfd140d" Workload="ip--172--31--31--26-k8s-whisker--799d7c8d49--2vrfp-eth0" Jan 28 01:19:32.857247 containerd[1849]: 2026-01-28 01:19:32.733 [INFO][4434] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="4b6949aa5be439a00abcf492daf8b31d02190bf3bedce96f8f437e80dcfd140d" HandleID="k8s-pod-network.4b6949aa5be439a00abcf492daf8b31d02190bf3bedce96f8f437e80dcfd140d" Workload="ip--172--31--31--26-k8s-whisker--799d7c8d49--2vrfp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00031c2f0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-31-26", "pod":"whisker-799d7c8d49-2vrfp", "timestamp":"2026-01-28 01:19:32.727987827 +0000 UTC"}, Hostname:"ip-172-31-31-26", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 28 01:19:32.857247 containerd[1849]: 2026-01-28 01:19:32.733 [INFO][4434] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:19:32.857247 containerd[1849]: 2026-01-28 01:19:32.733 [INFO][4434] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 28 01:19:32.857247 containerd[1849]: 2026-01-28 01:19:32.734 [INFO][4434] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-31-26' Jan 28 01:19:32.857247 containerd[1849]: 2026-01-28 01:19:32.752 [INFO][4434] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4b6949aa5be439a00abcf492daf8b31d02190bf3bedce96f8f437e80dcfd140d" host="ip-172-31-31-26" Jan 28 01:19:32.857247 containerd[1849]: 2026-01-28 01:19:32.772 [INFO][4434] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-31-26" Jan 28 01:19:32.857247 containerd[1849]: 2026-01-28 01:19:32.778 [INFO][4434] ipam/ipam.go 511: Trying affinity for 192.168.24.0/26 host="ip-172-31-31-26" Jan 28 01:19:32.857247 containerd[1849]: 2026-01-28 01:19:32.782 [INFO][4434] ipam/ipam.go 158: Attempting to load block cidr=192.168.24.0/26 host="ip-172-31-31-26" Jan 28 01:19:32.857247 containerd[1849]: 2026-01-28 01:19:32.784 [INFO][4434] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.24.0/26 host="ip-172-31-31-26" Jan 28 01:19:32.857247 containerd[1849]: 2026-01-28 01:19:32.784 [INFO][4434] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.24.0/26 handle="k8s-pod-network.4b6949aa5be439a00abcf492daf8b31d02190bf3bedce96f8f437e80dcfd140d" host="ip-172-31-31-26" Jan 28 01:19:32.859159 containerd[1849]: 2026-01-28 01:19:32.785 [INFO][4434] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.4b6949aa5be439a00abcf492daf8b31d02190bf3bedce96f8f437e80dcfd140d Jan 28 01:19:32.859159 containerd[1849]: 2026-01-28 01:19:32.791 [INFO][4434] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.24.0/26 handle="k8s-pod-network.4b6949aa5be439a00abcf492daf8b31d02190bf3bedce96f8f437e80dcfd140d" host="ip-172-31-31-26" Jan 28 01:19:32.859159 containerd[1849]: 2026-01-28 01:19:32.797 [INFO][4434] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.24.1/26] block=192.168.24.0/26 handle="k8s-pod-network.4b6949aa5be439a00abcf492daf8b31d02190bf3bedce96f8f437e80dcfd140d" host="ip-172-31-31-26" Jan 28 01:19:32.859159 containerd[1849]: 2026-01-28 01:19:32.797 [INFO][4434] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.24.1/26] handle="k8s-pod-network.4b6949aa5be439a00abcf492daf8b31d02190bf3bedce96f8f437e80dcfd140d" host="ip-172-31-31-26" Jan 28 01:19:32.859159 containerd[1849]: 2026-01-28 01:19:32.797 [INFO][4434] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
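The IPAM trace above locates the host's affine block 192.168.24.0/26 and claims 192.168.24.1 from it for the whisker pod. As a toy illustration of block-based assignment only (not Calico's implementation, which coordinates through its datastore), a standard-library Go sketch that walks the same /26 and hands out the first unclaimed address:

// ipam_block_sketch.go - toy allocator over the /26 block seen in the trace above.
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	block := netip.MustParsePrefix("192.168.24.0/26") // the affine block from the trace
	claimed := map[netip.Addr]bool{}                  // stand-in for the shared datastore

	// Skip the network address, then claim the first free address in the block.
	for a := block.Addr().Next(); block.Contains(a); a = a.Next() {
		if !claimed[a] {
			claimed[a] = true
			fmt.Println("assigned", a) // 192.168.24.1, as in the log
			break
		}
	}
}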
Jan 28 01:19:32.859159 containerd[1849]: 2026-01-28 01:19:32.797 [INFO][4434] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.24.1/26] IPv6=[] ContainerID="4b6949aa5be439a00abcf492daf8b31d02190bf3bedce96f8f437e80dcfd140d" HandleID="k8s-pod-network.4b6949aa5be439a00abcf492daf8b31d02190bf3bedce96f8f437e80dcfd140d" Workload="ip--172--31--31--26-k8s-whisker--799d7c8d49--2vrfp-eth0" Jan 28 01:19:32.859392 containerd[1849]: 2026-01-28 01:19:32.801 [INFO][4426] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4b6949aa5be439a00abcf492daf8b31d02190bf3bedce96f8f437e80dcfd140d" Namespace="calico-system" Pod="whisker-799d7c8d49-2vrfp" WorkloadEndpoint="ip--172--31--31--26-k8s-whisker--799d7c8d49--2vrfp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--26-k8s-whisker--799d7c8d49--2vrfp-eth0", GenerateName:"whisker-799d7c8d49-", Namespace:"calico-system", SelfLink:"", UID:"9accd949-c4fa-4ce7-b3a8-524deb448a06", ResourceVersion:"920", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 19, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"799d7c8d49", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-26", ContainerID:"", Pod:"whisker-799d7c8d49-2vrfp", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.24.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali3055b20ed95", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:19:32.859392 containerd[1849]: 2026-01-28 01:19:32.802 [INFO][4426] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.24.1/32] ContainerID="4b6949aa5be439a00abcf492daf8b31d02190bf3bedce96f8f437e80dcfd140d" Namespace="calico-system" Pod="whisker-799d7c8d49-2vrfp" WorkloadEndpoint="ip--172--31--31--26-k8s-whisker--799d7c8d49--2vrfp-eth0" Jan 28 01:19:32.860109 containerd[1849]: 2026-01-28 01:19:32.802 [INFO][4426] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3055b20ed95 ContainerID="4b6949aa5be439a00abcf492daf8b31d02190bf3bedce96f8f437e80dcfd140d" Namespace="calico-system" Pod="whisker-799d7c8d49-2vrfp" WorkloadEndpoint="ip--172--31--31--26-k8s-whisker--799d7c8d49--2vrfp-eth0" Jan 28 01:19:32.860109 containerd[1849]: 2026-01-28 01:19:32.825 [INFO][4426] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4b6949aa5be439a00abcf492daf8b31d02190bf3bedce96f8f437e80dcfd140d" Namespace="calico-system" Pod="whisker-799d7c8d49-2vrfp" WorkloadEndpoint="ip--172--31--31--26-k8s-whisker--799d7c8d49--2vrfp-eth0" Jan 28 01:19:32.860540 containerd[1849]: 2026-01-28 01:19:32.826 [INFO][4426] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4b6949aa5be439a00abcf492daf8b31d02190bf3bedce96f8f437e80dcfd140d" Namespace="calico-system" Pod="whisker-799d7c8d49-2vrfp" 
WorkloadEndpoint="ip--172--31--31--26-k8s-whisker--799d7c8d49--2vrfp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--26-k8s-whisker--799d7c8d49--2vrfp-eth0", GenerateName:"whisker-799d7c8d49-", Namespace:"calico-system", SelfLink:"", UID:"9accd949-c4fa-4ce7-b3a8-524deb448a06", ResourceVersion:"920", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 19, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"799d7c8d49", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-26", ContainerID:"4b6949aa5be439a00abcf492daf8b31d02190bf3bedce96f8f437e80dcfd140d", Pod:"whisker-799d7c8d49-2vrfp", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.24.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali3055b20ed95", MAC:"ca:c2:f2:61:65:43", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:19:32.861095 containerd[1849]: 2026-01-28 01:19:32.839 [INFO][4426] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4b6949aa5be439a00abcf492daf8b31d02190bf3bedce96f8f437e80dcfd140d" Namespace="calico-system" Pod="whisker-799d7c8d49-2vrfp" WorkloadEndpoint="ip--172--31--31--26-k8s-whisker--799d7c8d49--2vrfp-eth0" Jan 28 01:19:33.291408 containerd[1849]: time="2026-01-28T01:19:33.291351261Z" level=info msg="connecting to shim 4b6949aa5be439a00abcf492daf8b31d02190bf3bedce96f8f437e80dcfd140d" address="unix:///run/containerd/s/934a63aaab9a3de245048ac2399a86ebfad688271d786b81ba0eb47ec1f2a449" namespace=k8s.io protocol=ttrpc version=3 Jan 28 01:19:33.294000 audit: BPF prog-id=182 op=LOAD Jan 28 01:19:33.294000 audit[4582]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fffee718d70 a2=98 a3=1fffffffffffffff items=0 ppid=4484 pid=4582 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:33.294000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 28 01:19:33.294000 audit: BPF prog-id=182 op=UNLOAD Jan 28 01:19:33.294000 audit[4582]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7fffee718d40 a3=0 items=0 ppid=4484 pid=4582 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:33.294000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 28 01:19:33.296000 audit: BPF prog-id=183 op=LOAD Jan 28 01:19:33.296000 audit[4582]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fffee718c50 a2=94 a3=3 items=0 ppid=4484 pid=4582 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:33.296000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 28 01:19:33.296000 audit: BPF prog-id=183 op=UNLOAD Jan 28 01:19:33.296000 audit[4582]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7fffee718c50 a2=94 a3=3 items=0 ppid=4484 pid=4582 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:33.296000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 28 01:19:33.296000 audit: BPF prog-id=184 op=LOAD Jan 28 01:19:33.296000 audit[4582]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fffee718c90 a2=94 a3=7fffee718e70 items=0 ppid=4484 pid=4582 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:33.296000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 28 01:19:33.296000 audit: BPF prog-id=184 op=UNLOAD Jan 28 01:19:33.296000 audit[4582]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7fffee718c90 a2=94 a3=7fffee718e70 items=0 ppid=4484 pid=4582 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:33.296000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 28 01:19:33.306000 audit: BPF prog-id=185 op=LOAD Jan 28 01:19:33.306000 audit[4587]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff6b988850 a2=98 a3=3 items=0 ppid=4484 pid=4587 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:33.306000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 28 01:19:33.306000 audit: BPF 
prog-id=185 op=UNLOAD Jan 28 01:19:33.306000 audit[4587]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7fff6b988820 a3=0 items=0 ppid=4484 pid=4587 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:33.306000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 28 01:19:33.307000 audit: BPF prog-id=186 op=LOAD Jan 28 01:19:33.307000 audit[4587]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff6b988640 a2=94 a3=54428f items=0 ppid=4484 pid=4587 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:33.307000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 28 01:19:33.308000 audit: BPF prog-id=186 op=UNLOAD Jan 28 01:19:33.308000 audit[4587]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7fff6b988640 a2=94 a3=54428f items=0 ppid=4484 pid=4587 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:33.308000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 28 01:19:33.308000 audit: BPF prog-id=187 op=LOAD Jan 28 01:19:33.308000 audit[4587]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff6b988670 a2=94 a3=2 items=0 ppid=4484 pid=4587 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:33.308000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 28 01:19:33.308000 audit: BPF prog-id=187 op=UNLOAD Jan 28 01:19:33.308000 audit[4587]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7fff6b988670 a2=0 a3=2 items=0 ppid=4484 pid=4587 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:33.308000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 28 01:19:33.402041 systemd[1]: Started cri-containerd-4b6949aa5be439a00abcf492daf8b31d02190bf3bedce96f8f437e80dcfd140d.scope - libcontainer container 4b6949aa5be439a00abcf492daf8b31d02190bf3bedce96f8f437e80dcfd140d. 
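The audit PROCTITLE fields in the surrounding records are hex-encoded argv strings with NUL separators; 627066746F6F6C006D6170006C697374002D2D6A736F6E, for example, decodes to "bpftool map list --json". A short Go decoder for that encoding (standard library only, illustrative):

// proctitle_decode.go - decode an audit PROCTITLE hex string into its argv.
package main

import (
	"encoding/hex"
	"fmt"
	"strings"
)

func main() {
	// One of the PROCTITLE values from the bpftool audit records above.
	const proctitle = "627066746F6F6C006D6170006C697374002D2D6A736F6E"

	raw, err := hex.DecodeString(proctitle)
	if err != nil {
		panic(err)
	}
	// argv entries are NUL-separated inside the record.
	args := strings.Split(strings.TrimRight(string(raw), "\x00"), "\x00")
	fmt.Println(strings.Join(args, " ")) // bpftool map list --json
}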
Jan 28 01:19:33.454000 audit: BPF prog-id=188 op=LOAD Jan 28 01:19:33.457000 audit: BPF prog-id=189 op=LOAD Jan 28 01:19:33.457000 audit[4590]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4575 pid=4590 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:33.457000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3462363934396161356265343339613030616263663439326461663862 Jan 28 01:19:33.457000 audit: BPF prog-id=189 op=UNLOAD Jan 28 01:19:33.457000 audit[4590]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4575 pid=4590 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:33.457000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3462363934396161356265343339613030616263663439326461663862 Jan 28 01:19:33.457000 audit: BPF prog-id=190 op=LOAD Jan 28 01:19:33.457000 audit[4590]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4575 pid=4590 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:33.457000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3462363934396161356265343339613030616263663439326461663862 Jan 28 01:19:33.458000 audit: BPF prog-id=191 op=LOAD Jan 28 01:19:33.458000 audit[4590]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=4575 pid=4590 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:33.458000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3462363934396161356265343339613030616263663439326461663862 Jan 28 01:19:33.458000 audit: BPF prog-id=191 op=UNLOAD Jan 28 01:19:33.458000 audit[4590]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4575 pid=4590 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:33.458000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3462363934396161356265343339613030616263663439326461663862 Jan 28 01:19:33.458000 audit: BPF prog-id=190 op=UNLOAD Jan 28 01:19:33.458000 audit[4590]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4575 pid=4590 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:33.458000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3462363934396161356265343339613030616263663439326461663862 Jan 28 01:19:33.459000 audit: BPF prog-id=192 op=LOAD Jan 28 01:19:33.459000 audit[4590]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=4575 pid=4590 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:33.459000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3462363934396161356265343339613030616263663439326461663862 Jan 28 01:19:33.571801 containerd[1849]: time="2026-01-28T01:19:33.571042928Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-799d7c8d49-2vrfp,Uid:9accd949-c4fa-4ce7-b3a8-524deb448a06,Namespace:calico-system,Attempt:0,} returns sandbox id \"4b6949aa5be439a00abcf492daf8b31d02190bf3bedce96f8f437e80dcfd140d\"" Jan 28 01:19:33.576065 containerd[1849]: time="2026-01-28T01:19:33.575939617Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 28 01:19:33.733000 audit: BPF prog-id=193 op=LOAD Jan 28 01:19:33.733000 audit[4587]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff6b988530 a2=94 a3=1 items=0 ppid=4484 pid=4587 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:33.733000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 28 01:19:33.733000 audit: BPF prog-id=193 op=UNLOAD Jan 28 01:19:33.733000 audit[4587]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7fff6b988530 a2=94 a3=1 items=0 ppid=4484 pid=4587 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:33.733000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 28 01:19:33.745000 audit: BPF prog-id=194 op=LOAD Jan 28 01:19:33.745000 audit[4587]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fff6b988520 a2=94 a3=4 items=0 ppid=4484 pid=4587 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:33.745000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 28 01:19:33.745000 audit: BPF prog-id=194 op=UNLOAD Jan 28 01:19:33.745000 audit[4587]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7fff6b988520 a2=0 a3=4 items=0 ppid=4484 pid=4587 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 28 01:19:33.745000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 28 01:19:33.746000 audit: BPF prog-id=195 op=LOAD Jan 28 01:19:33.746000 audit[4587]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fff6b988380 a2=94 a3=5 items=0 ppid=4484 pid=4587 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:33.746000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 28 01:19:33.746000 audit: BPF prog-id=195 op=UNLOAD Jan 28 01:19:33.746000 audit[4587]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7fff6b988380 a2=0 a3=5 items=0 ppid=4484 pid=4587 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:33.746000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 28 01:19:33.746000 audit: BPF prog-id=196 op=LOAD Jan 28 01:19:33.746000 audit[4587]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fff6b9885a0 a2=94 a3=6 items=0 ppid=4484 pid=4587 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:33.746000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 28 01:19:33.746000 audit: BPF prog-id=196 op=UNLOAD Jan 28 01:19:33.746000 audit[4587]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7fff6b9885a0 a2=0 a3=6 items=0 ppid=4484 pid=4587 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:33.746000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 28 01:19:33.746000 audit: BPF prog-id=197 op=LOAD Jan 28 01:19:33.746000 audit[4587]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fff6b987d50 a2=94 a3=88 items=0 ppid=4484 pid=4587 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:33.746000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 28 01:19:33.747000 audit: BPF prog-id=198 op=LOAD Jan 28 01:19:33.747000 audit[4587]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7fff6b987bd0 a2=94 a3=2 items=0 ppid=4484 pid=4587 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:33.747000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 28 01:19:33.747000 audit: BPF prog-id=198 op=UNLOAD Jan 28 01:19:33.747000 audit[4587]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7fff6b987c00 a2=0 a3=7fff6b987d00 items=0 ppid=4484 pid=4587 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:33.747000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 28 01:19:33.747000 audit: BPF prog-id=197 op=UNLOAD Jan 28 
01:19:33.747000 audit[4587]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=37451d10 a2=0 a3=ff0da8a9600d7ab8 items=0 ppid=4484 pid=4587 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:33.747000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 28 01:19:33.785000 audit: BPF prog-id=199 op=LOAD Jan 28 01:19:33.785000 audit[4661]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffda9397320 a2=98 a3=1999999999999999 items=0 ppid=4484 pid=4661 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:33.785000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 28 01:19:33.786000 audit: BPF prog-id=199 op=UNLOAD Jan 28 01:19:33.786000 audit[4661]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffda93972f0 a3=0 items=0 ppid=4484 pid=4661 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:33.786000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 28 01:19:33.786000 audit: BPF prog-id=200 op=LOAD Jan 28 01:19:33.786000 audit[4661]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffda9397200 a2=94 a3=ffff items=0 ppid=4484 pid=4661 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:33.786000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 28 01:19:33.786000 audit: BPF prog-id=200 op=UNLOAD Jan 28 01:19:33.786000 audit[4661]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffda9397200 a2=94 a3=ffff items=0 ppid=4484 pid=4661 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:33.786000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 28 01:19:33.786000 audit: BPF prog-id=201 op=LOAD Jan 28 01:19:33.786000 audit[4661]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffda9397240 a2=94 a3=7ffda9397420 items=0 ppid=4484 pid=4661 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" 
exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:33.786000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 28 01:19:33.786000 audit: BPF prog-id=201 op=UNLOAD Jan 28 01:19:33.786000 audit[4661]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffda9397240 a2=94 a3=7ffda9397420 items=0 ppid=4484 pid=4661 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:33.786000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 28 01:19:33.855586 (udev-worker)[4344]: Network interface NamePolicy= disabled on kernel command line. Jan 28 01:19:33.865498 systemd-networkd[1458]: vxlan.calico: Link UP Jan 28 01:19:33.866154 systemd-networkd[1458]: vxlan.calico: Gained carrier Jan 28 01:19:33.888799 containerd[1849]: time="2026-01-28T01:19:33.887975090Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:19:33.890129 containerd[1849]: time="2026-01-28T01:19:33.890075574Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 28 01:19:33.890275 containerd[1849]: time="2026-01-28T01:19:33.890192168Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 28 01:19:33.890975 kubelet[3202]: E0128 01:19:33.890805 3202 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 28 01:19:33.890975 kubelet[3202]: E0128 01:19:33.890869 3202 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 28 01:19:33.911711 kubelet[3202]: E0128 01:19:33.911204 3202 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:e0b05c5628ca455b907d3d9bf2c70df6,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6pdxw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-799d7c8d49-2vrfp_calico-system(9accd949-c4fa-4ce7-b3a8-524deb448a06): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 28 01:19:33.916250 containerd[1849]: time="2026-01-28T01:19:33.916005541Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 28 01:19:33.920000 audit: BPF prog-id=202 op=LOAD Jan 28 01:19:33.920000 audit[4688]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff12136010 a2=98 a3=0 items=0 ppid=4484 pid=4688 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:33.920000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 28 01:19:33.920000 audit: BPF prog-id=202 op=UNLOAD Jan 28 01:19:33.920000 audit[4688]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7fff12135fe0 a3=0 items=0 ppid=4484 pid=4688 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:33.920000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 28 01:19:33.953000 audit: BPF prog-id=203 op=LOAD Jan 28 01:19:33.953000 audit[4688]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff12135e20 a2=94 a3=54428f items=0 ppid=4484 pid=4688 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" 
exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:33.953000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 28 01:19:33.954000 audit: BPF prog-id=203 op=UNLOAD Jan 28 01:19:33.954000 audit[4688]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7fff12135e20 a2=94 a3=54428f items=0 ppid=4484 pid=4688 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:33.954000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 28 01:19:33.954000 audit: BPF prog-id=204 op=LOAD Jan 28 01:19:33.954000 audit[4688]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff12135e50 a2=94 a3=2 items=0 ppid=4484 pid=4688 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:33.954000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 28 01:19:33.954000 audit: BPF prog-id=204 op=UNLOAD Jan 28 01:19:33.954000 audit[4688]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7fff12135e50 a2=0 a3=2 items=0 ppid=4484 pid=4688 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:33.954000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 28 01:19:33.954000 audit: BPF prog-id=205 op=LOAD Jan 28 01:19:33.954000 audit[4688]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fff12135c00 a2=94 a3=4 items=0 ppid=4484 pid=4688 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:33.954000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 28 01:19:33.954000 audit: BPF prog-id=205 op=UNLOAD Jan 28 01:19:33.954000 audit[4688]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7fff12135c00 a2=94 a3=4 items=0 ppid=4484 pid=4688 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:33.954000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 28 
01:19:33.954000 audit: BPF prog-id=206 op=LOAD Jan 28 01:19:33.954000 audit[4688]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fff12135d00 a2=94 a3=7fff12135e80 items=0 ppid=4484 pid=4688 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:33.954000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 28 01:19:33.954000 audit: BPF prog-id=206 op=UNLOAD Jan 28 01:19:33.954000 audit[4688]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7fff12135d00 a2=0 a3=7fff12135e80 items=0 ppid=4484 pid=4688 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:33.954000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 28 01:19:33.955000 audit: BPF prog-id=207 op=LOAD Jan 28 01:19:33.955000 audit[4688]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fff12135430 a2=94 a3=2 items=0 ppid=4484 pid=4688 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:33.955000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 28 01:19:33.955000 audit: BPF prog-id=207 op=UNLOAD Jan 28 01:19:33.955000 audit[4688]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7fff12135430 a2=0 a3=2 items=0 ppid=4484 pid=4688 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:33.955000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 28 01:19:33.955000 audit: BPF prog-id=208 op=LOAD Jan 28 01:19:33.955000 audit[4688]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fff12135530 a2=94 a3=30 items=0 ppid=4484 pid=4688 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:33.955000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 28 01:19:33.962000 audit: BPF prog-id=209 op=LOAD Jan 28 01:19:33.962000 audit[4691]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc0a4e4df0 a2=98 a3=0 items=0 ppid=4484 pid=4691 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" 
exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:33.962000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 28 01:19:33.962000 audit: BPF prog-id=209 op=UNLOAD Jan 28 01:19:33.962000 audit[4691]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffc0a4e4dc0 a3=0 items=0 ppid=4484 pid=4691 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:33.962000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 28 01:19:33.962000 audit: BPF prog-id=210 op=LOAD Jan 28 01:19:33.962000 audit[4691]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffc0a4e4be0 a2=94 a3=54428f items=0 ppid=4484 pid=4691 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:33.962000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 28 01:19:33.962000 audit: BPF prog-id=210 op=UNLOAD Jan 28 01:19:33.962000 audit[4691]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffc0a4e4be0 a2=94 a3=54428f items=0 ppid=4484 pid=4691 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:33.962000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 28 01:19:33.962000 audit: BPF prog-id=211 op=LOAD Jan 28 01:19:33.962000 audit[4691]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffc0a4e4c10 a2=94 a3=2 items=0 ppid=4484 pid=4691 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:33.962000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 28 01:19:33.962000 audit: BPF prog-id=211 op=UNLOAD Jan 28 01:19:33.962000 audit[4691]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffc0a4e4c10 a2=0 a3=2 items=0 ppid=4484 pid=4691 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:33.962000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 28 01:19:34.127000 audit: BPF prog-id=212 op=LOAD Jan 28 01:19:34.127000 audit[4691]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffc0a4e4ad0 a2=94 a3=1 items=0 
ppid=4484 pid=4691 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:34.127000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 28 01:19:34.127000 audit: BPF prog-id=212 op=UNLOAD Jan 28 01:19:34.127000 audit[4691]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffc0a4e4ad0 a2=94 a3=1 items=0 ppid=4484 pid=4691 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:34.127000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 28 01:19:34.141000 audit: BPF prog-id=213 op=LOAD Jan 28 01:19:34.141000 audit[4691]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffc0a4e4ac0 a2=94 a3=4 items=0 ppid=4484 pid=4691 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:34.141000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 28 01:19:34.141000 audit: BPF prog-id=213 op=UNLOAD Jan 28 01:19:34.141000 audit[4691]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffc0a4e4ac0 a2=0 a3=4 items=0 ppid=4484 pid=4691 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:34.141000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 28 01:19:34.142000 audit: BPF prog-id=214 op=LOAD Jan 28 01:19:34.142000 audit[4691]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffc0a4e4920 a2=94 a3=5 items=0 ppid=4484 pid=4691 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:34.142000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 28 01:19:34.142000 audit: BPF prog-id=214 op=UNLOAD Jan 28 01:19:34.142000 audit[4691]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffc0a4e4920 a2=0 a3=5 items=0 ppid=4484 pid=4691 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:34.142000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 28 01:19:34.142000 audit: BPF prog-id=215 op=LOAD Jan 28 
01:19:34.142000 audit[4691]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffc0a4e4b40 a2=94 a3=6 items=0 ppid=4484 pid=4691 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:34.142000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 28 01:19:34.142000 audit: BPF prog-id=215 op=UNLOAD Jan 28 01:19:34.142000 audit[4691]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffc0a4e4b40 a2=0 a3=6 items=0 ppid=4484 pid=4691 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:34.142000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 28 01:19:34.142000 audit: BPF prog-id=216 op=LOAD Jan 28 01:19:34.142000 audit[4691]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffc0a4e42f0 a2=94 a3=88 items=0 ppid=4484 pid=4691 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:34.142000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 28 01:19:34.143000 audit: BPF prog-id=217 op=LOAD Jan 28 01:19:34.143000 audit[4691]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7ffc0a4e4170 a2=94 a3=2 items=0 ppid=4484 pid=4691 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:34.143000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 28 01:19:34.143000 audit: BPF prog-id=217 op=UNLOAD Jan 28 01:19:34.143000 audit[4691]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7ffc0a4e41a0 a2=0 a3=7ffc0a4e42a0 items=0 ppid=4484 pid=4691 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:34.143000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 28 01:19:34.143000 audit: BPF prog-id=216 op=UNLOAD Jan 28 01:19:34.143000 audit[4691]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=29b3bd10 a2=0 a3=c340b8c93128e194 items=0 ppid=4484 pid=4691 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:34.143000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 28 01:19:34.150481 systemd-networkd[1458]: cali3055b20ed95: Gained IPv6LL Jan 28 01:19:34.153000 audit: BPF prog-id=208 op=UNLOAD Jan 28 01:19:34.153000 audit[4484]: SYSCALL arch=c000003e syscall=263 success=yes exit=0 a0=ffffffffffffff9c a1=c0007e3cc0 a2=0 a3=0 items=0 ppid=4466 pid=4484 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="calico-node" exe="/usr/bin/calico-node" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:34.153000 audit: PROCTITLE proctitle=63616C69636F2D6E6F6465002D66656C6978 Jan 28 01:19:34.204160 containerd[1849]: time="2026-01-28T01:19:34.203944468Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74bc45499-4hxx4,Uid:325cd625-25dd-4d22-8523-67e469e6d0e9,Namespace:calico-apiserver,Attempt:0,}" Jan 28 01:19:34.214969 containerd[1849]: time="2026-01-28T01:19:34.214925127Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:19:34.220177 containerd[1849]: time="2026-01-28T01:19:34.219873416Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 28 01:19:34.220774 containerd[1849]: time="2026-01-28T01:19:34.220442919Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 28 01:19:34.221080 kubelet[3202]: E0128 01:19:34.221037 3202 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 28 01:19:34.221905 kubelet[3202]: E0128 01:19:34.221097 3202 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 28 01:19:34.221965 kubelet[3202]: E0128 01:19:34.221256 3202 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6pdxw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-799d7c8d49-2vrfp_calico-system(9accd949-c4fa-4ce7-b3a8-524deb448a06): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 28 01:19:34.225144 kubelet[3202]: E0128 01:19:34.224878 3202 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-799d7c8d49-2vrfp" podUID="9accd949-c4fa-4ce7-b3a8-524deb448a06" Jan 28 01:19:34.274000 audit[4721]: NETFILTER_CFG table=nat:121 family=2 entries=15 op=nft_register_chain pid=4721 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 28 01:19:34.274000 audit[4721]: SYSCALL arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7ffd601fe700 a2=0 a3=7ffd601fe6ec items=0 ppid=4484 pid=4721 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:34.274000 audit: PROCTITLE 
proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 28 01:19:34.299000 audit[4723]: NETFILTER_CFG table=raw:122 family=2 entries=21 op=nft_register_chain pid=4723 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 28 01:19:34.299000 audit[4723]: SYSCALL arch=c000003e syscall=46 success=yes exit=8452 a0=3 a1=7ffccc5fd910 a2=0 a3=7ffccc5fd8fc items=0 ppid=4484 pid=4723 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:34.301000 audit[4725]: NETFILTER_CFG table=mangle:123 family=2 entries=16 op=nft_register_chain pid=4725 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 28 01:19:34.301000 audit[4725]: SYSCALL arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7ffed11c1c10 a2=0 a3=7ffed11c1bfc items=0 ppid=4484 pid=4725 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:34.301000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 28 01:19:34.299000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 28 01:19:34.304000 audit[4722]: NETFILTER_CFG table=filter:124 family=2 entries=94 op=nft_register_chain pid=4722 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 28 01:19:34.304000 audit[4722]: SYSCALL arch=c000003e syscall=46 success=yes exit=53116 a0=3 a1=7fffaec72a30 a2=0 a3=55c7a245e000 items=0 ppid=4484 pid=4722 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:34.304000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 28 01:19:34.408212 systemd-networkd[1458]: calif960986ffda: Link UP Jan 28 01:19:34.411379 systemd-networkd[1458]: calif960986ffda: Gained carrier Jan 28 01:19:34.425543 containerd[1849]: 2026-01-28 01:19:34.298 [INFO][4708] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--31--26-k8s-calico--apiserver--74bc45499--4hxx4-eth0 calico-apiserver-74bc45499- calico-apiserver 325cd625-25dd-4d22-8523-67e469e6d0e9 846 0 2026-01-28 01:19:03 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:74bc45499 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-31-26 calico-apiserver-74bc45499-4hxx4 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calif960986ffda [] [] }} ContainerID="5dc52e541f94f49e22520006a65e8f274e1cf27f00ff86abd788fb935887c9b8" Namespace="calico-apiserver" Pod="calico-apiserver-74bc45499-4hxx4" WorkloadEndpoint="ip--172--31--31--26-k8s-calico--apiserver--74bc45499--4hxx4-" Jan 28 01:19:34.425543 
containerd[1849]: 2026-01-28 01:19:34.298 [INFO][4708] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5dc52e541f94f49e22520006a65e8f274e1cf27f00ff86abd788fb935887c9b8" Namespace="calico-apiserver" Pod="calico-apiserver-74bc45499-4hxx4" WorkloadEndpoint="ip--172--31--31--26-k8s-calico--apiserver--74bc45499--4hxx4-eth0" Jan 28 01:19:34.425543 containerd[1849]: 2026-01-28 01:19:34.352 [INFO][4736] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5dc52e541f94f49e22520006a65e8f274e1cf27f00ff86abd788fb935887c9b8" HandleID="k8s-pod-network.5dc52e541f94f49e22520006a65e8f274e1cf27f00ff86abd788fb935887c9b8" Workload="ip--172--31--31--26-k8s-calico--apiserver--74bc45499--4hxx4-eth0" Jan 28 01:19:34.426028 containerd[1849]: 2026-01-28 01:19:34.352 [INFO][4736] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="5dc52e541f94f49e22520006a65e8f274e1cf27f00ff86abd788fb935887c9b8" HandleID="k8s-pod-network.5dc52e541f94f49e22520006a65e8f274e1cf27f00ff86abd788fb935887c9b8" Workload="ip--172--31--31--26-k8s-calico--apiserver--74bc45499--4hxx4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ad3a0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-31-26", "pod":"calico-apiserver-74bc45499-4hxx4", "timestamp":"2026-01-28 01:19:34.352650566 +0000 UTC"}, Hostname:"ip-172-31-31-26", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 28 01:19:34.426028 containerd[1849]: 2026-01-28 01:19:34.352 [INFO][4736] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:19:34.426028 containerd[1849]: 2026-01-28 01:19:34.352 [INFO][4736] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 28 01:19:34.426028 containerd[1849]: 2026-01-28 01:19:34.353 [INFO][4736] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-31-26' Jan 28 01:19:34.426028 containerd[1849]: 2026-01-28 01:19:34.361 [INFO][4736] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5dc52e541f94f49e22520006a65e8f274e1cf27f00ff86abd788fb935887c9b8" host="ip-172-31-31-26" Jan 28 01:19:34.426028 containerd[1849]: 2026-01-28 01:19:34.367 [INFO][4736] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-31-26" Jan 28 01:19:34.426028 containerd[1849]: 2026-01-28 01:19:34.376 [INFO][4736] ipam/ipam.go 511: Trying affinity for 192.168.24.0/26 host="ip-172-31-31-26" Jan 28 01:19:34.426028 containerd[1849]: 2026-01-28 01:19:34.379 [INFO][4736] ipam/ipam.go 158: Attempting to load block cidr=192.168.24.0/26 host="ip-172-31-31-26" Jan 28 01:19:34.426028 containerd[1849]: 2026-01-28 01:19:34.382 [INFO][4736] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.24.0/26 host="ip-172-31-31-26" Jan 28 01:19:34.426261 containerd[1849]: 2026-01-28 01:19:34.382 [INFO][4736] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.24.0/26 handle="k8s-pod-network.5dc52e541f94f49e22520006a65e8f274e1cf27f00ff86abd788fb935887c9b8" host="ip-172-31-31-26" Jan 28 01:19:34.426261 containerd[1849]: 2026-01-28 01:19:34.384 [INFO][4736] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.5dc52e541f94f49e22520006a65e8f274e1cf27f00ff86abd788fb935887c9b8 Jan 28 01:19:34.426261 containerd[1849]: 2026-01-28 01:19:34.389 [INFO][4736] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.24.0/26 handle="k8s-pod-network.5dc52e541f94f49e22520006a65e8f274e1cf27f00ff86abd788fb935887c9b8" host="ip-172-31-31-26" Jan 28 01:19:34.426261 containerd[1849]: 2026-01-28 01:19:34.397 [INFO][4736] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.24.2/26] block=192.168.24.0/26 handle="k8s-pod-network.5dc52e541f94f49e22520006a65e8f274e1cf27f00ff86abd788fb935887c9b8" host="ip-172-31-31-26" Jan 28 01:19:34.426261 containerd[1849]: 2026-01-28 01:19:34.397 [INFO][4736] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.24.2/26] handle="k8s-pod-network.5dc52e541f94f49e22520006a65e8f274e1cf27f00ff86abd788fb935887c9b8" host="ip-172-31-31-26" Jan 28 01:19:34.426261 containerd[1849]: 2026-01-28 01:19:34.397 [INFO][4736] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
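[Annotation] The ipam/ipam.go records above trace the Calico IPAM allocation path for this pod: acquire the host-wide lock, confirm the host's affinity to block 192.168.24.0/26, claim the next free address (192.168.24.2), and write the block back. A toy sketch of that block-claim step, in Python, illustrative only and not Calico's actual implementation:

    import ipaddress

    # Toy model of claiming one address from an affine IPAM block.
    # Assumed inputs: the block CIDR from the log records above and a
    # hypothetical set of addresses already in use on this host.
    block = ipaddress.ip_network("192.168.24.0/26")
    already_assigned = {ipaddress.ip_address("192.168.24.1")}  # hypothetical prior allocation

    def claim_one(block, assigned):
        # Walk host addresses in order and take the first one not yet assigned.
        for addr in block.hosts():
            if addr not in assigned:
                assigned.add(addr)
                return addr
        raise RuntimeError("block exhausted")

    print(claim_one(block, already_assigned))  # 192.168.24.2 under these assumptions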
Jan 28 01:19:34.426261 containerd[1849]: 2026-01-28 01:19:34.397 [INFO][4736] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.24.2/26] IPv6=[] ContainerID="5dc52e541f94f49e22520006a65e8f274e1cf27f00ff86abd788fb935887c9b8" HandleID="k8s-pod-network.5dc52e541f94f49e22520006a65e8f274e1cf27f00ff86abd788fb935887c9b8" Workload="ip--172--31--31--26-k8s-calico--apiserver--74bc45499--4hxx4-eth0" Jan 28 01:19:34.426451 containerd[1849]: 2026-01-28 01:19:34.401 [INFO][4708] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5dc52e541f94f49e22520006a65e8f274e1cf27f00ff86abd788fb935887c9b8" Namespace="calico-apiserver" Pod="calico-apiserver-74bc45499-4hxx4" WorkloadEndpoint="ip--172--31--31--26-k8s-calico--apiserver--74bc45499--4hxx4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--26-k8s-calico--apiserver--74bc45499--4hxx4-eth0", GenerateName:"calico-apiserver-74bc45499-", Namespace:"calico-apiserver", SelfLink:"", UID:"325cd625-25dd-4d22-8523-67e469e6d0e9", ResourceVersion:"846", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 19, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"74bc45499", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-26", ContainerID:"", Pod:"calico-apiserver-74bc45499-4hxx4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.24.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif960986ffda", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:19:34.426509 containerd[1849]: 2026-01-28 01:19:34.401 [INFO][4708] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.24.2/32] ContainerID="5dc52e541f94f49e22520006a65e8f274e1cf27f00ff86abd788fb935887c9b8" Namespace="calico-apiserver" Pod="calico-apiserver-74bc45499-4hxx4" WorkloadEndpoint="ip--172--31--31--26-k8s-calico--apiserver--74bc45499--4hxx4-eth0" Jan 28 01:19:34.426509 containerd[1849]: 2026-01-28 01:19:34.401 [INFO][4708] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif960986ffda ContainerID="5dc52e541f94f49e22520006a65e8f274e1cf27f00ff86abd788fb935887c9b8" Namespace="calico-apiserver" Pod="calico-apiserver-74bc45499-4hxx4" WorkloadEndpoint="ip--172--31--31--26-k8s-calico--apiserver--74bc45499--4hxx4-eth0" Jan 28 01:19:34.426509 containerd[1849]: 2026-01-28 01:19:34.411 [INFO][4708] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5dc52e541f94f49e22520006a65e8f274e1cf27f00ff86abd788fb935887c9b8" Namespace="calico-apiserver" Pod="calico-apiserver-74bc45499-4hxx4" WorkloadEndpoint="ip--172--31--31--26-k8s-calico--apiserver--74bc45499--4hxx4-eth0" Jan 28 01:19:34.426581 containerd[1849]: 2026-01-28 01:19:34.411 [INFO][4708] cni-plugin/k8s.go 446: Added Mac, interface name, and active container 
ID to endpoint ContainerID="5dc52e541f94f49e22520006a65e8f274e1cf27f00ff86abd788fb935887c9b8" Namespace="calico-apiserver" Pod="calico-apiserver-74bc45499-4hxx4" WorkloadEndpoint="ip--172--31--31--26-k8s-calico--apiserver--74bc45499--4hxx4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--26-k8s-calico--apiserver--74bc45499--4hxx4-eth0", GenerateName:"calico-apiserver-74bc45499-", Namespace:"calico-apiserver", SelfLink:"", UID:"325cd625-25dd-4d22-8523-67e469e6d0e9", ResourceVersion:"846", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 19, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"74bc45499", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-26", ContainerID:"5dc52e541f94f49e22520006a65e8f274e1cf27f00ff86abd788fb935887c9b8", Pod:"calico-apiserver-74bc45499-4hxx4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.24.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif960986ffda", MAC:"56:a1:1e:da:b1:1d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:19:34.426635 containerd[1849]: 2026-01-28 01:19:34.422 [INFO][4708] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5dc52e541f94f49e22520006a65e8f274e1cf27f00ff86abd788fb935887c9b8" Namespace="calico-apiserver" Pod="calico-apiserver-74bc45499-4hxx4" WorkloadEndpoint="ip--172--31--31--26-k8s-calico--apiserver--74bc45499--4hxx4-eth0" Jan 28 01:19:34.441000 audit[4754]: NETFILTER_CFG table=filter:125 family=2 entries=50 op=nft_register_chain pid=4754 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 28 01:19:34.441000 audit[4754]: SYSCALL arch=c000003e syscall=46 success=yes exit=28208 a0=3 a1=7fffbed83040 a2=0 a3=7fffbed8302c items=0 ppid=4484 pid=4754 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:34.441000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 28 01:19:34.471464 containerd[1849]: time="2026-01-28T01:19:34.471418605Z" level=info msg="connecting to shim 5dc52e541f94f49e22520006a65e8f274e1cf27f00ff86abd788fb935887c9b8" address="unix:///run/containerd/s/2980317d5dee47eb13dcc35905917249738aac43c5645c8b3975e4fb6ca8a00a" namespace=k8s.io protocol=ttrpc version=3 Jan 28 01:19:34.506068 systemd[1]: Started cri-containerd-5dc52e541f94f49e22520006a65e8f274e1cf27f00ff86abd788fb935887c9b8.scope - libcontainer container 5dc52e541f94f49e22520006a65e8f274e1cf27f00ff86abd788fb935887c9b8. 
Jan 28 01:19:34.521000 audit: BPF prog-id=218 op=LOAD Jan 28 01:19:34.522000 audit: BPF prog-id=219 op=LOAD Jan 28 01:19:34.522000 audit[4775]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4764 pid=4775 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:34.522000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3564633532653534316639346634396532323532303030366136356538 Jan 28 01:19:34.522000 audit: BPF prog-id=219 op=UNLOAD Jan 28 01:19:34.522000 audit[4775]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4764 pid=4775 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:34.522000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3564633532653534316639346634396532323532303030366136356538 Jan 28 01:19:34.523000 audit: BPF prog-id=220 op=LOAD Jan 28 01:19:34.523000 audit[4775]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4764 pid=4775 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:34.523000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3564633532653534316639346634396532323532303030366136356538 Jan 28 01:19:34.523000 audit: BPF prog-id=221 op=LOAD Jan 28 01:19:34.523000 audit[4775]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=4764 pid=4775 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:34.523000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3564633532653534316639346634396532323532303030366136356538 Jan 28 01:19:34.523000 audit: BPF prog-id=221 op=UNLOAD Jan 28 01:19:34.523000 audit[4775]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4764 pid=4775 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:34.523000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3564633532653534316639346634396532323532303030366136356538 Jan 28 01:19:34.523000 audit: BPF prog-id=220 op=UNLOAD Jan 28 01:19:34.523000 audit[4775]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4764 pid=4775 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:34.523000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3564633532653534316639346634396532323532303030366136356538 Jan 28 01:19:34.523000 audit: BPF prog-id=222 op=LOAD Jan 28 01:19:34.523000 audit[4775]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=4764 pid=4775 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:34.523000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3564633532653534316639346634396532323532303030366136356538 Jan 28 01:19:34.572247 containerd[1849]: time="2026-01-28T01:19:34.572208442Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74bc45499-4hxx4,Uid:325cd625-25dd-4d22-8523-67e469e6d0e9,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"5dc52e541f94f49e22520006a65e8f274e1cf27f00ff86abd788fb935887c9b8\"" Jan 28 01:19:34.574165 containerd[1849]: time="2026-01-28T01:19:34.574070199Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 28 01:19:34.650679 kubelet[3202]: E0128 01:19:34.650640 3202 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-799d7c8d49-2vrfp" podUID="9accd949-c4fa-4ce7-b3a8-524deb448a06" Jan 28 01:19:34.731000 audit[4801]: NETFILTER_CFG table=filter:126 family=2 entries=20 op=nft_register_rule pid=4801 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:19:34.731000 audit[4801]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7fffd39490a0 a2=0 a3=7fffd394908c items=0 ppid=3451 pid=4801 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:34.731000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:19:34.738000 audit[4801]: NETFILTER_CFG table=nat:127 family=2 entries=14 op=nft_register_rule pid=4801 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" 
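[Annotation] The audit PROCTITLE fields throughout this log are hex-encoded command lines with NUL-separated arguments. A minimal decoding sketch in Python, illustrative only, applied here to the iptables-restore record above:

    # Decode an audit PROCTITLE hex string back into a readable command line.
    # The kernel audit subsystem records argv as hex with NUL bytes between arguments.
    def decode_proctitle(hex_value: str) -> str:
        raw = bytes.fromhex(hex_value)
        return " ".join(
            part.decode("utf-8", errors="replace")
            for part in raw.split(b"\x00")
            if part
        )

    # The value from the NETFILTER_CFG record above decodes to:
    # "iptables-restore -w 5 -W 100000 --noflush --counters"
    print(decode_proctitle(
        "69707461626C65732D726573746F7265002D770035002D5700313030303030"
        "002D2D6E6F666C757368002D2D636F756E74657273"
    ))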
Jan 28 01:19:34.738000 audit[4801]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7fffd39490a0 a2=0 a3=0 items=0 ppid=3451 pid=4801 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:34.738000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:19:34.957395 containerd[1849]: time="2026-01-28T01:19:34.957350213Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:19:34.959664 containerd[1849]: time="2026-01-28T01:19:34.959616711Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 28 01:19:34.959976 containerd[1849]: time="2026-01-28T01:19:34.959723084Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 28 01:19:34.960021 kubelet[3202]: E0128 01:19:34.959916 3202 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:19:34.960021 kubelet[3202]: E0128 01:19:34.959958 3202 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:19:34.960339 kubelet[3202]: E0128 01:19:34.960083 3202 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9vv22,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-74bc45499-4hxx4_calico-apiserver(325cd625-25dd-4d22-8523-67e469e6d0e9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 28 01:19:34.961478 kubelet[3202]: E0128 01:19:34.961420 3202 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74bc45499-4hxx4" podUID="325cd625-25dd-4d22-8523-67e469e6d0e9" Jan 28 01:19:35.203155 containerd[1849]: time="2026-01-28T01:19:35.203012116Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74bc45499-2znnx,Uid:d6acbfd4-88d6-4133-9434-3bfec2c327d4,Namespace:calico-apiserver,Attempt:0,}" Jan 28 01:19:35.203155 containerd[1849]: time="2026-01-28T01:19:35.203087503Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-dbbbff994-sqgh4,Uid:c988f11c-6f16-4100-9308-ea1983457126,Namespace:calico-system,Attempt:0,}" Jan 28 01:19:35.444297 systemd-networkd[1458]: cali8ece591c57f: Link UP Jan 28 01:19:35.444884 systemd-networkd[1458]: cali8ece591c57f: Gained carrier Jan 28 01:19:35.474840 containerd[1849]: 2026-01-28 01:19:35.300 [INFO][4805] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--31--26-k8s-calico--kube--controllers--dbbbff994--sqgh4-eth0 calico-kube-controllers-dbbbff994- calico-system c988f11c-6f16-4100-9308-ea1983457126 853 0 2026-01-28 01:19:10 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:dbbbff994 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-31-26 calico-kube-controllers-dbbbff994-sqgh4 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali8ece591c57f [] [] }} ContainerID="ab8d4611f3aa9e2f99e153c5e77470bf432c2eb49865b28d050848648d72e64e" Namespace="calico-system" Pod="calico-kube-controllers-dbbbff994-sqgh4" WorkloadEndpoint="ip--172--31--31--26-k8s-calico--kube--controllers--dbbbff994--sqgh4-" Jan 28 01:19:35.474840 containerd[1849]: 2026-01-28 01:19:35.300 [INFO][4805] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="ab8d4611f3aa9e2f99e153c5e77470bf432c2eb49865b28d050848648d72e64e" Namespace="calico-system" Pod="calico-kube-controllers-dbbbff994-sqgh4" WorkloadEndpoint="ip--172--31--31--26-k8s-calico--kube--controllers--dbbbff994--sqgh4-eth0" Jan 28 01:19:35.474840 containerd[1849]: 2026-01-28 01:19:35.376 [INFO][4828] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ab8d4611f3aa9e2f99e153c5e77470bf432c2eb49865b28d050848648d72e64e" HandleID="k8s-pod-network.ab8d4611f3aa9e2f99e153c5e77470bf432c2eb49865b28d050848648d72e64e" Workload="ip--172--31--31--26-k8s-calico--kube--controllers--dbbbff994--sqgh4-eth0" Jan 28 01:19:35.476271 containerd[1849]: 2026-01-28 01:19:35.377 [INFO][4828] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="ab8d4611f3aa9e2f99e153c5e77470bf432c2eb49865b28d050848648d72e64e" HandleID="k8s-pod-network.ab8d4611f3aa9e2f99e153c5e77470bf432c2eb49865b28d050848648d72e64e" Workload="ip--172--31--31--26-k8s-calico--kube--controllers--dbbbff994--sqgh4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c3670), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-31-26", "pod":"calico-kube-controllers-dbbbff994-sqgh4", "timestamp":"2026-01-28 01:19:35.376935115 +0000 UTC"}, Hostname:"ip-172-31-31-26", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 28 01:19:35.476271 containerd[1849]: 2026-01-28 01:19:35.377 [INFO][4828] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:19:35.476271 containerd[1849]: 2026-01-28 01:19:35.377 [INFO][4828] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 28 01:19:35.476271 containerd[1849]: 2026-01-28 01:19:35.377 [INFO][4828] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-31-26' Jan 28 01:19:35.476271 containerd[1849]: 2026-01-28 01:19:35.400 [INFO][4828] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ab8d4611f3aa9e2f99e153c5e77470bf432c2eb49865b28d050848648d72e64e" host="ip-172-31-31-26" Jan 28 01:19:35.476271 containerd[1849]: 2026-01-28 01:19:35.407 [INFO][4828] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-31-26" Jan 28 01:19:35.476271 containerd[1849]: 2026-01-28 01:19:35.415 [INFO][4828] ipam/ipam.go 511: Trying affinity for 192.168.24.0/26 host="ip-172-31-31-26" Jan 28 01:19:35.476271 containerd[1849]: 2026-01-28 01:19:35.418 [INFO][4828] ipam/ipam.go 158: Attempting to load block cidr=192.168.24.0/26 host="ip-172-31-31-26" Jan 28 01:19:35.476271 containerd[1849]: 2026-01-28 01:19:35.420 [INFO][4828] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.24.0/26 host="ip-172-31-31-26" Jan 28 01:19:35.477227 containerd[1849]: 2026-01-28 01:19:35.420 [INFO][4828] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.24.0/26 handle="k8s-pod-network.ab8d4611f3aa9e2f99e153c5e77470bf432c2eb49865b28d050848648d72e64e" host="ip-172-31-31-26" Jan 28 01:19:35.477227 containerd[1849]: 2026-01-28 01:19:35.422 [INFO][4828] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.ab8d4611f3aa9e2f99e153c5e77470bf432c2eb49865b28d050848648d72e64e Jan 28 01:19:35.477227 containerd[1849]: 2026-01-28 01:19:35.430 [INFO][4828] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.24.0/26 handle="k8s-pod-network.ab8d4611f3aa9e2f99e153c5e77470bf432c2eb49865b28d050848648d72e64e" host="ip-172-31-31-26" Jan 28 01:19:35.477227 containerd[1849]: 2026-01-28 01:19:35.436 [INFO][4828] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.24.3/26] block=192.168.24.0/26 handle="k8s-pod-network.ab8d4611f3aa9e2f99e153c5e77470bf432c2eb49865b28d050848648d72e64e" host="ip-172-31-31-26" Jan 28 01:19:35.477227 containerd[1849]: 2026-01-28 01:19:35.436 [INFO][4828] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.24.3/26] handle="k8s-pod-network.ab8d4611f3aa9e2f99e153c5e77470bf432c2eb49865b28d050848648d72e64e" host="ip-172-31-31-26" Jan 28 01:19:35.477227 containerd[1849]: 2026-01-28 01:19:35.436 [INFO][4828] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 28 01:19:35.477227 containerd[1849]: 2026-01-28 01:19:35.437 [INFO][4828] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.24.3/26] IPv6=[] ContainerID="ab8d4611f3aa9e2f99e153c5e77470bf432c2eb49865b28d050848648d72e64e" HandleID="k8s-pod-network.ab8d4611f3aa9e2f99e153c5e77470bf432c2eb49865b28d050848648d72e64e" Workload="ip--172--31--31--26-k8s-calico--kube--controllers--dbbbff994--sqgh4-eth0" Jan 28 01:19:35.478054 containerd[1849]: 2026-01-28 01:19:35.441 [INFO][4805] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ab8d4611f3aa9e2f99e153c5e77470bf432c2eb49865b28d050848648d72e64e" Namespace="calico-system" Pod="calico-kube-controllers-dbbbff994-sqgh4" WorkloadEndpoint="ip--172--31--31--26-k8s-calico--kube--controllers--dbbbff994--sqgh4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--26-k8s-calico--kube--controllers--dbbbff994--sqgh4-eth0", GenerateName:"calico-kube-controllers-dbbbff994-", Namespace:"calico-system", SelfLink:"", UID:"c988f11c-6f16-4100-9308-ea1983457126", ResourceVersion:"853", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 19, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"dbbbff994", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-26", ContainerID:"", Pod:"calico-kube-controllers-dbbbff994-sqgh4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.24.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali8ece591c57f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:19:35.478254 containerd[1849]: 2026-01-28 01:19:35.441 [INFO][4805] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.24.3/32] ContainerID="ab8d4611f3aa9e2f99e153c5e77470bf432c2eb49865b28d050848648d72e64e" Namespace="calico-system" Pod="calico-kube-controllers-dbbbff994-sqgh4" WorkloadEndpoint="ip--172--31--31--26-k8s-calico--kube--controllers--dbbbff994--sqgh4-eth0" Jan 28 01:19:35.478254 containerd[1849]: 2026-01-28 01:19:35.441 [INFO][4805] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8ece591c57f ContainerID="ab8d4611f3aa9e2f99e153c5e77470bf432c2eb49865b28d050848648d72e64e" Namespace="calico-system" Pod="calico-kube-controllers-dbbbff994-sqgh4" WorkloadEndpoint="ip--172--31--31--26-k8s-calico--kube--controllers--dbbbff994--sqgh4-eth0" Jan 28 01:19:35.478254 containerd[1849]: 2026-01-28 01:19:35.445 [INFO][4805] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ab8d4611f3aa9e2f99e153c5e77470bf432c2eb49865b28d050848648d72e64e" Namespace="calico-system" Pod="calico-kube-controllers-dbbbff994-sqgh4" WorkloadEndpoint="ip--172--31--31--26-k8s-calico--kube--controllers--dbbbff994--sqgh4-eth0" Jan 28 01:19:35.478503 containerd[1849]: 2026-01-28 01:19:35.446 
[INFO][4805] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ab8d4611f3aa9e2f99e153c5e77470bf432c2eb49865b28d050848648d72e64e" Namespace="calico-system" Pod="calico-kube-controllers-dbbbff994-sqgh4" WorkloadEndpoint="ip--172--31--31--26-k8s-calico--kube--controllers--dbbbff994--sqgh4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--26-k8s-calico--kube--controllers--dbbbff994--sqgh4-eth0", GenerateName:"calico-kube-controllers-dbbbff994-", Namespace:"calico-system", SelfLink:"", UID:"c988f11c-6f16-4100-9308-ea1983457126", ResourceVersion:"853", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 19, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"dbbbff994", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-26", ContainerID:"ab8d4611f3aa9e2f99e153c5e77470bf432c2eb49865b28d050848648d72e64e", Pod:"calico-kube-controllers-dbbbff994-sqgh4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.24.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali8ece591c57f", MAC:"12:d8:3e:e1:49:a5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:19:35.478627 containerd[1849]: 2026-01-28 01:19:35.465 [INFO][4805] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ab8d4611f3aa9e2f99e153c5e77470bf432c2eb49865b28d050848648d72e64e" Namespace="calico-system" Pod="calico-kube-controllers-dbbbff994-sqgh4" WorkloadEndpoint="ip--172--31--31--26-k8s-calico--kube--controllers--dbbbff994--sqgh4-eth0" Jan 28 01:19:35.491892 systemd-networkd[1458]: vxlan.calico: Gained IPv6LL Jan 28 01:19:35.512685 containerd[1849]: time="2026-01-28T01:19:35.512622421Z" level=info msg="connecting to shim ab8d4611f3aa9e2f99e153c5e77470bf432c2eb49865b28d050848648d72e64e" address="unix:///run/containerd/s/6f11299383b619f74bf138bc5b827c8efb5bded29873921ea7259c2575f51310" namespace=k8s.io protocol=ttrpc version=3 Jan 28 01:19:35.556034 systemd[1]: Started cri-containerd-ab8d4611f3aa9e2f99e153c5e77470bf432c2eb49865b28d050848648d72e64e.scope - libcontainer container ab8d4611f3aa9e2f99e153c5e77470bf432c2eb49865b28d050848648d72e64e. 
Jan 28 01:19:35.559816 systemd-networkd[1458]: cali97e1ca024cd: Link UP Jan 28 01:19:35.562046 systemd-networkd[1458]: cali97e1ca024cd: Gained carrier Jan 28 01:19:35.591364 containerd[1849]: 2026-01-28 01:19:35.340 [INFO][4804] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--31--26-k8s-calico--apiserver--74bc45499--2znnx-eth0 calico-apiserver-74bc45499- calico-apiserver d6acbfd4-88d6-4133-9434-3bfec2c327d4 851 0 2026-01-28 01:19:03 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:74bc45499 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-31-26 calico-apiserver-74bc45499-2znnx eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali97e1ca024cd [] [] }} ContainerID="2019f4bd1bd7c4bd1b044b4b7995427eec89c5dec1391045915fd50b7f4ef2fb" Namespace="calico-apiserver" Pod="calico-apiserver-74bc45499-2znnx" WorkloadEndpoint="ip--172--31--31--26-k8s-calico--apiserver--74bc45499--2znnx-" Jan 28 01:19:35.591364 containerd[1849]: 2026-01-28 01:19:35.340 [INFO][4804] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2019f4bd1bd7c4bd1b044b4b7995427eec89c5dec1391045915fd50b7f4ef2fb" Namespace="calico-apiserver" Pod="calico-apiserver-74bc45499-2znnx" WorkloadEndpoint="ip--172--31--31--26-k8s-calico--apiserver--74bc45499--2znnx-eth0" Jan 28 01:19:35.591364 containerd[1849]: 2026-01-28 01:19:35.415 [INFO][4834] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2019f4bd1bd7c4bd1b044b4b7995427eec89c5dec1391045915fd50b7f4ef2fb" HandleID="k8s-pod-network.2019f4bd1bd7c4bd1b044b4b7995427eec89c5dec1391045915fd50b7f4ef2fb" Workload="ip--172--31--31--26-k8s-calico--apiserver--74bc45499--2znnx-eth0" Jan 28 01:19:35.591710 containerd[1849]: 2026-01-28 01:19:35.415 [INFO][4834] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="2019f4bd1bd7c4bd1b044b4b7995427eec89c5dec1391045915fd50b7f4ef2fb" HandleID="k8s-pod-network.2019f4bd1bd7c4bd1b044b4b7995427eec89c5dec1391045915fd50b7f4ef2fb" Workload="ip--172--31--31--26-k8s-calico--apiserver--74bc45499--2znnx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00036a2c0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-31-26", "pod":"calico-apiserver-74bc45499-2znnx", "timestamp":"2026-01-28 01:19:35.415047549 +0000 UTC"}, Hostname:"ip-172-31-31-26", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 28 01:19:35.591710 containerd[1849]: 2026-01-28 01:19:35.417 [INFO][4834] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:19:35.591710 containerd[1849]: 2026-01-28 01:19:35.437 [INFO][4834] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 28 01:19:35.591710 containerd[1849]: 2026-01-28 01:19:35.437 [INFO][4834] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-31-26' Jan 28 01:19:35.591710 containerd[1849]: 2026-01-28 01:19:35.494 [INFO][4834] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2019f4bd1bd7c4bd1b044b4b7995427eec89c5dec1391045915fd50b7f4ef2fb" host="ip-172-31-31-26" Jan 28 01:19:35.591710 containerd[1849]: 2026-01-28 01:19:35.508 [INFO][4834] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-31-26" Jan 28 01:19:35.591710 containerd[1849]: 2026-01-28 01:19:35.514 [INFO][4834] ipam/ipam.go 511: Trying affinity for 192.168.24.0/26 host="ip-172-31-31-26" Jan 28 01:19:35.591710 containerd[1849]: 2026-01-28 01:19:35.517 [INFO][4834] ipam/ipam.go 158: Attempting to load block cidr=192.168.24.0/26 host="ip-172-31-31-26" Jan 28 01:19:35.591710 containerd[1849]: 2026-01-28 01:19:35.519 [INFO][4834] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.24.0/26 host="ip-172-31-31-26" Jan 28 01:19:35.592497 containerd[1849]: 2026-01-28 01:19:35.519 [INFO][4834] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.24.0/26 handle="k8s-pod-network.2019f4bd1bd7c4bd1b044b4b7995427eec89c5dec1391045915fd50b7f4ef2fb" host="ip-172-31-31-26" Jan 28 01:19:35.592497 containerd[1849]: 2026-01-28 01:19:35.524 [INFO][4834] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.2019f4bd1bd7c4bd1b044b4b7995427eec89c5dec1391045915fd50b7f4ef2fb Jan 28 01:19:35.592497 containerd[1849]: 2026-01-28 01:19:35.536 [INFO][4834] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.24.0/26 handle="k8s-pod-network.2019f4bd1bd7c4bd1b044b4b7995427eec89c5dec1391045915fd50b7f4ef2fb" host="ip-172-31-31-26" Jan 28 01:19:35.592497 containerd[1849]: 2026-01-28 01:19:35.552 [INFO][4834] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.24.4/26] block=192.168.24.0/26 handle="k8s-pod-network.2019f4bd1bd7c4bd1b044b4b7995427eec89c5dec1391045915fd50b7f4ef2fb" host="ip-172-31-31-26" Jan 28 01:19:35.592497 containerd[1849]: 2026-01-28 01:19:35.552 [INFO][4834] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.24.4/26] handle="k8s-pod-network.2019f4bd1bd7c4bd1b044b4b7995427eec89c5dec1391045915fd50b7f4ef2fb" host="ip-172-31-31-26" Jan 28 01:19:35.592497 containerd[1849]: 2026-01-28 01:19:35.552 [INFO][4834] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 28 01:19:35.592497 containerd[1849]: 2026-01-28 01:19:35.552 [INFO][4834] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.24.4/26] IPv6=[] ContainerID="2019f4bd1bd7c4bd1b044b4b7995427eec89c5dec1391045915fd50b7f4ef2fb" HandleID="k8s-pod-network.2019f4bd1bd7c4bd1b044b4b7995427eec89c5dec1391045915fd50b7f4ef2fb" Workload="ip--172--31--31--26-k8s-calico--apiserver--74bc45499--2znnx-eth0" Jan 28 01:19:35.594151 containerd[1849]: 2026-01-28 01:19:35.554 [INFO][4804] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2019f4bd1bd7c4bd1b044b4b7995427eec89c5dec1391045915fd50b7f4ef2fb" Namespace="calico-apiserver" Pod="calico-apiserver-74bc45499-2znnx" WorkloadEndpoint="ip--172--31--31--26-k8s-calico--apiserver--74bc45499--2znnx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--26-k8s-calico--apiserver--74bc45499--2znnx-eth0", GenerateName:"calico-apiserver-74bc45499-", Namespace:"calico-apiserver", SelfLink:"", UID:"d6acbfd4-88d6-4133-9434-3bfec2c327d4", ResourceVersion:"851", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 19, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"74bc45499", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-26", ContainerID:"", Pod:"calico-apiserver-74bc45499-2znnx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.24.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali97e1ca024cd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:19:35.594293 containerd[1849]: 2026-01-28 01:19:35.555 [INFO][4804] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.24.4/32] ContainerID="2019f4bd1bd7c4bd1b044b4b7995427eec89c5dec1391045915fd50b7f4ef2fb" Namespace="calico-apiserver" Pod="calico-apiserver-74bc45499-2znnx" WorkloadEndpoint="ip--172--31--31--26-k8s-calico--apiserver--74bc45499--2znnx-eth0" Jan 28 01:19:35.594293 containerd[1849]: 2026-01-28 01:19:35.555 [INFO][4804] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali97e1ca024cd ContainerID="2019f4bd1bd7c4bd1b044b4b7995427eec89c5dec1391045915fd50b7f4ef2fb" Namespace="calico-apiserver" Pod="calico-apiserver-74bc45499-2znnx" WorkloadEndpoint="ip--172--31--31--26-k8s-calico--apiserver--74bc45499--2znnx-eth0" Jan 28 01:19:35.594293 containerd[1849]: 2026-01-28 01:19:35.563 [INFO][4804] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2019f4bd1bd7c4bd1b044b4b7995427eec89c5dec1391045915fd50b7f4ef2fb" Namespace="calico-apiserver" Pod="calico-apiserver-74bc45499-2znnx" WorkloadEndpoint="ip--172--31--31--26-k8s-calico--apiserver--74bc45499--2znnx-eth0" Jan 28 01:19:35.594419 containerd[1849]: 2026-01-28 01:19:35.564 [INFO][4804] cni-plugin/k8s.go 446: Added Mac, interface name, and active container 
ID to endpoint ContainerID="2019f4bd1bd7c4bd1b044b4b7995427eec89c5dec1391045915fd50b7f4ef2fb" Namespace="calico-apiserver" Pod="calico-apiserver-74bc45499-2znnx" WorkloadEndpoint="ip--172--31--31--26-k8s-calico--apiserver--74bc45499--2znnx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--26-k8s-calico--apiserver--74bc45499--2znnx-eth0", GenerateName:"calico-apiserver-74bc45499-", Namespace:"calico-apiserver", SelfLink:"", UID:"d6acbfd4-88d6-4133-9434-3bfec2c327d4", ResourceVersion:"851", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 19, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"74bc45499", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-26", ContainerID:"2019f4bd1bd7c4bd1b044b4b7995427eec89c5dec1391045915fd50b7f4ef2fb", Pod:"calico-apiserver-74bc45499-2znnx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.24.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali97e1ca024cd", MAC:"26:7a:a7:0f:57:ec", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:19:35.594715 containerd[1849]: 2026-01-28 01:19:35.579 [INFO][4804] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2019f4bd1bd7c4bd1b044b4b7995427eec89c5dec1391045915fd50b7f4ef2fb" Namespace="calico-apiserver" Pod="calico-apiserver-74bc45499-2znnx" WorkloadEndpoint="ip--172--31--31--26-k8s-calico--apiserver--74bc45499--2znnx-eth0" Jan 28 01:19:35.598000 audit: BPF prog-id=223 op=LOAD Jan 28 01:19:35.599000 audit: BPF prog-id=224 op=LOAD Jan 28 01:19:35.599000 audit[4868]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=4856 pid=4868 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:35.599000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6162386434363131663361613965326639396531353363356537373437 Jan 28 01:19:35.599000 audit: BPF prog-id=224 op=UNLOAD Jan 28 01:19:35.599000 audit[4868]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4856 pid=4868 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:35.599000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6162386434363131663361613965326639396531353363356537373437 Jan 28 01:19:35.600000 audit: BPF prog-id=225 op=LOAD Jan 28 01:19:35.600000 audit[4868]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=4856 pid=4868 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:35.600000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6162386434363131663361613965326639396531353363356537373437 Jan 28 01:19:35.600000 audit: BPF prog-id=226 op=LOAD Jan 28 01:19:35.600000 audit[4868]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=4856 pid=4868 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:35.600000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6162386434363131663361613965326639396531353363356537373437 Jan 28 01:19:35.600000 audit: BPF prog-id=226 op=UNLOAD Jan 28 01:19:35.600000 audit[4868]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4856 pid=4868 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:35.600000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6162386434363131663361613965326639396531353363356537373437 Jan 28 01:19:35.600000 audit: BPF prog-id=225 op=UNLOAD Jan 28 01:19:35.600000 audit[4868]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4856 pid=4868 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:35.600000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6162386434363131663361613965326639396531353363356537373437 Jan 28 01:19:35.600000 audit: BPF prog-id=227 op=LOAD Jan 28 01:19:35.600000 audit[4868]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=4856 pid=4868 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:35.600000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6162386434363131663361613965326639396531353363356537373437 Jan 28 01:19:35.667991 containerd[1849]: time="2026-01-28T01:19:35.667884269Z" level=info msg="connecting to shim 2019f4bd1bd7c4bd1b044b4b7995427eec89c5dec1391045915fd50b7f4ef2fb" address="unix:///run/containerd/s/b9fa206f019f1f2197f67a9e7bbbc56961a3a9ade02befff3d914d2f288f7ef8" namespace=k8s.io protocol=ttrpc version=3 Jan 28 01:19:35.672502 kubelet[3202]: E0128 01:19:35.672344 3202 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74bc45499-4hxx4" podUID="325cd625-25dd-4d22-8523-67e469e6d0e9" Jan 28 01:19:35.674463 kubelet[3202]: E0128 01:19:35.674340 3202 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-799d7c8d49-2vrfp" podUID="9accd949-c4fa-4ce7-b3a8-524deb448a06" Jan 28 01:19:35.730475 containerd[1849]: time="2026-01-28T01:19:35.730015867Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-dbbbff994-sqgh4,Uid:c988f11c-6f16-4100-9308-ea1983457126,Namespace:calico-system,Attempt:0,} returns sandbox id \"ab8d4611f3aa9e2f99e153c5e77470bf432c2eb49865b28d050848648d72e64e\"" Jan 28 01:19:35.738546 containerd[1849]: time="2026-01-28T01:19:35.738494048Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 28 01:19:35.737000 audit[4922]: NETFILTER_CFG table=filter:128 family=2 entries=40 op=nft_register_chain pid=4922 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 28 01:19:35.737000 audit[4922]: SYSCALL arch=c000003e syscall=46 success=yes exit=20764 a0=3 a1=7fff71425db0 a2=0 a3=7fff71425d9c items=0 ppid=4484 pid=4922 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:35.737000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 28 01:19:35.747252 systemd[1]: Started cri-containerd-2019f4bd1bd7c4bd1b044b4b7995427eec89c5dec1391045915fd50b7f4ef2fb.scope - libcontainer container 
2019f4bd1bd7c4bd1b044b4b7995427eec89c5dec1391045915fd50b7f4ef2fb. Jan 28 01:19:35.799291 kernel: kauditd_printk_skb: 281 callbacks suppressed Jan 28 01:19:35.799421 kernel: audit: type=1325 audit(1769563175.793:689): table=filter:129 family=2 entries=45 op=nft_register_chain pid=4942 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 28 01:19:35.793000 audit[4942]: NETFILTER_CFG table=filter:129 family=2 entries=45 op=nft_register_chain pid=4942 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 28 01:19:35.805860 kernel: audit: type=1300 audit(1769563175.793:689): arch=c000003e syscall=46 success=yes exit=24264 a0=3 a1=7ffd0c46e7d0 a2=0 a3=7ffd0c46e7bc items=0 ppid=4484 pid=4942 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:35.793000 audit[4942]: SYSCALL arch=c000003e syscall=46 success=yes exit=24264 a0=3 a1=7ffd0c46e7d0 a2=0 a3=7ffd0c46e7bc items=0 ppid=4484 pid=4942 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:35.793000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 28 01:19:35.816759 kernel: audit: type=1327 audit(1769563175.793:689): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 28 01:19:35.933000 audit: BPF prog-id=228 op=LOAD Jan 28 01:19:35.935882 kernel: audit: type=1334 audit(1769563175.933:690): prog-id=228 op=LOAD Jan 28 01:19:35.936000 audit: BPF prog-id=229 op=LOAD Jan 28 01:19:35.939779 kernel: audit: type=1334 audit(1769563175.936:691): prog-id=229 op=LOAD Jan 28 01:19:35.936000 audit[4921]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4901 pid=4921 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:35.947468 kernel: audit: type=1300 audit(1769563175.936:691): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4901 pid=4921 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:35.936000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3230313966346264316264376334626431623034346234623739393534 Jan 28 01:19:35.955202 kernel: audit: type=1327 audit(1769563175.936:691): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3230313966346264316264376334626431623034346234623739393534 Jan 28 01:19:35.936000 audit: BPF prog-id=229 op=UNLOAD Jan 28 01:19:35.964782 kernel: audit: type=1334 audit(1769563175.936:692): prog-id=229 op=UNLOAD Jan 28 01:19:35.964889 kernel: audit: type=1300 audit(1769563175.936:692): 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4901 pid=4921 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:35.936000 audit[4921]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4901 pid=4921 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:35.970772 kernel: audit: type=1327 audit(1769563175.936:692): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3230313966346264316264376334626431623034346234623739393534 Jan 28 01:19:35.936000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3230313966346264316264376334626431623034346234623739393534 Jan 28 01:19:35.936000 audit: BPF prog-id=230 op=LOAD Jan 28 01:19:35.936000 audit[4921]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4901 pid=4921 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:35.936000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3230313966346264316264376334626431623034346234623739393534 Jan 28 01:19:35.937000 audit: BPF prog-id=231 op=LOAD Jan 28 01:19:35.937000 audit[4921]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=4901 pid=4921 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:35.937000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3230313966346264316264376334626431623034346234623739393534 Jan 28 01:19:35.937000 audit: BPF prog-id=231 op=UNLOAD Jan 28 01:19:35.937000 audit[4921]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4901 pid=4921 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:35.937000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3230313966346264316264376334626431623034346234623739393534 Jan 28 01:19:35.937000 audit: BPF prog-id=230 op=UNLOAD Jan 28 01:19:35.937000 audit[4921]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4901 pid=4921 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:35.937000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3230313966346264316264376334626431623034346234623739393534 Jan 28 01:19:35.938000 audit: BPF prog-id=232 op=LOAD Jan 28 01:19:35.938000 audit[4921]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=4901 pid=4921 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:35.938000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3230313966346264316264376334626431623034346234623739393534 Jan 28 01:19:35.984000 audit[4944]: NETFILTER_CFG table=filter:130 family=2 entries=20 op=nft_register_rule pid=4944 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:19:35.984000 audit[4944]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffe0e7b6d20 a2=0 a3=7ffe0e7b6d0c items=0 ppid=3451 pid=4944 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:35.984000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:19:35.989000 audit[4944]: NETFILTER_CFG table=nat:131 family=2 entries=14 op=nft_register_rule pid=4944 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:19:35.989000 audit[4944]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffe0e7b6d20 a2=0 a3=0 items=0 ppid=3451 pid=4944 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:35.989000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:19:36.038483 containerd[1849]: time="2026-01-28T01:19:36.038428683Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74bc45499-2znnx,Uid:d6acbfd4-88d6-4133-9434-3bfec2c327d4,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"2019f4bd1bd7c4bd1b044b4b7995427eec89c5dec1391045915fd50b7f4ef2fb\"" Jan 28 01:19:36.067967 systemd-networkd[1458]: calif960986ffda: Gained IPv6LL Jan 28 01:19:36.185590 containerd[1849]: time="2026-01-28T01:19:36.185532293Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:19:36.187933 containerd[1849]: time="2026-01-28T01:19:36.187732571Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 28 01:19:36.187933 containerd[1849]: time="2026-01-28T01:19:36.187790623Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 28 01:19:36.188172 kubelet[3202]: E0128 01:19:36.188097 3202 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 28 01:19:36.188172 kubelet[3202]: E0128 01:19:36.188156 3202 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 28 01:19:36.188585 kubelet[3202]: E0128 01:19:36.188462 3202 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2875x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-dbbbff994-sqgh4_calico-system(c988f11c-6f16-4100-9308-ea1983457126): ErrImagePull: rpc 
error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 28 01:19:36.189430 containerd[1849]: time="2026-01-28T01:19:36.188992569Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 28 01:19:36.189989 kubelet[3202]: E0128 01:19:36.189723 3202 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-dbbbff994-sqgh4" podUID="c988f11c-6f16-4100-9308-ea1983457126" Jan 28 01:19:36.222765 containerd[1849]: time="2026-01-28T01:19:36.222700535Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5fjx2,Uid:6c6224c1-45a4-4e67-9483-34412dd5913e,Namespace:calico-system,Attempt:0,}" Jan 28 01:19:36.370440 systemd-networkd[1458]: cali272fb8e5d5b: Link UP Jan 28 01:19:36.371924 systemd-networkd[1458]: cali272fb8e5d5b: Gained carrier Jan 28 01:19:36.391492 containerd[1849]: 2026-01-28 01:19:36.271 [INFO][4952] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--31--26-k8s-csi--node--driver--5fjx2-eth0 csi-node-driver- calico-system 6c6224c1-45a4-4e67-9483-34412dd5913e 727 0 2026-01-28 01:19:10 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-31-26 csi-node-driver-5fjx2 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali272fb8e5d5b [] [] }} ContainerID="07ceccf1f2a99e0fdde5b5447a4e72a3907c6d711b86a822473a82343d596e76" Namespace="calico-system" Pod="csi-node-driver-5fjx2" WorkloadEndpoint="ip--172--31--31--26-k8s-csi--node--driver--5fjx2-" Jan 28 01:19:36.391492 containerd[1849]: 2026-01-28 01:19:36.272 [INFO][4952] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="07ceccf1f2a99e0fdde5b5447a4e72a3907c6d711b86a822473a82343d596e76" Namespace="calico-system" Pod="csi-node-driver-5fjx2" WorkloadEndpoint="ip--172--31--31--26-k8s-csi--node--driver--5fjx2-eth0" Jan 28 01:19:36.391492 containerd[1849]: 2026-01-28 01:19:36.318 [INFO][4964] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="07ceccf1f2a99e0fdde5b5447a4e72a3907c6d711b86a822473a82343d596e76" HandleID="k8s-pod-network.07ceccf1f2a99e0fdde5b5447a4e72a3907c6d711b86a822473a82343d596e76" Workload="ip--172--31--31--26-k8s-csi--node--driver--5fjx2-eth0" Jan 28 01:19:36.391716 containerd[1849]: 2026-01-28 01:19:36.318 [INFO][4964] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="07ceccf1f2a99e0fdde5b5447a4e72a3907c6d711b86a822473a82343d596e76" HandleID="k8s-pod-network.07ceccf1f2a99e0fdde5b5447a4e72a3907c6d711b86a822473a82343d596e76" Workload="ip--172--31--31--26-k8s-csi--node--driver--5fjx2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f590), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-31-26", "pod":"csi-node-driver-5fjx2", 
"timestamp":"2026-01-28 01:19:36.318590625 +0000 UTC"}, Hostname:"ip-172-31-31-26", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 28 01:19:36.391716 containerd[1849]: 2026-01-28 01:19:36.318 [INFO][4964] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:19:36.391716 containerd[1849]: 2026-01-28 01:19:36.318 [INFO][4964] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 01:19:36.391716 containerd[1849]: 2026-01-28 01:19:36.318 [INFO][4964] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-31-26' Jan 28 01:19:36.391716 containerd[1849]: 2026-01-28 01:19:36.328 [INFO][4964] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.07ceccf1f2a99e0fdde5b5447a4e72a3907c6d711b86a822473a82343d596e76" host="ip-172-31-31-26" Jan 28 01:19:36.391716 containerd[1849]: 2026-01-28 01:19:36.333 [INFO][4964] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-31-26" Jan 28 01:19:36.391716 containerd[1849]: 2026-01-28 01:19:36.338 [INFO][4964] ipam/ipam.go 511: Trying affinity for 192.168.24.0/26 host="ip-172-31-31-26" Jan 28 01:19:36.391716 containerd[1849]: 2026-01-28 01:19:36.341 [INFO][4964] ipam/ipam.go 158: Attempting to load block cidr=192.168.24.0/26 host="ip-172-31-31-26" Jan 28 01:19:36.391716 containerd[1849]: 2026-01-28 01:19:36.343 [INFO][4964] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.24.0/26 host="ip-172-31-31-26" Jan 28 01:19:36.391716 containerd[1849]: 2026-01-28 01:19:36.343 [INFO][4964] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.24.0/26 handle="k8s-pod-network.07ceccf1f2a99e0fdde5b5447a4e72a3907c6d711b86a822473a82343d596e76" host="ip-172-31-31-26" Jan 28 01:19:36.392058 containerd[1849]: 2026-01-28 01:19:36.345 [INFO][4964] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.07ceccf1f2a99e0fdde5b5447a4e72a3907c6d711b86a822473a82343d596e76 Jan 28 01:19:36.392058 containerd[1849]: 2026-01-28 01:19:36.350 [INFO][4964] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.24.0/26 handle="k8s-pod-network.07ceccf1f2a99e0fdde5b5447a4e72a3907c6d711b86a822473a82343d596e76" host="ip-172-31-31-26" Jan 28 01:19:36.392058 containerd[1849]: 2026-01-28 01:19:36.362 [INFO][4964] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.24.5/26] block=192.168.24.0/26 handle="k8s-pod-network.07ceccf1f2a99e0fdde5b5447a4e72a3907c6d711b86a822473a82343d596e76" host="ip-172-31-31-26" Jan 28 01:19:36.392058 containerd[1849]: 2026-01-28 01:19:36.362 [INFO][4964] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.24.5/26] handle="k8s-pod-network.07ceccf1f2a99e0fdde5b5447a4e72a3907c6d711b86a822473a82343d596e76" host="ip-172-31-31-26" Jan 28 01:19:36.392058 containerd[1849]: 2026-01-28 01:19:36.362 [INFO][4964] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 28 01:19:36.392058 containerd[1849]: 2026-01-28 01:19:36.362 [INFO][4964] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.24.5/26] IPv6=[] ContainerID="07ceccf1f2a99e0fdde5b5447a4e72a3907c6d711b86a822473a82343d596e76" HandleID="k8s-pod-network.07ceccf1f2a99e0fdde5b5447a4e72a3907c6d711b86a822473a82343d596e76" Workload="ip--172--31--31--26-k8s-csi--node--driver--5fjx2-eth0" Jan 28 01:19:36.392193 containerd[1849]: 2026-01-28 01:19:36.365 [INFO][4952] cni-plugin/k8s.go 418: Populated endpoint ContainerID="07ceccf1f2a99e0fdde5b5447a4e72a3907c6d711b86a822473a82343d596e76" Namespace="calico-system" Pod="csi-node-driver-5fjx2" WorkloadEndpoint="ip--172--31--31--26-k8s-csi--node--driver--5fjx2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--26-k8s-csi--node--driver--5fjx2-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6c6224c1-45a4-4e67-9483-34412dd5913e", ResourceVersion:"727", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 19, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-26", ContainerID:"", Pod:"csi-node-driver-5fjx2", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.24.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali272fb8e5d5b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:19:36.392269 containerd[1849]: 2026-01-28 01:19:36.365 [INFO][4952] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.24.5/32] ContainerID="07ceccf1f2a99e0fdde5b5447a4e72a3907c6d711b86a822473a82343d596e76" Namespace="calico-system" Pod="csi-node-driver-5fjx2" WorkloadEndpoint="ip--172--31--31--26-k8s-csi--node--driver--5fjx2-eth0" Jan 28 01:19:36.392269 containerd[1849]: 2026-01-28 01:19:36.365 [INFO][4952] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali272fb8e5d5b ContainerID="07ceccf1f2a99e0fdde5b5447a4e72a3907c6d711b86a822473a82343d596e76" Namespace="calico-system" Pod="csi-node-driver-5fjx2" WorkloadEndpoint="ip--172--31--31--26-k8s-csi--node--driver--5fjx2-eth0" Jan 28 01:19:36.392269 containerd[1849]: 2026-01-28 01:19:36.372 [INFO][4952] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="07ceccf1f2a99e0fdde5b5447a4e72a3907c6d711b86a822473a82343d596e76" Namespace="calico-system" Pod="csi-node-driver-5fjx2" WorkloadEndpoint="ip--172--31--31--26-k8s-csi--node--driver--5fjx2-eth0" Jan 28 01:19:36.392339 containerd[1849]: 2026-01-28 01:19:36.373 [INFO][4952] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="07ceccf1f2a99e0fdde5b5447a4e72a3907c6d711b86a822473a82343d596e76" 
Namespace="calico-system" Pod="csi-node-driver-5fjx2" WorkloadEndpoint="ip--172--31--31--26-k8s-csi--node--driver--5fjx2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--26-k8s-csi--node--driver--5fjx2-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6c6224c1-45a4-4e67-9483-34412dd5913e", ResourceVersion:"727", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 19, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-26", ContainerID:"07ceccf1f2a99e0fdde5b5447a4e72a3907c6d711b86a822473a82343d596e76", Pod:"csi-node-driver-5fjx2", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.24.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali272fb8e5d5b", MAC:"ba:14:a2:9f:8e:bd", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:19:36.392393 containerd[1849]: 2026-01-28 01:19:36.387 [INFO][4952] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="07ceccf1f2a99e0fdde5b5447a4e72a3907c6d711b86a822473a82343d596e76" Namespace="calico-system" Pod="csi-node-driver-5fjx2" WorkloadEndpoint="ip--172--31--31--26-k8s-csi--node--driver--5fjx2-eth0" Jan 28 01:19:36.434524 containerd[1849]: time="2026-01-28T01:19:36.434428689Z" level=info msg="connecting to shim 07ceccf1f2a99e0fdde5b5447a4e72a3907c6d711b86a822473a82343d596e76" address="unix:///run/containerd/s/6413096268d234b1cda9f4d5b2b35da62dad71710edc7089993a0d1b1603cd2a" namespace=k8s.io protocol=ttrpc version=3 Jan 28 01:19:36.464210 systemd[1]: Started cri-containerd-07ceccf1f2a99e0fdde5b5447a4e72a3907c6d711b86a822473a82343d596e76.scope - libcontainer container 07ceccf1f2a99e0fdde5b5447a4e72a3907c6d711b86a822473a82343d596e76. 
Jan 28 01:19:36.466773 containerd[1849]: time="2026-01-28T01:19:36.466660854Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:19:36.469476 containerd[1849]: time="2026-01-28T01:19:36.469418710Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 28 01:19:36.469610 containerd[1849]: time="2026-01-28T01:19:36.469582052Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 28 01:19:36.470274 kubelet[3202]: E0128 01:19:36.470147 3202 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:19:36.470274 kubelet[3202]: E0128 01:19:36.470205 3202 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:19:36.470820 kubelet[3202]: E0128 01:19:36.470693 3202 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pqxhp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start 
failed in pod calico-apiserver-74bc45499-2znnx_calico-apiserver(d6acbfd4-88d6-4133-9434-3bfec2c327d4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 28 01:19:36.472804 kubelet[3202]: E0128 01:19:36.472721 3202 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74bc45499-2znnx" podUID="d6acbfd4-88d6-4133-9434-3bfec2c327d4" Jan 28 01:19:36.478000 audit: BPF prog-id=233 op=LOAD Jan 28 01:19:36.479000 audit: BPF prog-id=234 op=LOAD Jan 28 01:19:36.479000 audit[4996]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=4985 pid=4996 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:36.479000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3037636563636631663261393965306664646535623534343761346537 Jan 28 01:19:36.480000 audit: BPF prog-id=234 op=UNLOAD Jan 28 01:19:36.480000 audit[4996]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4985 pid=4996 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:36.480000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3037636563636631663261393965306664646535623534343761346537 Jan 28 01:19:36.480000 audit: BPF prog-id=235 op=LOAD Jan 28 01:19:36.480000 audit[4996]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=4985 pid=4996 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:36.480000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3037636563636631663261393965306664646535623534343761346537 Jan 28 01:19:36.480000 audit: BPF prog-id=236 op=LOAD Jan 28 01:19:36.480000 audit[4996]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=4985 pid=4996 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:36.480000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3037636563636631663261393965306664646535623534343761346537 Jan 28 01:19:36.480000 audit: BPF prog-id=236 op=UNLOAD Jan 28 01:19:36.480000 audit[4996]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4985 pid=4996 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:36.480000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3037636563636631663261393965306664646535623534343761346537 Jan 28 01:19:36.480000 audit: BPF prog-id=235 op=UNLOAD Jan 28 01:19:36.480000 audit[4996]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4985 pid=4996 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:36.480000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3037636563636631663261393965306664646535623534343761346537 Jan 28 01:19:36.480000 audit: BPF prog-id=237 op=LOAD Jan 28 01:19:36.480000 audit[4996]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=4985 pid=4996 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:36.480000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3037636563636631663261393965306664646535623534343761346537 Jan 28 01:19:36.500772 kubelet[3202]: E0128 01:19:36.498898 3202 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6c6224c1_45a4_4e67_9483_34412dd5913e.slice/cri-containerd-07ceccf1f2a99e0fdde5b5447a4e72a3907c6d711b86a822473a82343d596e76.scope\": RecentStats: unable to find data in memory cache]" Jan 28 01:19:36.506415 containerd[1849]: time="2026-01-28T01:19:36.506101709Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5fjx2,Uid:6c6224c1-45a4-4e67-9483-34412dd5913e,Namespace:calico-system,Attempt:0,} returns sandbox id \"07ceccf1f2a99e0fdde5b5447a4e72a3907c6d711b86a822473a82343d596e76\"" Jan 28 01:19:36.510133 containerd[1849]: time="2026-01-28T01:19:36.509906196Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 28 01:19:36.575000 audit[5023]: NETFILTER_CFG table=filter:132 family=2 entries=48 op=nft_register_chain pid=5023 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 28 01:19:36.575000 audit[5023]: SYSCALL arch=c000003e syscall=46 success=yes exit=23140 a0=3 a1=7fff6cfb1ba0 a2=0 a3=7fff6cfb1b8c items=0 ppid=4484 pid=5023 auid=4294967295 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:36.575000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 28 01:19:36.675077 kubelet[3202]: E0128 01:19:36.674982 3202 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74bc45499-2znnx" podUID="d6acbfd4-88d6-4133-9434-3bfec2c327d4" Jan 28 01:19:36.679183 kubelet[3202]: E0128 01:19:36.678417 3202 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-dbbbff994-sqgh4" podUID="c988f11c-6f16-4100-9308-ea1983457126" Jan 28 01:19:36.679887 kubelet[3202]: E0128 01:19:36.679795 3202 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74bc45499-4hxx4" podUID="325cd625-25dd-4d22-8523-67e469e6d0e9" Jan 28 01:19:36.721000 audit[5025]: NETFILTER_CFG table=filter:133 family=2 entries=20 op=nft_register_rule pid=5025 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:19:36.721000 audit[5025]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffeaf283a00 a2=0 a3=7ffeaf2839ec items=0 ppid=3451 pid=5025 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:36.721000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:19:36.725000 audit[5025]: NETFILTER_CFG table=nat:134 family=2 entries=14 op=nft_register_rule pid=5025 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:19:36.725000 audit[5025]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffeaf283a00 a2=0 a3=0 items=0 ppid=3451 pid=5025 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:36.725000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:19:36.759501 
containerd[1849]: time="2026-01-28T01:19:36.759460810Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:19:36.761904 containerd[1849]: time="2026-01-28T01:19:36.761717293Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 28 01:19:36.762169 containerd[1849]: time="2026-01-28T01:19:36.761810678Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 28 01:19:36.762304 kubelet[3202]: E0128 01:19:36.762277 3202 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 28 01:19:36.762380 kubelet[3202]: E0128 01:19:36.762317 3202 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 28 01:19:36.763050 kubelet[3202]: E0128 01:19:36.763003 3202 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4bxdn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-5fjx2_calico-system(6c6224c1-45a4-4e67-9483-34412dd5913e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
logger="UnhandledError" Jan 28 01:19:36.764929 containerd[1849]: time="2026-01-28T01:19:36.764899355Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 28 01:19:37.056361 containerd[1849]: time="2026-01-28T01:19:37.056315494Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:19:37.058669 containerd[1849]: time="2026-01-28T01:19:37.058559614Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 28 01:19:37.058669 containerd[1849]: time="2026-01-28T01:19:37.058616138Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 28 01:19:37.058971 kubelet[3202]: E0128 01:19:37.058889 3202 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 28 01:19:37.059051 kubelet[3202]: E0128 01:19:37.058982 3202 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 28 01:19:37.059323 kubelet[3202]: E0128 01:19:37.059118 3202 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4bxdn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-5fjx2_calico-system(6c6224c1-45a4-4e67-9483-34412dd5913e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 28 01:19:37.060617 kubelet[3202]: E0128 01:19:37.060294 3202 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-5fjx2" podUID="6c6224c1-45a4-4e67-9483-34412dd5913e" Jan 28 01:19:37.155934 systemd-networkd[1458]: cali8ece591c57f: Gained IPv6LL Jan 28 01:19:37.203042 containerd[1849]: time="2026-01-28T01:19:37.203003765Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-xwxpv,Uid:ff239d8a-c337-42b4-8142-43d1fa64b8e0,Namespace:kube-system,Attempt:0,}" Jan 28 01:19:37.203889 containerd[1849]: time="2026-01-28T01:19:37.203852078Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-pcsbp,Uid:cbd1002c-7263-43fc-8b17-789e41b44261,Namespace:calico-system,Attempt:0,}" Jan 28 01:19:37.204668 containerd[1849]: time="2026-01-28T01:19:37.204634373Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-597546f57d-dtd92,Uid:77c5b13e-b6c7-4da4-8d69-cf95701836c8,Namespace:calico-apiserver,Attempt:0,}" Jan 28 01:19:37.402153 systemd-networkd[1458]: cali243e00c19ab: Link UP Jan 28 01:19:37.404033 systemd-networkd[1458]: cali243e00c19ab: Gained carrier Jan 28 01:19:37.412495 systemd-networkd[1458]: cali97e1ca024cd: Gained IPv6LL Jan 28 01:19:37.431300 containerd[1849]: 2026-01-28 01:19:37.294 [INFO][5026] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--31--26-k8s-coredns--674b8bbfcf--xwxpv-eth0 coredns-674b8bbfcf- kube-system ff239d8a-c337-42b4-8142-43d1fa64b8e0 841 0 2026-01-28 01:18:52 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-31-26 coredns-674b8bbfcf-xwxpv eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali243e00c19ab [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="bbddedafc8d6f27eb69e7b7c8930abe2104f543e26fdc2ce132860151a27e0c3" Namespace="kube-system" Pod="coredns-674b8bbfcf-xwxpv" WorkloadEndpoint="ip--172--31--31--26-k8s-coredns--674b8bbfcf--xwxpv-" Jan 28 01:19:37.431300 containerd[1849]: 2026-01-28 01:19:37.294 [INFO][5026] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="bbddedafc8d6f27eb69e7b7c8930abe2104f543e26fdc2ce132860151a27e0c3" Namespace="kube-system" Pod="coredns-674b8bbfcf-xwxpv" WorkloadEndpoint="ip--172--31--31--26-k8s-coredns--674b8bbfcf--xwxpv-eth0" Jan 28 01:19:37.431300 containerd[1849]: 2026-01-28 01:19:37.350 [INFO][5063] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bbddedafc8d6f27eb69e7b7c8930abe2104f543e26fdc2ce132860151a27e0c3" HandleID="k8s-pod-network.bbddedafc8d6f27eb69e7b7c8930abe2104f543e26fdc2ce132860151a27e0c3" Workload="ip--172--31--31--26-k8s-coredns--674b8bbfcf--xwxpv-eth0" Jan 28 01:19:37.431768 containerd[1849]: 2026-01-28 01:19:37.351 [INFO][5063] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="bbddedafc8d6f27eb69e7b7c8930abe2104f543e26fdc2ce132860151a27e0c3" HandleID="k8s-pod-network.bbddedafc8d6f27eb69e7b7c8930abe2104f543e26fdc2ce132860151a27e0c3" Workload="ip--172--31--31--26-k8s-coredns--674b8bbfcf--xwxpv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d4fe0), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-31-26", "pod":"coredns-674b8bbfcf-xwxpv", "timestamp":"2026-01-28 01:19:37.350765063 +0000 UTC"}, Hostname:"ip-172-31-31-26", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 28 01:19:37.431768 containerd[1849]: 2026-01-28 01:19:37.351 [INFO][5063] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:19:37.431768 containerd[1849]: 2026-01-28 01:19:37.351 [INFO][5063] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 28 01:19:37.431768 containerd[1849]: 2026-01-28 01:19:37.351 [INFO][5063] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-31-26' Jan 28 01:19:37.431768 containerd[1849]: 2026-01-28 01:19:37.361 [INFO][5063] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.bbddedafc8d6f27eb69e7b7c8930abe2104f543e26fdc2ce132860151a27e0c3" host="ip-172-31-31-26" Jan 28 01:19:37.431768 containerd[1849]: 2026-01-28 01:19:37.369 [INFO][5063] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-31-26" Jan 28 01:19:37.431768 containerd[1849]: 2026-01-28 01:19:37.377 [INFO][5063] ipam/ipam.go 511: Trying affinity for 192.168.24.0/26 host="ip-172-31-31-26" Jan 28 01:19:37.431768 containerd[1849]: 2026-01-28 01:19:37.380 [INFO][5063] ipam/ipam.go 158: Attempting to load block cidr=192.168.24.0/26 host="ip-172-31-31-26" Jan 28 01:19:37.431768 containerd[1849]: 2026-01-28 01:19:37.382 [INFO][5063] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.24.0/26 host="ip-172-31-31-26" Jan 28 01:19:37.431768 containerd[1849]: 2026-01-28 01:19:37.382 [INFO][5063] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.24.0/26 handle="k8s-pod-network.bbddedafc8d6f27eb69e7b7c8930abe2104f543e26fdc2ce132860151a27e0c3" host="ip-172-31-31-26" Jan 28 01:19:37.433007 containerd[1849]: 2026-01-28 01:19:37.383 [INFO][5063] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.bbddedafc8d6f27eb69e7b7c8930abe2104f543e26fdc2ce132860151a27e0c3 Jan 28 01:19:37.433007 containerd[1849]: 2026-01-28 01:19:37.387 [INFO][5063] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.24.0/26 handle="k8s-pod-network.bbddedafc8d6f27eb69e7b7c8930abe2104f543e26fdc2ce132860151a27e0c3" host="ip-172-31-31-26" Jan 28 01:19:37.433007 containerd[1849]: 2026-01-28 01:19:37.395 [INFO][5063] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.24.6/26] block=192.168.24.0/26 handle="k8s-pod-network.bbddedafc8d6f27eb69e7b7c8930abe2104f543e26fdc2ce132860151a27e0c3" host="ip-172-31-31-26" Jan 28 01:19:37.433007 containerd[1849]: 2026-01-28 01:19:37.395 [INFO][5063] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.24.6/26] handle="k8s-pod-network.bbddedafc8d6f27eb69e7b7c8930abe2104f543e26fdc2ce132860151a27e0c3" host="ip-172-31-31-26" Jan 28 01:19:37.433007 containerd[1849]: 2026-01-28 01:19:37.395 [INFO][5063] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 28 01:19:37.433007 containerd[1849]: 2026-01-28 01:19:37.395 [INFO][5063] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.24.6/26] IPv6=[] ContainerID="bbddedafc8d6f27eb69e7b7c8930abe2104f543e26fdc2ce132860151a27e0c3" HandleID="k8s-pod-network.bbddedafc8d6f27eb69e7b7c8930abe2104f543e26fdc2ce132860151a27e0c3" Workload="ip--172--31--31--26-k8s-coredns--674b8bbfcf--xwxpv-eth0" Jan 28 01:19:37.433242 containerd[1849]: 2026-01-28 01:19:37.398 [INFO][5026] cni-plugin/k8s.go 418: Populated endpoint ContainerID="bbddedafc8d6f27eb69e7b7c8930abe2104f543e26fdc2ce132860151a27e0c3" Namespace="kube-system" Pod="coredns-674b8bbfcf-xwxpv" WorkloadEndpoint="ip--172--31--31--26-k8s-coredns--674b8bbfcf--xwxpv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--26-k8s-coredns--674b8bbfcf--xwxpv-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"ff239d8a-c337-42b4-8142-43d1fa64b8e0", ResourceVersion:"841", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 18, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-26", ContainerID:"", Pod:"coredns-674b8bbfcf-xwxpv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.24.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali243e00c19ab", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:19:37.433242 containerd[1849]: 2026-01-28 01:19:37.399 [INFO][5026] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.24.6/32] ContainerID="bbddedafc8d6f27eb69e7b7c8930abe2104f543e26fdc2ce132860151a27e0c3" Namespace="kube-system" Pod="coredns-674b8bbfcf-xwxpv" WorkloadEndpoint="ip--172--31--31--26-k8s-coredns--674b8bbfcf--xwxpv-eth0" Jan 28 01:19:37.433242 containerd[1849]: 2026-01-28 01:19:37.399 [INFO][5026] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali243e00c19ab ContainerID="bbddedafc8d6f27eb69e7b7c8930abe2104f543e26fdc2ce132860151a27e0c3" Namespace="kube-system" Pod="coredns-674b8bbfcf-xwxpv" WorkloadEndpoint="ip--172--31--31--26-k8s-coredns--674b8bbfcf--xwxpv-eth0" Jan 28 01:19:37.433242 containerd[1849]: 2026-01-28 01:19:37.407 [INFO][5026] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bbddedafc8d6f27eb69e7b7c8930abe2104f543e26fdc2ce132860151a27e0c3" Namespace="kube-system" Pod="coredns-674b8bbfcf-xwxpv" 
WorkloadEndpoint="ip--172--31--31--26-k8s-coredns--674b8bbfcf--xwxpv-eth0" Jan 28 01:19:37.433242 containerd[1849]: 2026-01-28 01:19:37.409 [INFO][5026] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="bbddedafc8d6f27eb69e7b7c8930abe2104f543e26fdc2ce132860151a27e0c3" Namespace="kube-system" Pod="coredns-674b8bbfcf-xwxpv" WorkloadEndpoint="ip--172--31--31--26-k8s-coredns--674b8bbfcf--xwxpv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--26-k8s-coredns--674b8bbfcf--xwxpv-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"ff239d8a-c337-42b4-8142-43d1fa64b8e0", ResourceVersion:"841", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 18, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-26", ContainerID:"bbddedafc8d6f27eb69e7b7c8930abe2104f543e26fdc2ce132860151a27e0c3", Pod:"coredns-674b8bbfcf-xwxpv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.24.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali243e00c19ab", MAC:"c2:3a:f2:01:35:cb", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:19:37.433242 containerd[1849]: 2026-01-28 01:19:37.423 [INFO][5026] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="bbddedafc8d6f27eb69e7b7c8930abe2104f543e26fdc2ce132860151a27e0c3" Namespace="kube-system" Pod="coredns-674b8bbfcf-xwxpv" WorkloadEndpoint="ip--172--31--31--26-k8s-coredns--674b8bbfcf--xwxpv-eth0" Jan 28 01:19:37.488196 containerd[1849]: time="2026-01-28T01:19:37.488072361Z" level=info msg="connecting to shim bbddedafc8d6f27eb69e7b7c8930abe2104f543e26fdc2ce132860151a27e0c3" address="unix:///run/containerd/s/c77809142462bb77f1846ca2d77d06f209d5da6685e9c37a8c4e72d727952a01" namespace=k8s.io protocol=ttrpc version=3 Jan 28 01:19:37.538931 systemd[1]: Started cri-containerd-bbddedafc8d6f27eb69e7b7c8930abe2104f543e26fdc2ce132860151a27e0c3.scope - libcontainer container bbddedafc8d6f27eb69e7b7c8930abe2104f543e26fdc2ce132860151a27e0c3. 
Jan 28 01:19:37.553453 systemd-networkd[1458]: cali43c08ede274: Link UP Jan 28 01:19:37.561362 systemd-networkd[1458]: cali43c08ede274: Gained carrier Jan 28 01:19:37.589000 audit[5132]: NETFILTER_CFG table=filter:135 family=2 entries=64 op=nft_register_chain pid=5132 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 28 01:19:37.589000 audit[5132]: SYSCALL arch=c000003e syscall=46 success=yes exit=30156 a0=3 a1=7ffdfc3c2720 a2=0 a3=7ffdfc3c270c items=0 ppid=4484 pid=5132 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:37.589000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 28 01:19:37.602000 audit: BPF prog-id=238 op=LOAD Jan 28 01:19:37.602000 audit: BPF prog-id=239 op=LOAD Jan 28 01:19:37.602000 audit[5115]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=5102 pid=5115 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:37.602000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6262646465646166633864366632376562363965376237633839333061 Jan 28 01:19:37.604000 audit: BPF prog-id=239 op=UNLOAD Jan 28 01:19:37.604000 audit[5115]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5102 pid=5115 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:37.604000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6262646465646166633864366632376562363965376237633839333061 Jan 28 01:19:37.605000 audit: BPF prog-id=240 op=LOAD Jan 28 01:19:37.605000 audit[5115]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=5102 pid=5115 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:37.605000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6262646465646166633864366632376562363965376237633839333061 Jan 28 01:19:37.607000 audit: BPF prog-id=241 op=LOAD Jan 28 01:19:37.607000 audit[5115]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=5102 pid=5115 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:37.607000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6262646465646166633864366632376562363965376237633839333061 Jan 28 01:19:37.607000 audit: BPF prog-id=241 op=UNLOAD Jan 28 01:19:37.607000 audit[5115]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5102 pid=5115 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:37.607000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6262646465646166633864366632376562363965376237633839333061 Jan 28 01:19:37.608000 audit: BPF prog-id=240 op=UNLOAD Jan 28 01:19:37.608000 audit[5115]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5102 pid=5115 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:37.608000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6262646465646166633864366632376562363965376237633839333061 Jan 28 01:19:37.609000 audit: BPF prog-id=242 op=LOAD Jan 28 01:19:37.613231 containerd[1849]: 2026-01-28 01:19:37.302 [INFO][5030] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--31--26-k8s-goldmane--666569f655--pcsbp-eth0 goldmane-666569f655- calico-system cbd1002c-7263-43fc-8b17-789e41b44261 852 0 2026-01-28 01:19:07 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ip-172-31-31-26 goldmane-666569f655-pcsbp eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali43c08ede274 [] [] }} ContainerID="2fc7e413907c903a77097fe1603a618f1a519b7f3bbb53ebc6506a63e1b74642" Namespace="calico-system" Pod="goldmane-666569f655-pcsbp" WorkloadEndpoint="ip--172--31--31--26-k8s-goldmane--666569f655--pcsbp-" Jan 28 01:19:37.613231 containerd[1849]: 2026-01-28 01:19:37.303 [INFO][5030] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2fc7e413907c903a77097fe1603a618f1a519b7f3bbb53ebc6506a63e1b74642" Namespace="calico-system" Pod="goldmane-666569f655-pcsbp" WorkloadEndpoint="ip--172--31--31--26-k8s-goldmane--666569f655--pcsbp-eth0" Jan 28 01:19:37.613231 containerd[1849]: 2026-01-28 01:19:37.372 [INFO][5069] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2fc7e413907c903a77097fe1603a618f1a519b7f3bbb53ebc6506a63e1b74642" HandleID="k8s-pod-network.2fc7e413907c903a77097fe1603a618f1a519b7f3bbb53ebc6506a63e1b74642" Workload="ip--172--31--31--26-k8s-goldmane--666569f655--pcsbp-eth0" Jan 28 01:19:37.613231 containerd[1849]: 2026-01-28 01:19:37.373 [INFO][5069] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="2fc7e413907c903a77097fe1603a618f1a519b7f3bbb53ebc6506a63e1b74642" 
HandleID="k8s-pod-network.2fc7e413907c903a77097fe1603a618f1a519b7f3bbb53ebc6506a63e1b74642" Workload="ip--172--31--31--26-k8s-goldmane--666569f655--pcsbp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5800), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-31-26", "pod":"goldmane-666569f655-pcsbp", "timestamp":"2026-01-28 01:19:37.372260867 +0000 UTC"}, Hostname:"ip-172-31-31-26", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 28 01:19:37.613231 containerd[1849]: 2026-01-28 01:19:37.373 [INFO][5069] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:19:37.613231 containerd[1849]: 2026-01-28 01:19:37.395 [INFO][5069] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 01:19:37.613231 containerd[1849]: 2026-01-28 01:19:37.395 [INFO][5069] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-31-26' Jan 28 01:19:37.613231 containerd[1849]: 2026-01-28 01:19:37.461 [INFO][5069] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2fc7e413907c903a77097fe1603a618f1a519b7f3bbb53ebc6506a63e1b74642" host="ip-172-31-31-26" Jan 28 01:19:37.613231 containerd[1849]: 2026-01-28 01:19:37.477 [INFO][5069] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-31-26" Jan 28 01:19:37.613231 containerd[1849]: 2026-01-28 01:19:37.486 [INFO][5069] ipam/ipam.go 511: Trying affinity for 192.168.24.0/26 host="ip-172-31-31-26" Jan 28 01:19:37.613231 containerd[1849]: 2026-01-28 01:19:37.491 [INFO][5069] ipam/ipam.go 158: Attempting to load block cidr=192.168.24.0/26 host="ip-172-31-31-26" Jan 28 01:19:37.613231 containerd[1849]: 2026-01-28 01:19:37.495 [INFO][5069] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.24.0/26 host="ip-172-31-31-26" Jan 28 01:19:37.613231 containerd[1849]: 2026-01-28 01:19:37.495 [INFO][5069] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.24.0/26 handle="k8s-pod-network.2fc7e413907c903a77097fe1603a618f1a519b7f3bbb53ebc6506a63e1b74642" host="ip-172-31-31-26" Jan 28 01:19:37.613231 containerd[1849]: 2026-01-28 01:19:37.497 [INFO][5069] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.2fc7e413907c903a77097fe1603a618f1a519b7f3bbb53ebc6506a63e1b74642 Jan 28 01:19:37.613231 containerd[1849]: 2026-01-28 01:19:37.510 [INFO][5069] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.24.0/26 handle="k8s-pod-network.2fc7e413907c903a77097fe1603a618f1a519b7f3bbb53ebc6506a63e1b74642" host="ip-172-31-31-26" Jan 28 01:19:37.613231 containerd[1849]: 2026-01-28 01:19:37.528 [INFO][5069] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.24.7/26] block=192.168.24.0/26 handle="k8s-pod-network.2fc7e413907c903a77097fe1603a618f1a519b7f3bbb53ebc6506a63e1b74642" host="ip-172-31-31-26" Jan 28 01:19:37.613231 containerd[1849]: 2026-01-28 01:19:37.529 [INFO][5069] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.24.7/26] handle="k8s-pod-network.2fc7e413907c903a77097fe1603a618f1a519b7f3bbb53ebc6506a63e1b74642" host="ip-172-31-31-26" Jan 28 01:19:37.613231 containerd[1849]: 2026-01-28 01:19:37.530 [INFO][5069] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 28 01:19:37.613231 containerd[1849]: 2026-01-28 01:19:37.530 [INFO][5069] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.24.7/26] IPv6=[] ContainerID="2fc7e413907c903a77097fe1603a618f1a519b7f3bbb53ebc6506a63e1b74642" HandleID="k8s-pod-network.2fc7e413907c903a77097fe1603a618f1a519b7f3bbb53ebc6506a63e1b74642" Workload="ip--172--31--31--26-k8s-goldmane--666569f655--pcsbp-eth0" Jan 28 01:19:37.609000 audit[5115]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=5102 pid=5115 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:37.609000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6262646465646166633864366632376562363965376237633839333061 Jan 28 01:19:37.616396 containerd[1849]: 2026-01-28 01:19:37.537 [INFO][5030] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2fc7e413907c903a77097fe1603a618f1a519b7f3bbb53ebc6506a63e1b74642" Namespace="calico-system" Pod="goldmane-666569f655-pcsbp" WorkloadEndpoint="ip--172--31--31--26-k8s-goldmane--666569f655--pcsbp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--26-k8s-goldmane--666569f655--pcsbp-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"cbd1002c-7263-43fc-8b17-789e41b44261", ResourceVersion:"852", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 19, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-26", ContainerID:"", Pod:"goldmane-666569f655-pcsbp", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.24.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali43c08ede274", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:19:37.616396 containerd[1849]: 2026-01-28 01:19:37.537 [INFO][5030] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.24.7/32] ContainerID="2fc7e413907c903a77097fe1603a618f1a519b7f3bbb53ebc6506a63e1b74642" Namespace="calico-system" Pod="goldmane-666569f655-pcsbp" WorkloadEndpoint="ip--172--31--31--26-k8s-goldmane--666569f655--pcsbp-eth0" Jan 28 01:19:37.616396 containerd[1849]: 2026-01-28 01:19:37.537 [INFO][5030] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali43c08ede274 ContainerID="2fc7e413907c903a77097fe1603a618f1a519b7f3bbb53ebc6506a63e1b74642" Namespace="calico-system" Pod="goldmane-666569f655-pcsbp" WorkloadEndpoint="ip--172--31--31--26-k8s-goldmane--666569f655--pcsbp-eth0" Jan 28 01:19:37.616396 containerd[1849]: 
2026-01-28 01:19:37.565 [INFO][5030] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2fc7e413907c903a77097fe1603a618f1a519b7f3bbb53ebc6506a63e1b74642" Namespace="calico-system" Pod="goldmane-666569f655-pcsbp" WorkloadEndpoint="ip--172--31--31--26-k8s-goldmane--666569f655--pcsbp-eth0" Jan 28 01:19:37.616396 containerd[1849]: 2026-01-28 01:19:37.566 [INFO][5030] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2fc7e413907c903a77097fe1603a618f1a519b7f3bbb53ebc6506a63e1b74642" Namespace="calico-system" Pod="goldmane-666569f655-pcsbp" WorkloadEndpoint="ip--172--31--31--26-k8s-goldmane--666569f655--pcsbp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--26-k8s-goldmane--666569f655--pcsbp-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"cbd1002c-7263-43fc-8b17-789e41b44261", ResourceVersion:"852", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 19, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-26", ContainerID:"2fc7e413907c903a77097fe1603a618f1a519b7f3bbb53ebc6506a63e1b74642", Pod:"goldmane-666569f655-pcsbp", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.24.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali43c08ede274", MAC:"26:f7:63:ff:94:ea", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:19:37.616396 containerd[1849]: 2026-01-28 01:19:37.595 [INFO][5030] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2fc7e413907c903a77097fe1603a618f1a519b7f3bbb53ebc6506a63e1b74642" Namespace="calico-system" Pod="goldmane-666569f655-pcsbp" WorkloadEndpoint="ip--172--31--31--26-k8s-goldmane--666569f655--pcsbp-eth0" Jan 28 01:19:37.678794 containerd[1849]: time="2026-01-28T01:19:37.678637884Z" level=info msg="connecting to shim 2fc7e413907c903a77097fe1603a618f1a519b7f3bbb53ebc6506a63e1b74642" address="unix:///run/containerd/s/f0afbf1386a14c8df0fb2733e20384768dd838d8eacf2b8f4f6cb349dd96952a" namespace=k8s.io protocol=ttrpc version=3 Jan 28 01:19:37.691054 kubelet[3202]: E0128 01:19:37.691012 3202 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-dbbbff994-sqgh4" podUID="c988f11c-6f16-4100-9308-ea1983457126" Jan 28 01:19:37.693472 kubelet[3202]: E0128 01:19:37.691554 3202 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74bc45499-2znnx" podUID="d6acbfd4-88d6-4133-9434-3bfec2c327d4" Jan 28 01:19:37.698324 kubelet[3202]: E0128 01:19:37.697108 3202 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-5fjx2" podUID="6c6224c1-45a4-4e67-9483-34412dd5913e" Jan 28 01:19:37.725858 systemd-networkd[1458]: calid8fab81d167: Link UP Jan 28 01:19:37.727931 systemd-networkd[1458]: calid8fab81d167: Gained carrier Jan 28 01:19:37.766779 containerd[1849]: 2026-01-28 01:19:37.326 [INFO][5041] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--31--26-k8s-calico--apiserver--597546f57d--dtd92-eth0 calico-apiserver-597546f57d- calico-apiserver 77c5b13e-b6c7-4da4-8d69-cf95701836c8 850 0 2026-01-28 01:19:04 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:597546f57d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-31-26 calico-apiserver-597546f57d-dtd92 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calid8fab81d167 [] [] }} ContainerID="38d5787637aa543243ef5a2b18bd3c5eecd3fd421473c3d1e8f0e6fcd2f7fdfa" Namespace="calico-apiserver" Pod="calico-apiserver-597546f57d-dtd92" WorkloadEndpoint="ip--172--31--31--26-k8s-calico--apiserver--597546f57d--dtd92-" Jan 28 01:19:37.766779 containerd[1849]: 2026-01-28 01:19:37.326 [INFO][5041] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="38d5787637aa543243ef5a2b18bd3c5eecd3fd421473c3d1e8f0e6fcd2f7fdfa" Namespace="calico-apiserver" Pod="calico-apiserver-597546f57d-dtd92" WorkloadEndpoint="ip--172--31--31--26-k8s-calico--apiserver--597546f57d--dtd92-eth0" Jan 28 01:19:37.766779 containerd[1849]: 2026-01-28 01:19:37.383 [INFO][5077] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="38d5787637aa543243ef5a2b18bd3c5eecd3fd421473c3d1e8f0e6fcd2f7fdfa" HandleID="k8s-pod-network.38d5787637aa543243ef5a2b18bd3c5eecd3fd421473c3d1e8f0e6fcd2f7fdfa" Workload="ip--172--31--31--26-k8s-calico--apiserver--597546f57d--dtd92-eth0" Jan 28 01:19:37.766779 containerd[1849]: 2026-01-28 01:19:37.383 [INFO][5077] ipam/ipam_plugin.go 275: Auto assigning IP 
ContainerID="38d5787637aa543243ef5a2b18bd3c5eecd3fd421473c3d1e8f0e6fcd2f7fdfa" HandleID="k8s-pod-network.38d5787637aa543243ef5a2b18bd3c5eecd3fd421473c3d1e8f0e6fcd2f7fdfa" Workload="ip--172--31--31--26-k8s-calico--apiserver--597546f57d--dtd92-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00025bb30), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-31-26", "pod":"calico-apiserver-597546f57d-dtd92", "timestamp":"2026-01-28 01:19:37.383001876 +0000 UTC"}, Hostname:"ip-172-31-31-26", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 28 01:19:37.766779 containerd[1849]: 2026-01-28 01:19:37.383 [INFO][5077] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:19:37.766779 containerd[1849]: 2026-01-28 01:19:37.530 [INFO][5077] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 01:19:37.766779 containerd[1849]: 2026-01-28 01:19:37.531 [INFO][5077] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-31-26' Jan 28 01:19:37.766779 containerd[1849]: 2026-01-28 01:19:37.585 [INFO][5077] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.38d5787637aa543243ef5a2b18bd3c5eecd3fd421473c3d1e8f0e6fcd2f7fdfa" host="ip-172-31-31-26" Jan 28 01:19:37.766779 containerd[1849]: 2026-01-28 01:19:37.612 [INFO][5077] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-31-26" Jan 28 01:19:37.766779 containerd[1849]: 2026-01-28 01:19:37.626 [INFO][5077] ipam/ipam.go 511: Trying affinity for 192.168.24.0/26 host="ip-172-31-31-26" Jan 28 01:19:37.766779 containerd[1849]: 2026-01-28 01:19:37.629 [INFO][5077] ipam/ipam.go 158: Attempting to load block cidr=192.168.24.0/26 host="ip-172-31-31-26" Jan 28 01:19:37.766779 containerd[1849]: 2026-01-28 01:19:37.635 [INFO][5077] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.24.0/26 host="ip-172-31-31-26" Jan 28 01:19:37.766779 containerd[1849]: 2026-01-28 01:19:37.635 [INFO][5077] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.24.0/26 handle="k8s-pod-network.38d5787637aa543243ef5a2b18bd3c5eecd3fd421473c3d1e8f0e6fcd2f7fdfa" host="ip-172-31-31-26" Jan 28 01:19:37.766779 containerd[1849]: 2026-01-28 01:19:37.640 [INFO][5077] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.38d5787637aa543243ef5a2b18bd3c5eecd3fd421473c3d1e8f0e6fcd2f7fdfa Jan 28 01:19:37.766779 containerd[1849]: 2026-01-28 01:19:37.660 [INFO][5077] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.24.0/26 handle="k8s-pod-network.38d5787637aa543243ef5a2b18bd3c5eecd3fd421473c3d1e8f0e6fcd2f7fdfa" host="ip-172-31-31-26" Jan 28 01:19:37.766779 containerd[1849]: 2026-01-28 01:19:37.685 [INFO][5077] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.24.8/26] block=192.168.24.0/26 handle="k8s-pod-network.38d5787637aa543243ef5a2b18bd3c5eecd3fd421473c3d1e8f0e6fcd2f7fdfa" host="ip-172-31-31-26" Jan 28 01:19:37.766779 containerd[1849]: 2026-01-28 01:19:37.685 [INFO][5077] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.24.8/26] handle="k8s-pod-network.38d5787637aa543243ef5a2b18bd3c5eecd3fd421473c3d1e8f0e6fcd2f7fdfa" host="ip-172-31-31-26" Jan 28 01:19:37.766779 containerd[1849]: 2026-01-28 01:19:37.685 [INFO][5077] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 28 01:19:37.766779 containerd[1849]: 2026-01-28 01:19:37.686 [INFO][5077] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.24.8/26] IPv6=[] ContainerID="38d5787637aa543243ef5a2b18bd3c5eecd3fd421473c3d1e8f0e6fcd2f7fdfa" HandleID="k8s-pod-network.38d5787637aa543243ef5a2b18bd3c5eecd3fd421473c3d1e8f0e6fcd2f7fdfa" Workload="ip--172--31--31--26-k8s-calico--apiserver--597546f57d--dtd92-eth0" Jan 28 01:19:37.767771 containerd[1849]: 2026-01-28 01:19:37.710 [INFO][5041] cni-plugin/k8s.go 418: Populated endpoint ContainerID="38d5787637aa543243ef5a2b18bd3c5eecd3fd421473c3d1e8f0e6fcd2f7fdfa" Namespace="calico-apiserver" Pod="calico-apiserver-597546f57d-dtd92" WorkloadEndpoint="ip--172--31--31--26-k8s-calico--apiserver--597546f57d--dtd92-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--26-k8s-calico--apiserver--597546f57d--dtd92-eth0", GenerateName:"calico-apiserver-597546f57d-", Namespace:"calico-apiserver", SelfLink:"", UID:"77c5b13e-b6c7-4da4-8d69-cf95701836c8", ResourceVersion:"850", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 19, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"597546f57d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-26", ContainerID:"", Pod:"calico-apiserver-597546f57d-dtd92", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.24.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid8fab81d167", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:19:37.767771 containerd[1849]: 2026-01-28 01:19:37.711 [INFO][5041] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.24.8/32] ContainerID="38d5787637aa543243ef5a2b18bd3c5eecd3fd421473c3d1e8f0e6fcd2f7fdfa" Namespace="calico-apiserver" Pod="calico-apiserver-597546f57d-dtd92" WorkloadEndpoint="ip--172--31--31--26-k8s-calico--apiserver--597546f57d--dtd92-eth0" Jan 28 01:19:37.767771 containerd[1849]: 2026-01-28 01:19:37.711 [INFO][5041] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid8fab81d167 ContainerID="38d5787637aa543243ef5a2b18bd3c5eecd3fd421473c3d1e8f0e6fcd2f7fdfa" Namespace="calico-apiserver" Pod="calico-apiserver-597546f57d-dtd92" WorkloadEndpoint="ip--172--31--31--26-k8s-calico--apiserver--597546f57d--dtd92-eth0" Jan 28 01:19:37.767771 containerd[1849]: 2026-01-28 01:19:37.729 [INFO][5041] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="38d5787637aa543243ef5a2b18bd3c5eecd3fd421473c3d1e8f0e6fcd2f7fdfa" Namespace="calico-apiserver" Pod="calico-apiserver-597546f57d-dtd92" WorkloadEndpoint="ip--172--31--31--26-k8s-calico--apiserver--597546f57d--dtd92-eth0" Jan 28 01:19:37.767771 containerd[1849]: 2026-01-28 01:19:37.730 [INFO][5041] cni-plugin/k8s.go 446: Added Mac, interface name, and 
active container ID to endpoint ContainerID="38d5787637aa543243ef5a2b18bd3c5eecd3fd421473c3d1e8f0e6fcd2f7fdfa" Namespace="calico-apiserver" Pod="calico-apiserver-597546f57d-dtd92" WorkloadEndpoint="ip--172--31--31--26-k8s-calico--apiserver--597546f57d--dtd92-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--26-k8s-calico--apiserver--597546f57d--dtd92-eth0", GenerateName:"calico-apiserver-597546f57d-", Namespace:"calico-apiserver", SelfLink:"", UID:"77c5b13e-b6c7-4da4-8d69-cf95701836c8", ResourceVersion:"850", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 19, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"597546f57d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-26", ContainerID:"38d5787637aa543243ef5a2b18bd3c5eecd3fd421473c3d1e8f0e6fcd2f7fdfa", Pod:"calico-apiserver-597546f57d-dtd92", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.24.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid8fab81d167", MAC:"76:c3:a7:0d:ab:56", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:19:37.767771 containerd[1849]: 2026-01-28 01:19:37.753 [INFO][5041] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="38d5787637aa543243ef5a2b18bd3c5eecd3fd421473c3d1e8f0e6fcd2f7fdfa" Namespace="calico-apiserver" Pod="calico-apiserver-597546f57d-dtd92" WorkloadEndpoint="ip--172--31--31--26-k8s-calico--apiserver--597546f57d--dtd92-eth0" Jan 28 01:19:37.785205 containerd[1849]: time="2026-01-28T01:19:37.785015818Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-xwxpv,Uid:ff239d8a-c337-42b4-8142-43d1fa64b8e0,Namespace:kube-system,Attempt:0,} returns sandbox id \"bbddedafc8d6f27eb69e7b7c8930abe2104f543e26fdc2ce132860151a27e0c3\"" Jan 28 01:19:37.799304 systemd[1]: Started cri-containerd-2fc7e413907c903a77097fe1603a618f1a519b7f3bbb53ebc6506a63e1b74642.scope - libcontainer container 2fc7e413907c903a77097fe1603a618f1a519b7f3bbb53ebc6506a63e1b74642. 
Jan 28 01:19:37.803178 containerd[1849]: time="2026-01-28T01:19:37.803106848Z" level=info msg="CreateContainer within sandbox \"bbddedafc8d6f27eb69e7b7c8930abe2104f543e26fdc2ce132860151a27e0c3\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 28 01:19:37.825000 audit[5162]: NETFILTER_CFG table=filter:136 family=2 entries=60 op=nft_register_chain pid=5162 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 28 01:19:37.825000 audit[5162]: SYSCALL arch=c000003e syscall=46 success=yes exit=29916 a0=3 a1=7ffc491c98f0 a2=0 a3=7ffc491c98dc items=0 ppid=4484 pid=5162 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:37.825000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 28 01:19:37.867000 audit: BPF prog-id=243 op=LOAD Jan 28 01:19:37.869000 audit: BPF prog-id=244 op=LOAD Jan 28 01:19:37.869000 audit[5170]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8238 a2=98 a3=0 items=0 ppid=5155 pid=5170 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:37.869000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3266633765343133393037633930336137373039376665313630336136 Jan 28 01:19:37.869000 audit: BPF prog-id=244 op=UNLOAD Jan 28 01:19:37.869000 audit[5170]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5155 pid=5170 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:37.869000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3266633765343133393037633930336137373039376665313630336136 Jan 28 01:19:37.869000 audit: BPF prog-id=245 op=LOAD Jan 28 01:19:37.869000 audit[5170]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=5155 pid=5170 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:37.869000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3266633765343133393037633930336137373039376665313630336136 Jan 28 01:19:37.869000 audit: BPF prog-id=246 op=LOAD Jan 28 01:19:37.869000 audit[5170]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=5155 pid=5170 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:37.869000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3266633765343133393037633930336137373039376665313630336136 Jan 28 01:19:37.869000 audit: BPF prog-id=246 op=UNLOAD Jan 28 01:19:37.869000 audit[5170]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5155 pid=5170 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:37.869000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3266633765343133393037633930336137373039376665313630336136 Jan 28 01:19:37.869000 audit: BPF prog-id=245 op=UNLOAD Jan 28 01:19:37.869000 audit[5170]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5155 pid=5170 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:37.869000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3266633765343133393037633930336137373039376665313630336136 Jan 28 01:19:37.869000 audit: BPF prog-id=247 op=LOAD Jan 28 01:19:37.869000 audit[5170]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 ppid=5155 pid=5170 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:37.869000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3266633765343133393037633930336137373039376665313630336136 Jan 28 01:19:37.873530 containerd[1849]: time="2026-01-28T01:19:37.871982953Z" level=info msg="connecting to shim 38d5787637aa543243ef5a2b18bd3c5eecd3fd421473c3d1e8f0e6fcd2f7fdfa" address="unix:///run/containerd/s/714c9600006f71003cb07194f675e16a46d0d16e2ad105b37765f7925faa3a7e" namespace=k8s.io protocol=ttrpc version=3 Jan 28 01:19:37.929000 audit[5229]: NETFILTER_CFG table=filter:137 family=2 entries=57 op=nft_register_chain pid=5229 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 28 01:19:37.929000 audit[5229]: SYSCALL arch=c000003e syscall=46 success=yes exit=27812 a0=3 a1=7fff17163370 a2=0 a3=7fff1716335c items=0 ppid=4484 pid=5229 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:37.929000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 28 01:19:37.940704 containerd[1849]: time="2026-01-28T01:19:37.940471917Z" level=info msg="Container dd8775e2ea59ddc050f0d3b5f557f858a6fb7be7c317670247ef3daa35893038: CDI devices from CRI 
Config.CDIDevices: []" Jan 28 01:19:37.948410 systemd[1]: Started cri-containerd-38d5787637aa543243ef5a2b18bd3c5eecd3fd421473c3d1e8f0e6fcd2f7fdfa.scope - libcontainer container 38d5787637aa543243ef5a2b18bd3c5eecd3fd421473c3d1e8f0e6fcd2f7fdfa. Jan 28 01:19:37.972697 containerd[1849]: time="2026-01-28T01:19:37.972625717Z" level=info msg="CreateContainer within sandbox \"bbddedafc8d6f27eb69e7b7c8930abe2104f543e26fdc2ce132860151a27e0c3\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"dd8775e2ea59ddc050f0d3b5f557f858a6fb7be7c317670247ef3daa35893038\"" Jan 28 01:19:37.979792 containerd[1849]: time="2026-01-28T01:19:37.979725133Z" level=info msg="StartContainer for \"dd8775e2ea59ddc050f0d3b5f557f858a6fb7be7c317670247ef3daa35893038\"" Jan 28 01:19:37.982851 containerd[1849]: time="2026-01-28T01:19:37.982802907Z" level=info msg="connecting to shim dd8775e2ea59ddc050f0d3b5f557f858a6fb7be7c317670247ef3daa35893038" address="unix:///run/containerd/s/c77809142462bb77f1846ca2d77d06f209d5da6685e9c37a8c4e72d727952a01" protocol=ttrpc version=3 Jan 28 01:19:38.018265 systemd[1]: Started cri-containerd-dd8775e2ea59ddc050f0d3b5f557f858a6fb7be7c317670247ef3daa35893038.scope - libcontainer container dd8775e2ea59ddc050f0d3b5f557f858a6fb7be7c317670247ef3daa35893038. Jan 28 01:19:38.023529 containerd[1849]: time="2026-01-28T01:19:38.023454430Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-pcsbp,Uid:cbd1002c-7263-43fc-8b17-789e41b44261,Namespace:calico-system,Attempt:0,} returns sandbox id \"2fc7e413907c903a77097fe1603a618f1a519b7f3bbb53ebc6506a63e1b74642\"" Jan 28 01:19:38.027991 containerd[1849]: time="2026-01-28T01:19:38.027955184Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 28 01:19:38.042000 audit: BPF prog-id=248 op=LOAD Jan 28 01:19:38.045000 audit: BPF prog-id=249 op=LOAD Jan 28 01:19:38.045000 audit[5244]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000128238 a2=98 a3=0 items=0 ppid=5102 pid=5244 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:38.045000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6464383737356532656135396464633035306630643362356635353766 Jan 28 01:19:38.045000 audit: BPF prog-id=249 op=UNLOAD Jan 28 01:19:38.045000 audit[5244]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5102 pid=5244 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:38.045000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6464383737356532656135396464633035306630643362356635353766 Jan 28 01:19:38.045000 audit: BPF prog-id=250 op=LOAD Jan 28 01:19:38.045000 audit[5244]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000128488 a2=98 a3=0 items=0 ppid=5102 pid=5244 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) 
Jan 28 01:19:38.045000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6464383737356532656135396464633035306630643362356635353766 Jan 28 01:19:38.045000 audit: BPF prog-id=251 op=LOAD Jan 28 01:19:38.045000 audit[5244]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000128218 a2=98 a3=0 items=0 ppid=5102 pid=5244 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:38.045000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6464383737356532656135396464633035306630643362356635353766 Jan 28 01:19:38.046000 audit: BPF prog-id=251 op=UNLOAD Jan 28 01:19:38.046000 audit[5244]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5102 pid=5244 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:38.046000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6464383737356532656135396464633035306630643362356635353766 Jan 28 01:19:38.046000 audit: BPF prog-id=250 op=UNLOAD Jan 28 01:19:38.046000 audit[5244]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5102 pid=5244 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:38.046000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6464383737356532656135396464633035306630643362356635353766 Jan 28 01:19:38.048000 audit: BPF prog-id=252 op=LOAD Jan 28 01:19:38.048000 audit: BPF prog-id=253 op=LOAD Jan 28 01:19:38.048000 audit[5244]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001286e8 a2=98 a3=0 items=0 ppid=5102 pid=5244 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:38.048000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6464383737356532656135396464633035306630643362356635353766 Jan 28 01:19:38.050000 audit: BPF prog-id=254 op=LOAD Jan 28 01:19:38.050000 audit[5224]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=5210 pid=5224 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:38.050000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3338643537383736333761613534333234336566356132623138626433 Jan 28 01:19:38.050000 audit: BPF prog-id=254 op=UNLOAD Jan 28 01:19:38.050000 audit[5224]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5210 pid=5224 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:38.050000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3338643537383736333761613534333234336566356132623138626433 Jan 28 01:19:38.050000 audit: BPF prog-id=255 op=LOAD Jan 28 01:19:38.050000 audit[5224]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=5210 pid=5224 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:38.050000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3338643537383736333761613534333234336566356132623138626433 Jan 28 01:19:38.050000 audit: BPF prog-id=256 op=LOAD Jan 28 01:19:38.050000 audit[5224]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=5210 pid=5224 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:38.050000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3338643537383736333761613534333234336566356132623138626433 Jan 28 01:19:38.051000 audit: BPF prog-id=256 op=UNLOAD Jan 28 01:19:38.051000 audit[5224]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5210 pid=5224 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:38.051000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3338643537383736333761613534333234336566356132623138626433 Jan 28 01:19:38.051000 audit: BPF prog-id=255 op=UNLOAD Jan 28 01:19:38.051000 audit[5224]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5210 pid=5224 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:38.051000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3338643537383736333761613534333234336566356132623138626433 Jan 28 01:19:38.051000 audit: BPF prog-id=257 op=LOAD Jan 28 01:19:38.051000 audit[5224]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=5210 pid=5224 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:38.051000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3338643537383736333761613534333234336566356132623138626433 Jan 28 01:19:38.081063 containerd[1849]: time="2026-01-28T01:19:38.081027890Z" level=info msg="StartContainer for \"dd8775e2ea59ddc050f0d3b5f557f858a6fb7be7c317670247ef3daa35893038\" returns successfully" Jan 28 01:19:38.114136 containerd[1849]: time="2026-01-28T01:19:38.114066008Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-597546f57d-dtd92,Uid:77c5b13e-b6c7-4da4-8d69-cf95701836c8,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"38d5787637aa543243ef5a2b18bd3c5eecd3fd421473c3d1e8f0e6fcd2f7fdfa\"" Jan 28 01:19:38.206966 containerd[1849]: time="2026-01-28T01:19:38.206833530Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-ggpx4,Uid:4f85ac2a-2c17-47d1-bfad-b39e838e4361,Namespace:kube-system,Attempt:0,}" Jan 28 01:19:38.244922 systemd-networkd[1458]: cali272fb8e5d5b: Gained IPv6LL Jan 28 01:19:38.304575 containerd[1849]: time="2026-01-28T01:19:38.303998404Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:19:38.306533 containerd[1849]: time="2026-01-28T01:19:38.306192367Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 28 01:19:38.307928 kubelet[3202]: E0128 01:19:38.307569 3202 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 28 01:19:38.307928 kubelet[3202]: E0128 01:19:38.307626 3202 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 28 01:19:38.308963 containerd[1849]: time="2026-01-28T01:19:38.308353212Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 28 01:19:38.311001 containerd[1849]: time="2026-01-28T01:19:38.310717440Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 28 01:19:38.311308 kubelet[3202]: E0128 01:19:38.310831 3202 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-544bz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-pcsbp_calico-system(cbd1002c-7263-43fc-8b17-789e41b44261): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 28 01:19:38.312537 kubelet[3202]: E0128 01:19:38.312506 3202 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-pcsbp" podUID="cbd1002c-7263-43fc-8b17-789e41b44261" Jan 28 01:19:38.366454 systemd-networkd[1458]: cali3d2bbab7507: Link UP Jan 28 01:19:38.367161 systemd-networkd[1458]: cali3d2bbab7507: Gained carrier Jan 
28 01:19:38.391368 containerd[1849]: 2026-01-28 01:19:38.253 [INFO][5286] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--31--26-k8s-coredns--674b8bbfcf--ggpx4-eth0 coredns-674b8bbfcf- kube-system 4f85ac2a-2c17-47d1-bfad-b39e838e4361 848 0 2026-01-28 01:18:52 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-31-26 coredns-674b8bbfcf-ggpx4 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali3d2bbab7507 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="6fe953f001253c201d90650a681ecdeae6041b8ca059254b52d28831bceaf8ba" Namespace="kube-system" Pod="coredns-674b8bbfcf-ggpx4" WorkloadEndpoint="ip--172--31--31--26-k8s-coredns--674b8bbfcf--ggpx4-" Jan 28 01:19:38.391368 containerd[1849]: 2026-01-28 01:19:38.253 [INFO][5286] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6fe953f001253c201d90650a681ecdeae6041b8ca059254b52d28831bceaf8ba" Namespace="kube-system" Pod="coredns-674b8bbfcf-ggpx4" WorkloadEndpoint="ip--172--31--31--26-k8s-coredns--674b8bbfcf--ggpx4-eth0" Jan 28 01:19:38.391368 containerd[1849]: 2026-01-28 01:19:38.286 [INFO][5298] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6fe953f001253c201d90650a681ecdeae6041b8ca059254b52d28831bceaf8ba" HandleID="k8s-pod-network.6fe953f001253c201d90650a681ecdeae6041b8ca059254b52d28831bceaf8ba" Workload="ip--172--31--31--26-k8s-coredns--674b8bbfcf--ggpx4-eth0" Jan 28 01:19:38.391368 containerd[1849]: 2026-01-28 01:19:38.286 [INFO][5298] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="6fe953f001253c201d90650a681ecdeae6041b8ca059254b52d28831bceaf8ba" HandleID="k8s-pod-network.6fe953f001253c201d90650a681ecdeae6041b8ca059254b52d28831bceaf8ba" Workload="ip--172--31--31--26-k8s-coredns--674b8bbfcf--ggpx4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ad3a0), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-31-26", "pod":"coredns-674b8bbfcf-ggpx4", "timestamp":"2026-01-28 01:19:38.286536817 +0000 UTC"}, Hostname:"ip-172-31-31-26", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 28 01:19:38.391368 containerd[1849]: 2026-01-28 01:19:38.286 [INFO][5298] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:19:38.391368 containerd[1849]: 2026-01-28 01:19:38.286 [INFO][5298] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 28 01:19:38.391368 containerd[1849]: 2026-01-28 01:19:38.286 [INFO][5298] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-31-26' Jan 28 01:19:38.391368 containerd[1849]: 2026-01-28 01:19:38.301 [INFO][5298] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6fe953f001253c201d90650a681ecdeae6041b8ca059254b52d28831bceaf8ba" host="ip-172-31-31-26" Jan 28 01:19:38.391368 containerd[1849]: 2026-01-28 01:19:38.312 [INFO][5298] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-31-26" Jan 28 01:19:38.391368 containerd[1849]: 2026-01-28 01:19:38.321 [INFO][5298] ipam/ipam.go 511: Trying affinity for 192.168.24.0/26 host="ip-172-31-31-26" Jan 28 01:19:38.391368 containerd[1849]: 2026-01-28 01:19:38.325 [INFO][5298] ipam/ipam.go 158: Attempting to load block cidr=192.168.24.0/26 host="ip-172-31-31-26" Jan 28 01:19:38.391368 containerd[1849]: 2026-01-28 01:19:38.329 [INFO][5298] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.24.0/26 host="ip-172-31-31-26" Jan 28 01:19:38.391368 containerd[1849]: 2026-01-28 01:19:38.329 [INFO][5298] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.24.0/26 handle="k8s-pod-network.6fe953f001253c201d90650a681ecdeae6041b8ca059254b52d28831bceaf8ba" host="ip-172-31-31-26" Jan 28 01:19:38.391368 containerd[1849]: 2026-01-28 01:19:38.332 [INFO][5298] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.6fe953f001253c201d90650a681ecdeae6041b8ca059254b52d28831bceaf8ba Jan 28 01:19:38.391368 containerd[1849]: 2026-01-28 01:19:38.337 [INFO][5298] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.24.0/26 handle="k8s-pod-network.6fe953f001253c201d90650a681ecdeae6041b8ca059254b52d28831bceaf8ba" host="ip-172-31-31-26" Jan 28 01:19:38.391368 containerd[1849]: 2026-01-28 01:19:38.353 [INFO][5298] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.24.9/26] block=192.168.24.0/26 handle="k8s-pod-network.6fe953f001253c201d90650a681ecdeae6041b8ca059254b52d28831bceaf8ba" host="ip-172-31-31-26" Jan 28 01:19:38.391368 containerd[1849]: 2026-01-28 01:19:38.353 [INFO][5298] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.24.9/26] handle="k8s-pod-network.6fe953f001253c201d90650a681ecdeae6041b8ca059254b52d28831bceaf8ba" host="ip-172-31-31-26" Jan 28 01:19:38.391368 containerd[1849]: 2026-01-28 01:19:38.353 [INFO][5298] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 28 01:19:38.391368 containerd[1849]: 2026-01-28 01:19:38.353 [INFO][5298] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.24.9/26] IPv6=[] ContainerID="6fe953f001253c201d90650a681ecdeae6041b8ca059254b52d28831bceaf8ba" HandleID="k8s-pod-network.6fe953f001253c201d90650a681ecdeae6041b8ca059254b52d28831bceaf8ba" Workload="ip--172--31--31--26-k8s-coredns--674b8bbfcf--ggpx4-eth0" Jan 28 01:19:38.392376 containerd[1849]: 2026-01-28 01:19:38.358 [INFO][5286] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6fe953f001253c201d90650a681ecdeae6041b8ca059254b52d28831bceaf8ba" Namespace="kube-system" Pod="coredns-674b8bbfcf-ggpx4" WorkloadEndpoint="ip--172--31--31--26-k8s-coredns--674b8bbfcf--ggpx4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--26-k8s-coredns--674b8bbfcf--ggpx4-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"4f85ac2a-2c17-47d1-bfad-b39e838e4361", ResourceVersion:"848", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 18, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-26", ContainerID:"", Pod:"coredns-674b8bbfcf-ggpx4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.24.9/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3d2bbab7507", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:19:38.392376 containerd[1849]: 2026-01-28 01:19:38.359 [INFO][5286] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.24.9/32] ContainerID="6fe953f001253c201d90650a681ecdeae6041b8ca059254b52d28831bceaf8ba" Namespace="kube-system" Pod="coredns-674b8bbfcf-ggpx4" WorkloadEndpoint="ip--172--31--31--26-k8s-coredns--674b8bbfcf--ggpx4-eth0" Jan 28 01:19:38.392376 containerd[1849]: 2026-01-28 01:19:38.359 [INFO][5286] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3d2bbab7507 ContainerID="6fe953f001253c201d90650a681ecdeae6041b8ca059254b52d28831bceaf8ba" Namespace="kube-system" Pod="coredns-674b8bbfcf-ggpx4" WorkloadEndpoint="ip--172--31--31--26-k8s-coredns--674b8bbfcf--ggpx4-eth0" Jan 28 01:19:38.392376 containerd[1849]: 2026-01-28 01:19:38.368 [INFO][5286] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6fe953f001253c201d90650a681ecdeae6041b8ca059254b52d28831bceaf8ba" Namespace="kube-system" Pod="coredns-674b8bbfcf-ggpx4" 
WorkloadEndpoint="ip--172--31--31--26-k8s-coredns--674b8bbfcf--ggpx4-eth0" Jan 28 01:19:38.392376 containerd[1849]: 2026-01-28 01:19:38.368 [INFO][5286] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6fe953f001253c201d90650a681ecdeae6041b8ca059254b52d28831bceaf8ba" Namespace="kube-system" Pod="coredns-674b8bbfcf-ggpx4" WorkloadEndpoint="ip--172--31--31--26-k8s-coredns--674b8bbfcf--ggpx4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--26-k8s-coredns--674b8bbfcf--ggpx4-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"4f85ac2a-2c17-47d1-bfad-b39e838e4361", ResourceVersion:"848", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 18, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-26", ContainerID:"6fe953f001253c201d90650a681ecdeae6041b8ca059254b52d28831bceaf8ba", Pod:"coredns-674b8bbfcf-ggpx4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.24.9/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3d2bbab7507", MAC:"da:eb:da:6a:a9:c9", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:19:38.392376 containerd[1849]: 2026-01-28 01:19:38.385 [INFO][5286] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6fe953f001253c201d90650a681ecdeae6041b8ca059254b52d28831bceaf8ba" Namespace="kube-system" Pod="coredns-674b8bbfcf-ggpx4" WorkloadEndpoint="ip--172--31--31--26-k8s-coredns--674b8bbfcf--ggpx4-eth0" Jan 28 01:19:38.420000 audit[5313]: NETFILTER_CFG table=filter:138 family=2 entries=62 op=nft_register_chain pid=5313 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 28 01:19:38.420000 audit[5313]: SYSCALL arch=c000003e syscall=46 success=yes exit=27932 a0=3 a1=7ffd3aa5fe60 a2=0 a3=7ffd3aa5fe4c items=0 ppid=4484 pid=5313 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:38.420000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 28 01:19:38.436996 containerd[1849]: time="2026-01-28T01:19:38.436856228Z" level=info msg="connecting to shim 
6fe953f001253c201d90650a681ecdeae6041b8ca059254b52d28831bceaf8ba" address="unix:///run/containerd/s/e95990f7083d9d1550fbfa8546392af4650f8a74fc262295487fbbd41cf20a38" namespace=k8s.io protocol=ttrpc version=3 Jan 28 01:19:38.477599 systemd[1]: Started cri-containerd-6fe953f001253c201d90650a681ecdeae6041b8ca059254b52d28831bceaf8ba.scope - libcontainer container 6fe953f001253c201d90650a681ecdeae6041b8ca059254b52d28831bceaf8ba. Jan 28 01:19:38.518046 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2706964579.mount: Deactivated successfully. Jan 28 01:19:38.535000 audit: BPF prog-id=258 op=LOAD Jan 28 01:19:38.536000 audit: BPF prog-id=259 op=LOAD Jan 28 01:19:38.536000 audit[5333]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=5322 pid=5333 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:38.536000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3666653935336630303132353363323031643930363530613638316563 Jan 28 01:19:38.537000 audit: BPF prog-id=259 op=UNLOAD Jan 28 01:19:38.537000 audit[5333]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5322 pid=5333 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:38.537000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3666653935336630303132353363323031643930363530613638316563 Jan 28 01:19:38.537000 audit: BPF prog-id=260 op=LOAD Jan 28 01:19:38.537000 audit[5333]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=5322 pid=5333 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:38.537000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3666653935336630303132353363323031643930363530613638316563 Jan 28 01:19:38.537000 audit: BPF prog-id=261 op=LOAD Jan 28 01:19:38.537000 audit[5333]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=5322 pid=5333 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:38.537000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3666653935336630303132353363323031643930363530613638316563 Jan 28 01:19:38.537000 audit: BPF prog-id=261 op=UNLOAD Jan 28 01:19:38.537000 audit[5333]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5322 pid=5333 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:38.537000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3666653935336630303132353363323031643930363530613638316563 Jan 28 01:19:38.537000 audit: BPF prog-id=260 op=UNLOAD Jan 28 01:19:38.537000 audit[5333]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5322 pid=5333 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:38.537000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3666653935336630303132353363323031643930363530613638316563 Jan 28 01:19:38.537000 audit: BPF prog-id=262 op=LOAD Jan 28 01:19:38.537000 audit[5333]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=5322 pid=5333 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:38.537000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3666653935336630303132353363323031643930363530613638316563 Jan 28 01:19:38.596338 containerd[1849]: time="2026-01-28T01:19:38.596298867Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:19:38.599721 containerd[1849]: time="2026-01-28T01:19:38.598893178Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-ggpx4,Uid:4f85ac2a-2c17-47d1-bfad-b39e838e4361,Namespace:kube-system,Attempt:0,} returns sandbox id \"6fe953f001253c201d90650a681ecdeae6041b8ca059254b52d28831bceaf8ba\"" Jan 28 01:19:38.599721 containerd[1849]: time="2026-01-28T01:19:38.599029512Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 28 01:19:38.599721 containerd[1849]: time="2026-01-28T01:19:38.599208751Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 28 01:19:38.601324 kubelet[3202]: E0128 01:19:38.600110 3202 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:19:38.601324 kubelet[3202]: E0128 01:19:38.600701 3202 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 
28 01:19:38.602377 kubelet[3202]: E0128 01:19:38.601633 3202 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5pfgn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-597546f57d-dtd92_calico-apiserver(77c5b13e-b6c7-4da4-8d69-cf95701836c8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 28 01:19:38.603013 kubelet[3202]: E0128 01:19:38.602881 3202 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-597546f57d-dtd92" podUID="77c5b13e-b6c7-4da4-8d69-cf95701836c8" Jan 28 01:19:38.613003 containerd[1849]: time="2026-01-28T01:19:38.612900905Z" level=info msg="CreateContainer within sandbox \"6fe953f001253c201d90650a681ecdeae6041b8ca059254b52d28831bceaf8ba\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 28 01:19:38.646791 containerd[1849]: time="2026-01-28T01:19:38.644929620Z" level=info msg="Container 91f4f89e3719f3c3f4efeb8c42cb331dc6006d6c3feb8685a3c96ef19749902f: CDI devices from CRI Config.CDIDevices: []" Jan 28 01:19:38.651066 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2023654935.mount: Deactivated 
successfully. Jan 28 01:19:38.665395 containerd[1849]: time="2026-01-28T01:19:38.665340913Z" level=info msg="CreateContainer within sandbox \"6fe953f001253c201d90650a681ecdeae6041b8ca059254b52d28831bceaf8ba\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"91f4f89e3719f3c3f4efeb8c42cb331dc6006d6c3feb8685a3c96ef19749902f\"" Jan 28 01:19:38.667805 containerd[1849]: time="2026-01-28T01:19:38.667604386Z" level=info msg="StartContainer for \"91f4f89e3719f3c3f4efeb8c42cb331dc6006d6c3feb8685a3c96ef19749902f\"" Jan 28 01:19:38.669322 containerd[1849]: time="2026-01-28T01:19:38.669223533Z" level=info msg="connecting to shim 91f4f89e3719f3c3f4efeb8c42cb331dc6006d6c3feb8685a3c96ef19749902f" address="unix:///run/containerd/s/e95990f7083d9d1550fbfa8546392af4650f8a74fc262295487fbbd41cf20a38" protocol=ttrpc version=3 Jan 28 01:19:38.690343 kubelet[3202]: E0128 01:19:38.690224 3202 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-pcsbp" podUID="cbd1002c-7263-43fc-8b17-789e41b44261" Jan 28 01:19:38.694074 kubelet[3202]: E0128 01:19:38.694003 3202 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-597546f57d-dtd92" podUID="77c5b13e-b6c7-4da4-8d69-cf95701836c8" Jan 28 01:19:38.719008 systemd[1]: Started cri-containerd-91f4f89e3719f3c3f4efeb8c42cb331dc6006d6c3feb8685a3c96ef19749902f.scope - libcontainer container 91f4f89e3719f3c3f4efeb8c42cb331dc6006d6c3feb8685a3c96ef19749902f. 
Jan 28 01:19:38.772000 audit: BPF prog-id=263 op=LOAD Jan 28 01:19:38.774000 audit: BPF prog-id=264 op=LOAD Jan 28 01:19:38.774000 audit[5363]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=5322 pid=5363 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:38.774000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3931663466383965333731396633633366346566656238633432636233 Jan 28 01:19:38.774000 audit: BPF prog-id=264 op=UNLOAD Jan 28 01:19:38.774000 audit[5363]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5322 pid=5363 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:38.774000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3931663466383965333731396633633366346566656238633432636233 Jan 28 01:19:38.774000 audit: BPF prog-id=265 op=LOAD Jan 28 01:19:38.774000 audit[5363]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=5322 pid=5363 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:38.774000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3931663466383965333731396633633366346566656238633432636233 Jan 28 01:19:38.774000 audit: BPF prog-id=266 op=LOAD Jan 28 01:19:38.774000 audit[5363]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=5322 pid=5363 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:38.774000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3931663466383965333731396633633366346566656238633432636233 Jan 28 01:19:38.774000 audit: BPF prog-id=266 op=UNLOAD Jan 28 01:19:38.774000 audit[5363]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5322 pid=5363 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:38.774000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3931663466383965333731396633633366346566656238633432636233 Jan 28 01:19:38.774000 audit: BPF prog-id=265 op=UNLOAD Jan 28 01:19:38.774000 audit[5363]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5322 pid=5363 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:38.774000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3931663466383965333731396633633366346566656238633432636233 Jan 28 01:19:38.774000 audit: BPF prog-id=267 op=LOAD Jan 28 01:19:38.774000 audit[5363]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=5322 pid=5363 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:38.774000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3931663466383965333731396633633366346566656238633432636233 Jan 28 01:19:38.830655 containerd[1849]: time="2026-01-28T01:19:38.830365706Z" level=info msg="StartContainer for \"91f4f89e3719f3c3f4efeb8c42cb331dc6006d6c3feb8685a3c96ef19749902f\" returns successfully" Jan 28 01:19:38.851000 audit[5384]: NETFILTER_CFG table=filter:139 family=2 entries=20 op=nft_register_rule pid=5384 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:19:38.851000 audit[5384]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7fffd5f50d90 a2=0 a3=7fffd5f50d7c items=0 ppid=3451 pid=5384 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:38.851000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:19:38.863778 kubelet[3202]: I0128 01:19:38.863453 3202 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-xwxpv" podStartSLOduration=46.832022281 podStartE2EDuration="46.832022281s" podCreationTimestamp="2026-01-28 01:18:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 01:19:38.8101724 +0000 UTC m=+52.828059401" watchObservedRunningTime="2026-01-28 01:19:38.832022281 +0000 UTC m=+52.849909270" Jan 28 01:19:38.863000 audit[5384]: NETFILTER_CFG table=nat:140 family=2 entries=14 op=nft_register_rule pid=5384 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:19:38.863000 audit[5384]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7fffd5f50d90 a2=0 a3=0 items=0 ppid=3451 pid=5384 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:38.863000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:19:38.886000 audit[5396]: NETFILTER_CFG table=filter:141 family=2 entries=17 op=nft_register_rule pid=5396 subj=system_u:system_r:kernel_t:s0 
comm="iptables-restor" Jan 28 01:19:38.886000 audit[5396]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffcb9212b40 a2=0 a3=7ffcb9212b2c items=0 ppid=3451 pid=5396 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:38.886000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:19:38.891000 audit[5396]: NETFILTER_CFG table=nat:142 family=2 entries=35 op=nft_register_chain pid=5396 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:19:38.891000 audit[5396]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffcb9212b40 a2=0 a3=7ffcb9212b2c items=0 ppid=3451 pid=5396 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:38.891000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:19:39.076068 systemd-networkd[1458]: cali243e00c19ab: Gained IPv6LL Jan 28 01:19:39.139924 systemd-networkd[1458]: calid8fab81d167: Gained IPv6LL Jan 28 01:19:39.267957 systemd-networkd[1458]: cali43c08ede274: Gained IPv6LL Jan 28 01:19:39.710972 kubelet[3202]: E0128 01:19:39.710731 3202 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-pcsbp" podUID="cbd1002c-7263-43fc-8b17-789e41b44261" Jan 28 01:19:39.711406 kubelet[3202]: E0128 01:19:39.711081 3202 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-597546f57d-dtd92" podUID="77c5b13e-b6c7-4da4-8d69-cf95701836c8" Jan 28 01:19:39.760974 kubelet[3202]: I0128 01:19:39.760840 3202 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-ggpx4" podStartSLOduration=47.760824382 podStartE2EDuration="47.760824382s" podCreationTimestamp="2026-01-28 01:18:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 01:19:39.733599295 +0000 UTC m=+53.751486284" watchObservedRunningTime="2026-01-28 01:19:39.760824382 +0000 UTC m=+53.778711434" Jan 28 01:19:39.824000 audit[5407]: NETFILTER_CFG table=filter:143 family=2 entries=14 op=nft_register_rule pid=5407 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:19:39.824000 audit[5407]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffcd80b9fe0 a2=0 a3=7ffcd80b9fcc items=0 ppid=3451 pid=5407 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:39.824000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:19:39.868000 audit[5407]: NETFILTER_CFG table=nat:144 family=2 entries=56 op=nft_register_chain pid=5407 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:19:39.868000 audit[5407]: SYSCALL arch=c000003e syscall=46 success=yes exit=19860 a0=3 a1=7ffcd80b9fe0 a2=0 a3=7ffcd80b9fcc items=0 ppid=3451 pid=5407 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:39.868000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:19:40.293934 systemd-networkd[1458]: cali3d2bbab7507: Gained IPv6LL Jan 28 01:19:42.948056 ntpd[1823]: Listen normally on 6 vxlan.calico 192.168.24.0:123 Jan 28 01:19:42.949568 ntpd[1823]: 28 Jan 01:19:42 ntpd[1823]: Listen normally on 6 vxlan.calico 192.168.24.0:123 Jan 28 01:19:42.949568 ntpd[1823]: 28 Jan 01:19:42 ntpd[1823]: Listen normally on 7 cali3055b20ed95 [fe80::ecee:eeff:feee:eeee%4]:123 Jan 28 01:19:42.949568 ntpd[1823]: 28 Jan 01:19:42 ntpd[1823]: Listen normally on 8 vxlan.calico [fe80::64ad:3eff:fed2:1613%5]:123 Jan 28 01:19:42.949568 ntpd[1823]: 28 Jan 01:19:42 ntpd[1823]: Listen normally on 9 calif960986ffda [fe80::ecee:eeff:feee:eeee%8]:123 Jan 28 01:19:42.949568 ntpd[1823]: 28 Jan 01:19:42 ntpd[1823]: Listen normally on 10 cali8ece591c57f [fe80::ecee:eeff:feee:eeee%9]:123 Jan 28 01:19:42.949568 ntpd[1823]: 28 Jan 01:19:42 ntpd[1823]: Listen normally on 11 cali97e1ca024cd [fe80::ecee:eeff:feee:eeee%10]:123 Jan 28 01:19:42.949568 ntpd[1823]: 28 Jan 01:19:42 ntpd[1823]: Listen normally on 12 cali272fb8e5d5b [fe80::ecee:eeff:feee:eeee%11]:123 Jan 28 01:19:42.949568 ntpd[1823]: 28 Jan 01:19:42 ntpd[1823]: Listen normally on 13 cali243e00c19ab [fe80::ecee:eeff:feee:eeee%12]:123 Jan 28 01:19:42.949568 ntpd[1823]: 28 Jan 01:19:42 ntpd[1823]: Listen normally on 14 cali43c08ede274 [fe80::ecee:eeff:feee:eeee%13]:123 Jan 28 01:19:42.949568 ntpd[1823]: 28 Jan 01:19:42 ntpd[1823]: Listen normally on 15 calid8fab81d167 [fe80::ecee:eeff:feee:eeee%14]:123 Jan 28 01:19:42.949568 ntpd[1823]: 28 Jan 01:19:42 ntpd[1823]: Listen normally on 16 cali3d2bbab7507 [fe80::ecee:eeff:feee:eeee%15]:123 Jan 28 01:19:42.948121 ntpd[1823]: Listen normally on 7 cali3055b20ed95 [fe80::ecee:eeff:feee:eeee%4]:123 Jan 28 01:19:42.948147 ntpd[1823]: Listen normally on 8 vxlan.calico [fe80::64ad:3eff:fed2:1613%5]:123 Jan 28 01:19:42.948167 ntpd[1823]: Listen normally on 9 calif960986ffda [fe80::ecee:eeff:feee:eeee%8]:123 Jan 28 01:19:42.948184 ntpd[1823]: Listen normally on 10 cali8ece591c57f [fe80::ecee:eeff:feee:eeee%9]:123 Jan 28 01:19:42.948202 ntpd[1823]: Listen normally on 11 cali97e1ca024cd [fe80::ecee:eeff:feee:eeee%10]:123 Jan 28 01:19:42.948222 ntpd[1823]: Listen normally on 12 cali272fb8e5d5b [fe80::ecee:eeff:feee:eeee%11]:123 Jan 28 01:19:42.948241 ntpd[1823]: Listen normally on 13 cali243e00c19ab [fe80::ecee:eeff:feee:eeee%12]:123 Jan 28 01:19:42.948268 ntpd[1823]: Listen normally on 14 cali43c08ede274 [fe80::ecee:eeff:feee:eeee%13]:123 Jan 28 01:19:42.948287 ntpd[1823]: Listen normally on 15 
calid8fab81d167 [fe80::ecee:eeff:feee:eeee%14]:123 Jan 28 01:19:42.948304 ntpd[1823]: Listen normally on 16 cali3d2bbab7507 [fe80::ecee:eeff:feee:eeee%15]:123 Jan 28 01:19:49.207918 containerd[1849]: time="2026-01-28T01:19:49.207880999Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 28 01:19:49.476533 containerd[1849]: time="2026-01-28T01:19:49.476408279Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:19:49.478589 containerd[1849]: time="2026-01-28T01:19:49.478542143Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 28 01:19:49.478739 containerd[1849]: time="2026-01-28T01:19:49.478561481Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 28 01:19:49.478822 kubelet[3202]: E0128 01:19:49.478791 3202 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 28 01:19:49.479123 kubelet[3202]: E0128 01:19:49.478835 3202 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 28 01:19:49.479123 kubelet[3202]: E0128 01:19:49.478982 3202 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4bxdn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-5fjx2_calico-system(6c6224c1-45a4-4e67-9483-34412dd5913e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 28 01:19:49.481201 containerd[1849]: time="2026-01-28T01:19:49.481136897Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 28 01:19:49.740802 containerd[1849]: time="2026-01-28T01:19:49.740586433Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:19:49.742857 containerd[1849]: time="2026-01-28T01:19:49.742809651Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 28 01:19:49.742857 containerd[1849]: time="2026-01-28T01:19:49.742890088Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 28 01:19:49.743099 kubelet[3202]: E0128 01:19:49.743047 3202 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 28 01:19:49.743175 kubelet[3202]: E0128 01:19:49.743121 3202 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": 
failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 28 01:19:49.743586 kubelet[3202]: E0128 01:19:49.743267 3202 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4bxdn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-5fjx2_calico-system(6c6224c1-45a4-4e67-9483-34412dd5913e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 28 01:19:49.745177 kubelet[3202]: E0128 01:19:49.745038 3202 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-5fjx2" podUID="6c6224c1-45a4-4e67-9483-34412dd5913e" Jan 28 01:19:50.204135 containerd[1849]: time="2026-01-28T01:19:50.204087443Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 28 01:19:50.488855 containerd[1849]: time="2026-01-28T01:19:50.488721615Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io 
Jan 28 01:19:50.490893 containerd[1849]: time="2026-01-28T01:19:50.490841618Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 28 01:19:50.491015 containerd[1849]: time="2026-01-28T01:19:50.490931950Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 28 01:19:50.491174 kubelet[3202]: E0128 01:19:50.491100 3202 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 28 01:19:50.491174 kubelet[3202]: E0128 01:19:50.491167 3202 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 28 01:19:50.491691 kubelet[3202]: E0128 01:19:50.491430 3202 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2875x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-dbbbff994-sqgh4_calico-system(c988f11c-6f16-4100-9308-ea1983457126): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 28 01:19:50.492310 containerd[1849]: time="2026-01-28T01:19:50.492271721Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 28 01:19:50.492649 kubelet[3202]: E0128 01:19:50.492606 3202 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-dbbbff994-sqgh4" podUID="c988f11c-6f16-4100-9308-ea1983457126" Jan 28 01:19:50.769680 containerd[1849]: time="2026-01-28T01:19:50.769511445Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:19:50.771689 containerd[1849]: time="2026-01-28T01:19:50.771611688Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 28 01:19:50.771794 containerd[1849]: time="2026-01-28T01:19:50.771716281Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 28 01:19:50.771980 kubelet[3202]: E0128 01:19:50.771951 3202 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 28 01:19:50.772032 kubelet[3202]: E0128 01:19:50.772002 3202 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 28 01:19:50.772151 kubelet[3202]: E0128 01:19:50.772101 3202 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:e0b05c5628ca455b907d3d9bf2c70df6,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6pdxw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-799d7c8d49-2vrfp_calico-system(9accd949-c4fa-4ce7-b3a8-524deb448a06): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 28 01:19:50.775078 containerd[1849]: time="2026-01-28T01:19:50.775038494Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 28 01:19:51.054721 containerd[1849]: time="2026-01-28T01:19:51.054588074Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:19:51.056673 containerd[1849]: time="2026-01-28T01:19:51.056623164Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 28 01:19:51.056794 containerd[1849]: time="2026-01-28T01:19:51.056710364Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 28 01:19:51.056929 kubelet[3202]: E0128 01:19:51.056894 3202 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 28 01:19:51.056983 kubelet[3202]: E0128 01:19:51.056938 3202 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 28 01:19:51.057087 kubelet[3202]: E0128 01:19:51.057047 3202 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6pdxw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-799d7c8d49-2vrfp_calico-system(9accd949-c4fa-4ce7-b3a8-524deb448a06): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 28 01:19:51.058565 kubelet[3202]: E0128 01:19:51.058503 3202 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-799d7c8d49-2vrfp" podUID="9accd949-c4fa-4ce7-b3a8-524deb448a06" Jan 28 01:19:51.204388 containerd[1849]: time="2026-01-28T01:19:51.204251229Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 28 01:19:51.594622 containerd[1849]: time="2026-01-28T01:19:51.594573961Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:19:51.596632 containerd[1849]: time="2026-01-28T01:19:51.596586025Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 
28 01:19:51.596793 containerd[1849]: time="2026-01-28T01:19:51.596609380Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 28 01:19:51.596858 kubelet[3202]: E0128 01:19:51.596818 3202 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:19:51.597257 kubelet[3202]: E0128 01:19:51.596866 3202 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:19:51.597257 kubelet[3202]: E0128 01:19:51.596986 3202 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pqxhp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-74bc45499-2znnx_calico-apiserver(d6acbfd4-88d6-4133-9434-3bfec2c327d4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 28 01:19:51.598483 kubelet[3202]: E0128 01:19:51.598446 3202 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with 
ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74bc45499-2znnx" podUID="d6acbfd4-88d6-4133-9434-3bfec2c327d4" Jan 28 01:19:51.864082 kernel: kauditd_printk_skb: 214 callbacks suppressed Jan 28 01:19:51.864205 kernel: audit: type=1130 audit(1769563191.856:769): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-172.31.31.26:22-68.220.241.50:60950 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:19:51.864254 kernel: hrtimer: interrupt took 367396 ns Jan 28 01:19:51.856000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-172.31.31.26:22-68.220.241.50:60950 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:19:51.856618 systemd[1]: Started sshd@7-172.31.31.26:22-68.220.241.50:60950.service - OpenSSH per-connection server daemon (68.220.241.50:60950). Jan 28 01:19:52.204379 containerd[1849]: time="2026-01-28T01:19:52.204117072Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 28 01:19:52.382000 audit[5426]: USER_ACCT pid=5426 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 28 01:19:52.410007 kernel: audit: type=1101 audit(1769563192.382:770): pid=5426 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 28 01:19:52.411964 kernel: audit: type=1103 audit(1769563192.388:771): pid=5426 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 28 01:19:52.412018 kernel: audit: type=1006 audit(1769563192.388:772): pid=5426 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=9 res=1 Jan 28 01:19:52.412052 kernel: audit: type=1300 audit(1769563192.388:772): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe1ebcc8a0 a2=3 a3=0 items=0 ppid=1 pid=5426 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:52.412095 kernel: audit: type=1327 audit(1769563192.388:772): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:19:52.388000 audit[5426]: CRED_ACQ pid=5426 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 28 01:19:52.444721 kernel: audit: type=1105 audit(1769563192.420:773): pid=5426 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open 
grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 28 01:19:52.444827 kernel: audit: type=1103 audit(1769563192.423:774): pid=5430 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 28 01:19:52.388000 audit[5426]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe1ebcc8a0 a2=3 a3=0 items=0 ppid=1 pid=5426 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:52.388000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:19:52.420000 audit[5426]: USER_START pid=5426 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 28 01:19:52.423000 audit[5430]: CRED_ACQ pid=5430 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 28 01:19:52.445065 sshd[5426]: Accepted publickey for core from 68.220.241.50 port 60950 ssh2: RSA SHA256:PpvjS6sxgjOf+voyr4NrS2kTF8aDF7ek5ziSVtOzP6U Jan 28 01:19:52.390861 sshd-session[5426]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:19:52.411386 systemd-logind[1833]: New session 9 of user core. Jan 28 01:19:52.417973 systemd[1]: Started session-9.scope - Session 9 of User core. 
Jan 28 01:19:52.457037 containerd[1849]: time="2026-01-28T01:19:52.456890587Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:19:52.459016 containerd[1849]: time="2026-01-28T01:19:52.458956298Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 28 01:19:52.459111 containerd[1849]: time="2026-01-28T01:19:52.458988277Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 28 01:19:52.459332 kubelet[3202]: E0128 01:19:52.459295 3202 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:19:52.459403 kubelet[3202]: E0128 01:19:52.459341 3202 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:19:52.459535 kubelet[3202]: E0128 01:19:52.459484 3202 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9vv22,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start 
failed in pod calico-apiserver-74bc45499-4hxx4_calico-apiserver(325cd625-25dd-4d22-8523-67e469e6d0e9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 28 01:19:52.460996 kubelet[3202]: E0128 01:19:52.460958 3202 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74bc45499-4hxx4" podUID="325cd625-25dd-4d22-8523-67e469e6d0e9" Jan 28 01:19:53.401968 sshd[5430]: Connection closed by 68.220.241.50 port 60950 Jan 28 01:19:53.402586 sshd-session[5426]: pam_unix(sshd:session): session closed for user core Jan 28 01:19:53.405000 audit[5426]: USER_END pid=5426 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 28 01:19:53.414894 kernel: audit: type=1106 audit(1769563193.405:775): pid=5426 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 28 01:19:53.414946 kernel: audit: type=1104 audit(1769563193.405:776): pid=5426 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 28 01:19:53.405000 audit[5426]: CRED_DISP pid=5426 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 28 01:19:53.408469 systemd[1]: sshd@7-172.31.31.26:22-68.220.241.50:60950.service: Deactivated successfully. Jan 28 01:19:53.411094 systemd[1]: session-9.scope: Deactivated successfully. Jan 28 01:19:53.407000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-172.31.31.26:22-68.220.241.50:60950 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:19:53.416937 systemd-logind[1833]: Session 9 logged out. Waiting for processes to exit. Jan 28 01:19:53.418686 systemd-logind[1833]: Removed session 9. 
Jan 28 01:19:54.204564 containerd[1849]: time="2026-01-28T01:19:54.204272058Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 28 01:19:54.491648 containerd[1849]: time="2026-01-28T01:19:54.491464355Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:19:54.495282 containerd[1849]: time="2026-01-28T01:19:54.495222391Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 28 01:19:54.495379 containerd[1849]: time="2026-01-28T01:19:54.495311994Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 28 01:19:54.495496 kubelet[3202]: E0128 01:19:54.495447 3202 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 28 01:19:54.495905 kubelet[3202]: E0128 01:19:54.495496 3202 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 28 01:19:54.495905 kubelet[3202]: E0128 01:19:54.495635 3202 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-544bz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-pcsbp_calico-system(cbd1002c-7263-43fc-8b17-789e41b44261): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 28 01:19:54.496962 kubelet[3202]: E0128 01:19:54.496903 3202 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-pcsbp" podUID="cbd1002c-7263-43fc-8b17-789e41b44261" Jan 28 01:19:55.203449 containerd[1849]: time="2026-01-28T01:19:55.203414170Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 28 01:19:55.632084 containerd[1849]: time="2026-01-28T01:19:55.631869846Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:19:55.634245 containerd[1849]: time="2026-01-28T01:19:55.634062800Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 28 01:19:55.634245 containerd[1849]: time="2026-01-28T01:19:55.634123251Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 28 01:19:55.634836 kubelet[3202]: E0128 01:19:55.634661 3202 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:19:55.634836 kubelet[3202]: E0128 01:19:55.634829 3202 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:19:55.635149 kubelet[3202]: E0128 01:19:55.634956 3202 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5pfgn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-597546f57d-dtd92_calico-apiserver(77c5b13e-b6c7-4da4-8d69-cf95701836c8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 28 01:19:55.636124 kubelet[3202]: E0128 01:19:55.636087 3202 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-597546f57d-dtd92" podUID="77c5b13e-b6c7-4da4-8d69-cf95701836c8" Jan 28 01:19:58.494928 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 28 01:19:58.495041 kernel: audit: type=1130 audit(1769563198.491:778): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-172.31.31.26:22-68.220.241.50:43992 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:19:58.491000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-172.31.31.26:22-68.220.241.50:43992 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:19:58.492271 systemd[1]: Started sshd@8-172.31.31.26:22-68.220.241.50:43992.service - OpenSSH per-connection server daemon (68.220.241.50:43992). 
Jan 28 01:19:58.947000 audit[5454]: USER_ACCT pid=5454 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 28 01:19:58.950691 sshd-session[5454]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:19:58.952237 sshd[5454]: Accepted publickey for core from 68.220.241.50 port 43992 ssh2: RSA SHA256:PpvjS6sxgjOf+voyr4NrS2kTF8aDF7ek5ziSVtOzP6U Jan 28 01:19:58.957713 kernel: audit: type=1101 audit(1769563198.947:779): pid=5454 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 28 01:19:58.957812 kernel: audit: type=1103 audit(1769563198.949:780): pid=5454 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 28 01:19:58.949000 audit[5454]: CRED_ACQ pid=5454 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 28 01:19:58.960414 systemd-logind[1833]: New session 10 of user core. Jan 28 01:19:58.949000 audit[5454]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc58b47700 a2=3 a3=0 items=0 ppid=1 pid=5454 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:58.964825 kernel: audit: type=1006 audit(1769563198.949:781): pid=5454 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1 Jan 28 01:19:58.964951 kernel: audit: type=1300 audit(1769563198.949:781): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc58b47700 a2=3 a3=0 items=0 ppid=1 pid=5454 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:19:58.949000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:19:58.970768 kernel: audit: type=1327 audit(1769563198.949:781): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:19:58.971995 systemd[1]: Started session-10.scope - Session 10 of User core. 
Jan 28 01:19:58.974000 audit[5454]: USER_START pid=5454 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 28 01:19:58.976000 audit[5458]: CRED_ACQ pid=5458 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 28 01:19:58.982057 kernel: audit: type=1105 audit(1769563198.974:782): pid=5454 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 28 01:19:58.982140 kernel: audit: type=1103 audit(1769563198.976:783): pid=5458 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 28 01:19:59.271441 sshd[5458]: Connection closed by 68.220.241.50 port 43992 Jan 28 01:19:59.272913 sshd-session[5454]: pam_unix(sshd:session): session closed for user core Jan 28 01:19:59.274000 audit[5454]: USER_END pid=5454 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 28 01:19:59.282882 kernel: audit: type=1106 audit(1769563199.274:784): pid=5454 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 28 01:19:59.274000 audit[5454]: CRED_DISP pid=5454 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 28 01:19:59.288778 kernel: audit: type=1104 audit(1769563199.274:785): pid=5454 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 28 01:19:59.289197 systemd[1]: sshd@8-172.31.31.26:22-68.220.241.50:43992.service: Deactivated successfully. Jan 28 01:19:59.288000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-172.31.31.26:22-68.220.241.50:43992 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:19:59.292045 systemd[1]: session-10.scope: Deactivated successfully. Jan 28 01:19:59.294059 systemd-logind[1833]: Session 10 logged out. Waiting for processes to exit. Jan 28 01:19:59.295624 systemd-logind[1833]: Removed session 10. 
Jan 28 01:20:02.210778 kubelet[3202]: E0128 01:20:02.210558 3202 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-799d7c8d49-2vrfp" podUID="9accd949-c4fa-4ce7-b3a8-524deb448a06" Jan 28 01:20:04.221124 kubelet[3202]: E0128 01:20:04.221085 3202 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74bc45499-4hxx4" podUID="325cd625-25dd-4d22-8523-67e469e6d0e9" Jan 28 01:20:04.357739 systemd[1]: Started sshd@9-172.31.31.26:22-68.220.241.50:60578.service - OpenSSH per-connection server daemon (68.220.241.50:60578). Jan 28 01:20:04.357000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-172.31.31.26:22-68.220.241.50:60578 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:20:04.359136 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 28 01:20:04.359232 kernel: audit: type=1130 audit(1769563204.357:787): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-172.31.31.26:22-68.220.241.50:60578 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:20:04.813000 audit[5495]: USER_ACCT pid=5495 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 28 01:20:04.814883 sshd[5495]: Accepted publickey for core from 68.220.241.50 port 60578 ssh2: RSA SHA256:PpvjS6sxgjOf+voyr4NrS2kTF8aDF7ek5ziSVtOzP6U Jan 28 01:20:04.817108 sshd-session[5495]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:20:04.820074 kernel: audit: type=1101 audit(1769563204.813:788): pid=5495 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 28 01:20:04.820149 kernel: audit: type=1103 audit(1769563204.815:789): pid=5495 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 28 01:20:04.815000 audit[5495]: CRED_ACQ pid=5495 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 28 01:20:04.827789 kernel: audit: type=1006 audit(1769563204.815:790): pid=5495 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1 Jan 28 01:20:04.827886 kernel: audit: type=1300 audit(1769563204.815:790): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffa0ce3640 a2=3 a3=0 items=0 ppid=1 pid=5495 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:20:04.815000 audit[5495]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffa0ce3640 a2=3 a3=0 items=0 ppid=1 pid=5495 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:20:04.835470 kernel: audit: type=1327 audit(1769563204.815:790): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:20:04.815000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:20:04.836584 systemd-logind[1833]: New session 11 of user core. Jan 28 01:20:04.845003 systemd[1]: Started session-11.scope - Session 11 of User core. 
Jan 28 01:20:04.847000 audit[5495]: USER_START pid=5495 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 28 01:20:04.850000 audit[5500]: CRED_ACQ pid=5500 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 28 01:20:04.855777 kernel: audit: type=1105 audit(1769563204.847:791): pid=5495 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 28 01:20:04.855864 kernel: audit: type=1103 audit(1769563204.850:792): pid=5500 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 28 01:20:05.203547 kubelet[3202]: E0128 01:20:05.203458 3202 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-pcsbp" podUID="cbd1002c-7263-43fc-8b17-789e41b44261" Jan 28 01:20:05.204608 kubelet[3202]: E0128 01:20:05.204560 3202 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-5fjx2" podUID="6c6224c1-45a4-4e67-9483-34412dd5913e" Jan 28 01:20:05.204730 kubelet[3202]: E0128 01:20:05.204638 3202 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74bc45499-2znnx" podUID="d6acbfd4-88d6-4133-9434-3bfec2c327d4" Jan 28 01:20:05.226734 sshd[5500]: Connection closed by 68.220.241.50 port 
60578 Jan 28 01:20:05.227938 sshd-session[5495]: pam_unix(sshd:session): session closed for user core Jan 28 01:20:05.228000 audit[5495]: USER_END pid=5495 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 28 01:20:05.235889 kernel: audit: type=1106 audit(1769563205.228:793): pid=5495 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 28 01:20:05.232427 systemd-logind[1833]: Session 11 logged out. Waiting for processes to exit. Jan 28 01:20:05.234206 systemd[1]: sshd@9-172.31.31.26:22-68.220.241.50:60578.service: Deactivated successfully. Jan 28 01:20:05.237697 systemd[1]: session-11.scope: Deactivated successfully. Jan 28 01:20:05.240368 systemd-logind[1833]: Removed session 11. Jan 28 01:20:05.228000 audit[5495]: CRED_DISP pid=5495 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 28 01:20:05.246777 kernel: audit: type=1104 audit(1769563205.228:794): pid=5495 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 28 01:20:05.230000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-172.31.31.26:22-68.220.241.50:60578 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:20:05.338071 systemd[1]: Started sshd@10-172.31.31.26:22-68.220.241.50:60584.service - OpenSSH per-connection server daemon (68.220.241.50:60584). Jan 28 01:20:05.337000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-172.31.31.26:22-68.220.241.50:60584 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:20:05.805000 audit[5513]: USER_ACCT pid=5513 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 28 01:20:05.806482 sshd[5513]: Accepted publickey for core from 68.220.241.50 port 60584 ssh2: RSA SHA256:PpvjS6sxgjOf+voyr4NrS2kTF8aDF7ek5ziSVtOzP6U Jan 28 01:20:05.806000 audit[5513]: CRED_ACQ pid=5513 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 28 01:20:05.806000 audit[5513]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffed5354c60 a2=3 a3=0 items=0 ppid=1 pid=5513 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:20:05.806000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:20:05.808295 sshd-session[5513]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:20:05.814052 systemd-logind[1833]: New session 12 of user core. Jan 28 01:20:05.820015 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 28 01:20:05.823000 audit[5513]: USER_START pid=5513 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 28 01:20:05.825000 audit[5518]: CRED_ACQ pid=5518 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 28 01:20:06.191698 sshd[5518]: Connection closed by 68.220.241.50 port 60584 Jan 28 01:20:06.193425 sshd-session[5513]: pam_unix(sshd:session): session closed for user core Jan 28 01:20:06.196000 audit[5513]: USER_END pid=5513 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 28 01:20:06.197000 audit[5513]: CRED_DISP pid=5513 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 28 01:20:06.203824 systemd-logind[1833]: Session 12 logged out. Waiting for processes to exit. Jan 28 01:20:06.204641 systemd[1]: sshd@10-172.31.31.26:22-68.220.241.50:60584.service: Deactivated successfully. Jan 28 01:20:06.206000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-172.31.31.26:22-68.220.241.50:60584 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:20:06.208629 kubelet[3202]: E0128 01:20:06.207209 3202 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-dbbbff994-sqgh4" podUID="c988f11c-6f16-4100-9308-ea1983457126" Jan 28 01:20:06.212191 systemd[1]: session-12.scope: Deactivated successfully. Jan 28 01:20:06.218929 systemd-logind[1833]: Removed session 12. Jan 28 01:20:06.282000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-172.31.31.26:22-68.220.241.50:60600 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:20:06.283187 systemd[1]: Started sshd@11-172.31.31.26:22-68.220.241.50:60600.service - OpenSSH per-connection server daemon (68.220.241.50:60600). Jan 28 01:20:06.770000 audit[5528]: USER_ACCT pid=5528 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 28 01:20:06.772041 sshd[5528]: Accepted publickey for core from 68.220.241.50 port 60600 ssh2: RSA SHA256:PpvjS6sxgjOf+voyr4NrS2kTF8aDF7ek5ziSVtOzP6U Jan 28 01:20:06.773000 audit[5528]: CRED_ACQ pid=5528 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 28 01:20:06.774000 audit[5528]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcd0a5d3b0 a2=3 a3=0 items=0 ppid=1 pid=5528 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:20:06.774000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:20:06.778238 sshd-session[5528]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:20:06.795634 systemd-logind[1833]: New session 13 of user core. Jan 28 01:20:06.803002 systemd[1]: Started session-13.scope - Session 13 of User core. 
Jan 28 01:20:06.815000 audit[5528]: USER_START pid=5528 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 28 01:20:06.818000 audit[5533]: CRED_ACQ pid=5533 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 28 01:20:07.141329 sshd[5533]: Connection closed by 68.220.241.50 port 60600 Jan 28 01:20:07.142241 sshd-session[5528]: pam_unix(sshd:session): session closed for user core Jan 28 01:20:07.142000 audit[5528]: USER_END pid=5528 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 28 01:20:07.143000 audit[5528]: CRED_DISP pid=5528 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 28 01:20:07.147699 systemd-logind[1833]: Session 13 logged out. Waiting for processes to exit. Jan 28 01:20:07.147000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-172.31.31.26:22-68.220.241.50:60600 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:20:07.147845 systemd[1]: sshd@11-172.31.31.26:22-68.220.241.50:60600.service: Deactivated successfully. Jan 28 01:20:07.150433 systemd[1]: session-13.scope: Deactivated successfully. Jan 28 01:20:07.159221 systemd-logind[1833]: Removed session 13. Jan 28 01:20:10.211527 kubelet[3202]: E0128 01:20:10.202946 3202 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-597546f57d-dtd92" podUID="77c5b13e-b6c7-4da4-8d69-cf95701836c8" Jan 28 01:20:12.225160 systemd[1]: Started sshd@12-172.31.31.26:22-68.220.241.50:60604.service - OpenSSH per-connection server daemon (68.220.241.50:60604). Jan 28 01:20:12.227231 kernel: kauditd_printk_skb: 23 callbacks suppressed Jan 28 01:20:12.227303 kernel: audit: type=1130 audit(1769563212.224:814): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-172.31.31.26:22-68.220.241.50:60604 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:20:12.224000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-172.31.31.26:22-68.220.241.50:60604 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:20:12.721000 audit[5552]: USER_ACCT pid=5552 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 28 01:20:12.728817 kernel: audit: type=1101 audit(1769563212.721:815): pid=5552 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 28 01:20:12.729520 sshd[5552]: Accepted publickey for core from 68.220.241.50 port 60604 ssh2: RSA SHA256:PpvjS6sxgjOf+voyr4NrS2kTF8aDF7ek5ziSVtOzP6U Jan 28 01:20:12.729000 audit[5552]: CRED_ACQ pid=5552 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 28 01:20:12.737295 kernel: audit: type=1103 audit(1769563212.729:816): pid=5552 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 28 01:20:12.731429 sshd-session[5552]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:20:12.741795 kernel: audit: type=1006 audit(1769563212.729:817): pid=5552 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=14 res=1 Jan 28 01:20:12.742982 kernel: audit: type=1300 audit(1769563212.729:817): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe5694e810 a2=3 a3=0 items=0 ppid=1 pid=5552 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:20:12.729000 audit[5552]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe5694e810 a2=3 a3=0 items=0 ppid=1 pid=5552 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:20:12.750496 kernel: audit: type=1327 audit(1769563212.729:817): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:20:12.729000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:20:12.756731 systemd-logind[1833]: New session 14 of user core. Jan 28 01:20:12.771294 systemd[1]: Started session-14.scope - Session 14 of User core. 
Jan 28 01:20:12.778000 audit[5552]: USER_START pid=5552 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 28 01:20:12.786775 kernel: audit: type=1105 audit(1769563212.778:818): pid=5552 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 28 01:20:12.788000 audit[5556]: CRED_ACQ pid=5556 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 28 01:20:12.794775 kernel: audit: type=1103 audit(1769563212.788:819): pid=5556 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 28 01:20:13.053084 sshd[5556]: Connection closed by 68.220.241.50 port 60604 Jan 28 01:20:13.055283 sshd-session[5552]: pam_unix(sshd:session): session closed for user core Jan 28 01:20:13.058000 audit[5552]: USER_END pid=5552 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 28 01:20:13.063000 audit[5552]: CRED_DISP pid=5552 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 28 01:20:13.067477 kernel: audit: type=1106 audit(1769563213.058:820): pid=5552 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 28 01:20:13.067549 kernel: audit: type=1104 audit(1769563213.063:821): pid=5552 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 28 01:20:13.068024 systemd[1]: sshd@12-172.31.31.26:22-68.220.241.50:60604.service: Deactivated successfully. Jan 28 01:20:13.067000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-172.31.31.26:22-68.220.241.50:60604 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:20:13.070647 systemd[1]: session-14.scope: Deactivated successfully. Jan 28 01:20:13.071828 systemd-logind[1833]: Session 14 logged out. Waiting for processes to exit. Jan 28 01:20:13.073561 systemd-logind[1833]: Removed session 14. 
Jan 28 01:20:13.203594 containerd[1849]: time="2026-01-28T01:20:13.203540870Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 28 01:20:13.469535 containerd[1849]: time="2026-01-28T01:20:13.469383057Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:20:13.471463 containerd[1849]: time="2026-01-28T01:20:13.471372135Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 28 01:20:13.471463 containerd[1849]: time="2026-01-28T01:20:13.471427954Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 28 01:20:13.471951 kubelet[3202]: E0128 01:20:13.471734 3202 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 28 01:20:13.471951 kubelet[3202]: E0128 01:20:13.471787 3202 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 28 01:20:13.471951 kubelet[3202]: E0128 01:20:13.471908 3202 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:e0b05c5628ca455b907d3d9bf2c70df6,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6pdxw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-799d7c8d49-2vrfp_calico-system(9accd949-c4fa-4ce7-b3a8-524deb448a06): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 28 01:20:13.474835 containerd[1849]: time="2026-01-28T01:20:13.474813960Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 28 01:20:13.762989 containerd[1849]: time="2026-01-28T01:20:13.762610198Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:20:13.765021 containerd[1849]: time="2026-01-28T01:20:13.764910102Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 28 01:20:13.765021 containerd[1849]: time="2026-01-28T01:20:13.764998987Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 28 01:20:13.765201 kubelet[3202]: E0128 01:20:13.765151 3202 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 28 01:20:13.765201 kubelet[3202]: E0128 01:20:13.765196 3202 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 28 01:20:13.765336 kubelet[3202]: E0128 01:20:13.765306 3202 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6pdxw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-799d7c8d49-2vrfp_calico-system(9accd949-c4fa-4ce7-b3a8-524deb448a06): 
ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 28 01:20:13.766487 kubelet[3202]: E0128 01:20:13.766436 3202 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-799d7c8d49-2vrfp" podUID="9accd949-c4fa-4ce7-b3a8-524deb448a06" Jan 28 01:20:16.204843 containerd[1849]: time="2026-01-28T01:20:16.204803082Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 28 01:20:16.517255 containerd[1849]: time="2026-01-28T01:20:16.517113145Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:20:16.519487 containerd[1849]: time="2026-01-28T01:20:16.519427259Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 28 01:20:16.519632 containerd[1849]: time="2026-01-28T01:20:16.519446753Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 28 01:20:16.519838 kubelet[3202]: E0128 01:20:16.519789 3202 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:20:16.520740 kubelet[3202]: E0128 01:20:16.519850 3202 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:20:16.520740 kubelet[3202]: E0128 01:20:16.520037 3202 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9vv22,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-74bc45499-4hxx4_calico-apiserver(325cd625-25dd-4d22-8523-67e469e6d0e9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 28 01:20:16.521306 kubelet[3202]: E0128 01:20:16.521269 3202 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74bc45499-4hxx4" podUID="325cd625-25dd-4d22-8523-67e469e6d0e9" Jan 28 01:20:18.149588 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 28 01:20:18.151958 kernel: audit: type=1130 audit(1769563218.144:823): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-172.31.31.26:22-68.220.241.50:41360 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:20:18.144000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-172.31.31.26:22-68.220.241.50:41360 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:20:18.144946 systemd[1]: Started sshd@13-172.31.31.26:22-68.220.241.50:41360.service - OpenSSH per-connection server daemon (68.220.241.50:41360). 
Jan 28 01:20:18.207952 containerd[1849]: time="2026-01-28T01:20:18.207082338Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 28 01:20:18.548103 containerd[1849]: time="2026-01-28T01:20:18.548054035Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:20:18.550211 containerd[1849]: time="2026-01-28T01:20:18.550157064Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 28 01:20:18.550398 containerd[1849]: time="2026-01-28T01:20:18.550187091Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 28 01:20:18.550434 kubelet[3202]: E0128 01:20:18.550387 3202 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:20:18.550709 kubelet[3202]: E0128 01:20:18.550432 3202 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:20:18.550709 kubelet[3202]: E0128 01:20:18.550560 3202 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pqxhp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-74bc45499-2znnx_calico-apiserver(d6acbfd4-88d6-4133-9434-3bfec2c327d4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 28 01:20:18.551982 kubelet[3202]: E0128 01:20:18.551916 3202 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74bc45499-2znnx" podUID="d6acbfd4-88d6-4133-9434-3bfec2c327d4" Jan 28 01:20:18.613000 audit[5574]: USER_ACCT pid=5574 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 28 01:20:18.620733 sshd[5574]: Accepted publickey for core from 68.220.241.50 port 41360 ssh2: RSA SHA256:PpvjS6sxgjOf+voyr4NrS2kTF8aDF7ek5ziSVtOzP6U Jan 28 01:20:18.621177 kernel: audit: type=1101 audit(1769563218.613:824): pid=5574 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 28 01:20:18.615000 audit[5574]: CRED_ACQ pid=5574 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 28 01:20:18.626949 kernel: audit: type=1103 audit(1769563218.615:825): pid=5574 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 28 01:20:18.621616 sshd-session[5574]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:20:18.637540 kernel: audit: type=1006 audit(1769563218.615:826): pid=5574 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=15 res=1 Jan 28 01:20:18.637678 kernel: audit: type=1300 audit(1769563218.615:826): arch=c000003e syscall=1 success=yes exit=3 a0=8 
a1=7ffe3a8cd4f0 a2=3 a3=0 items=0 ppid=1 pid=5574 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:20:18.615000 audit[5574]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe3a8cd4f0 a2=3 a3=0 items=0 ppid=1 pid=5574 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:20:18.615000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:20:18.640789 kernel: audit: type=1327 audit(1769563218.615:826): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:20:18.647649 systemd-logind[1833]: New session 15 of user core. Jan 28 01:20:18.654002 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 28 01:20:18.656000 audit[5574]: USER_START pid=5574 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 28 01:20:18.667359 kernel: audit: type=1105 audit(1769563218.656:827): pid=5574 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 28 01:20:18.667462 kernel: audit: type=1103 audit(1769563218.657:828): pid=5580 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 28 01:20:18.657000 audit[5580]: CRED_ACQ pid=5580 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 28 01:20:18.945955 sshd[5580]: Connection closed by 68.220.241.50 port 41360 Jan 28 01:20:18.946835 sshd-session[5574]: pam_unix(sshd:session): session closed for user core Jan 28 01:20:18.948000 audit[5574]: USER_END pid=5574 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 28 01:20:18.956919 kernel: audit: type=1106 audit(1769563218.948:829): pid=5574 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 28 01:20:18.957626 systemd[1]: sshd@13-172.31.31.26:22-68.220.241.50:41360.service: Deactivated successfully. 
Jan 28 01:20:18.948000 audit[5574]: CRED_DISP pid=5574 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 28 01:20:18.957000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-172.31.31.26:22-68.220.241.50:41360 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:20:18.963794 kernel: audit: type=1104 audit(1769563218.948:830): pid=5574 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 28 01:20:18.965059 systemd[1]: session-15.scope: Deactivated successfully. Jan 28 01:20:18.968163 systemd-logind[1833]: Session 15 logged out. Waiting for processes to exit. Jan 28 01:20:18.969940 systemd-logind[1833]: Removed session 15. Jan 28 01:20:19.216002 containerd[1849]: time="2026-01-28T01:20:19.215625432Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 28 01:20:19.522380 containerd[1849]: time="2026-01-28T01:20:19.522103970Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:20:19.524306 containerd[1849]: time="2026-01-28T01:20:19.524253860Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 28 01:20:19.524564 kubelet[3202]: E0128 01:20:19.524501 3202 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 28 01:20:19.524564 kubelet[3202]: E0128 01:20:19.524556 3202 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 28 01:20:19.526530 kubelet[3202]: E0128 01:20:19.524713 3202 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2875x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-dbbbff994-sqgh4_calico-system(c988f11c-6f16-4100-9308-ea1983457126): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 28 01:20:19.526530 kubelet[3202]: E0128 01:20:19.526208 3202 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-dbbbff994-sqgh4" podUID="c988f11c-6f16-4100-9308-ea1983457126" Jan 28 01:20:19.529256 containerd[1849]: time="2026-01-28T01:20:19.524271701Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 28 01:20:20.214157 
containerd[1849]: time="2026-01-28T01:20:20.211507350Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 28 01:20:20.475262 containerd[1849]: time="2026-01-28T01:20:20.475009995Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:20:20.477221 containerd[1849]: time="2026-01-28T01:20:20.477158955Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 28 01:20:20.477441 containerd[1849]: time="2026-01-28T01:20:20.477241427Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 28 01:20:20.477524 kubelet[3202]: E0128 01:20:20.477456 3202 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 28 01:20:20.477524 kubelet[3202]: E0128 01:20:20.477500 3202 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 28 01:20:20.479563 kubelet[3202]: E0128 01:20:20.477721 3202 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-544bz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-pcsbp_calico-system(cbd1002c-7263-43fc-8b17-789e41b44261): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 28 01:20:20.480738 kubelet[3202]: E0128 01:20:20.480687 3202 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-pcsbp" podUID="cbd1002c-7263-43fc-8b17-789e41b44261" Jan 28 01:20:20.481030 containerd[1849]: time="2026-01-28T01:20:20.480995390Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 28 01:20:20.752337 containerd[1849]: time="2026-01-28T01:20:20.752200711Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:20:20.754337 containerd[1849]: time="2026-01-28T01:20:20.754287959Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 28 01:20:20.754545 containerd[1849]: time="2026-01-28T01:20:20.754377965Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 28 01:20:20.754674 kubelet[3202]: E0128 01:20:20.754616 3202 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 28 01:20:20.754811 kubelet[3202]: E0128 01:20:20.754769 3202 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 28 01:20:20.755013 kubelet[3202]: E0128 01:20:20.754978 3202 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4bxdn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-5fjx2_calico-system(6c6224c1-45a4-4e67-9483-34412dd5913e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 28 01:20:20.759805 containerd[1849]: time="2026-01-28T01:20:20.758532101Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 28 01:20:21.074888 containerd[1849]: time="2026-01-28T01:20:21.074667265Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:20:21.077468 containerd[1849]: time="2026-01-28T01:20:21.077412308Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 28 01:20:21.077587 containerd[1849]: time="2026-01-28T01:20:21.077436286Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 28 01:20:21.077822 kubelet[3202]: E0128 01:20:21.077779 3202 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 28 01:20:21.077960 kubelet[3202]: E0128 01:20:21.077838 3202 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": 
failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 28 01:20:21.078063 kubelet[3202]: E0128 01:20:21.077984 3202 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4bxdn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-5fjx2_calico-system(6c6224c1-45a4-4e67-9483-34412dd5913e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 28 01:20:21.079485 kubelet[3202]: E0128 01:20:21.079410 3202 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-5fjx2" podUID="6c6224c1-45a4-4e67-9483-34412dd5913e" Jan 28 01:20:23.203152 containerd[1849]: time="2026-01-28T01:20:23.202945816Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 28 01:20:23.569438 containerd[1849]: time="2026-01-28T01:20:23.569319964Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 
01:20:23.571586 containerd[1849]: time="2026-01-28T01:20:23.571516697Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 28 01:20:23.571727 containerd[1849]: time="2026-01-28T01:20:23.571605557Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 28 01:20:23.571830 kubelet[3202]: E0128 01:20:23.571792 3202 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:20:23.572196 kubelet[3202]: E0128 01:20:23.571837 3202 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:20:23.572196 kubelet[3202]: E0128 01:20:23.571962 3202 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5pfgn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-597546f57d-dtd92_calico-apiserver(77c5b13e-b6c7-4da4-8d69-cf95701836c8): ErrImagePull: rpc error: code = NotFound desc = failed 
to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 28 01:20:23.574030 kubelet[3202]: E0128 01:20:23.573986 3202 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-597546f57d-dtd92" podUID="77c5b13e-b6c7-4da4-8d69-cf95701836c8" Jan 28 01:20:24.035008 systemd[1]: Started sshd@14-172.31.31.26:22-68.220.241.50:50648.service - OpenSSH per-connection server daemon (68.220.241.50:50648). Jan 28 01:20:24.042047 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 28 01:20:24.042244 kernel: audit: type=1130 audit(1769563224.034:832): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-172.31.31.26:22-68.220.241.50:50648 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:20:24.034000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-172.31.31.26:22-68.220.241.50:50648 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:20:24.499000 audit[5592]: USER_ACCT pid=5592 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 28 01:20:24.507176 sshd[5592]: Accepted publickey for core from 68.220.241.50 port 50648 ssh2: RSA SHA256:PpvjS6sxgjOf+voyr4NrS2kTF8aDF7ek5ziSVtOzP6U Jan 28 01:20:24.513310 kernel: audit: type=1101 audit(1769563224.499:833): pid=5592 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 28 01:20:24.513378 kernel: audit: type=1103 audit(1769563224.499:834): pid=5592 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 28 01:20:24.499000 audit[5592]: CRED_ACQ pid=5592 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 28 01:20:24.507809 sshd-session[5592]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:20:24.499000 audit[5592]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffa08fde20 a2=3 a3=0 items=0 ppid=1 pid=5592 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:20:24.519533 kernel: audit: type=1006 audit(1769563224.499:835): pid=5592 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Jan 28 01:20:24.523927 
kernel: audit: type=1300 audit(1769563224.499:835): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffa08fde20 a2=3 a3=0 items=0 ppid=1 pid=5592 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:20:24.523961 kernel: audit: type=1327 audit(1769563224.499:835): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:20:24.499000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:20:24.529785 systemd-logind[1833]: New session 16 of user core. Jan 28 01:20:24.547391 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 28 01:20:24.551000 audit[5592]: USER_START pid=5592 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 28 01:20:24.558869 kernel: audit: type=1105 audit(1769563224.551:836): pid=5592 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 28 01:20:24.559000 audit[5596]: CRED_ACQ pid=5596 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 28 01:20:24.565789 kernel: audit: type=1103 audit(1769563224.559:837): pid=5596 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 28 01:20:24.824786 sshd[5596]: Connection closed by 68.220.241.50 port 50648 Jan 28 01:20:24.826262 sshd-session[5592]: pam_unix(sshd:session): session closed for user core Jan 28 01:20:24.826000 audit[5592]: USER_END pid=5592 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 28 01:20:24.832210 systemd[1]: sshd@14-172.31.31.26:22-68.220.241.50:50648.service: Deactivated successfully. Jan 28 01:20:24.834792 kernel: audit: type=1106 audit(1769563224.826:838): pid=5592 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 28 01:20:24.834943 systemd[1]: session-16.scope: Deactivated successfully. 
Jan 28 01:20:24.826000 audit[5592]: CRED_DISP pid=5592 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 28 01:20:24.837882 systemd-logind[1833]: Session 16 logged out. Waiting for processes to exit. Jan 28 01:20:24.839349 systemd-logind[1833]: Removed session 16. Jan 28 01:20:24.839786 kernel: audit: type=1104 audit(1769563224.826:839): pid=5592 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 28 01:20:24.829000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-172.31.31.26:22-68.220.241.50:50648 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:20:26.205092 kubelet[3202]: E0128 01:20:26.204910 3202 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-799d7c8d49-2vrfp" podUID="9accd949-c4fa-4ce7-b3a8-524deb448a06" Jan 28 01:20:28.202763 kubelet[3202]: E0128 01:20:28.202402 3202 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74bc45499-4hxx4" podUID="325cd625-25dd-4d22-8523-67e469e6d0e9" Jan 28 01:20:29.927100 systemd[1]: Started sshd@15-172.31.31.26:22-68.220.241.50:50654.service - OpenSSH per-connection server daemon (68.220.241.50:50654). Jan 28 01:20:29.926000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-172.31.31.26:22-68.220.241.50:50654 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:20:29.929269 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 28 01:20:29.929353 kernel: audit: type=1130 audit(1769563229.926:841): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-172.31.31.26:22-68.220.241.50:50654 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:20:30.409000 audit[5610]: USER_ACCT pid=5610 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 28 01:20:30.410435 sshd[5610]: Accepted publickey for core from 68.220.241.50 port 50654 ssh2: RSA SHA256:PpvjS6sxgjOf+voyr4NrS2kTF8aDF7ek5ziSVtOzP6U Jan 28 01:20:30.412830 sshd-session[5610]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:20:30.411000 audit[5610]: CRED_ACQ pid=5610 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 28 01:20:30.416790 kernel: audit: type=1101 audit(1769563230.409:842): pid=5610 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 28 01:20:30.416949 kernel: audit: type=1103 audit(1769563230.411:843): pid=5610 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 28 01:20:30.411000 audit[5610]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe4911fc20 a2=3 a3=0 items=0 ppid=1 pid=5610 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:20:30.425536 kernel: audit: type=1006 audit(1769563230.411:844): pid=5610 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=17 res=1 Jan 28 01:20:30.425595 kernel: audit: type=1300 audit(1769563230.411:844): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe4911fc20 a2=3 a3=0 items=0 ppid=1 pid=5610 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:20:30.425283 systemd-logind[1833]: New session 17 of user core. Jan 28 01:20:30.411000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:20:30.430266 kernel: audit: type=1327 audit(1769563230.411:844): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:20:30.437021 systemd[1]: Started session-17.scope - Session 17 of User core. 
Jan 28 01:20:30.439000 audit[5610]: USER_START pid=5610 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 28 01:20:30.441000 audit[5614]: CRED_ACQ pid=5614 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 28 01:20:30.447522 kernel: audit: type=1105 audit(1769563230.439:845): pid=5610 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 28 01:20:30.447598 kernel: audit: type=1103 audit(1769563230.441:846): pid=5614 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 28 01:20:30.750995 sshd[5614]: Connection closed by 68.220.241.50 port 50654 Jan 28 01:20:30.751502 sshd-session[5610]: pam_unix(sshd:session): session closed for user core Jan 28 01:20:30.753000 audit[5610]: USER_END pid=5610 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 28 01:20:30.757571 systemd[1]: sshd@15-172.31.31.26:22-68.220.241.50:50654.service: Deactivated successfully. Jan 28 01:20:30.759698 systemd[1]: session-17.scope: Deactivated successfully. Jan 28 01:20:30.761124 kernel: audit: type=1106 audit(1769563230.753:847): pid=5610 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 28 01:20:30.753000 audit[5610]: CRED_DISP pid=5610 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 28 01:20:30.761628 systemd-logind[1833]: Session 17 logged out. Waiting for processes to exit. Jan 28 01:20:30.762844 systemd-logind[1833]: Removed session 17. Jan 28 01:20:30.755000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-172.31.31.26:22-68.220.241.50:50654 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:20:30.766801 kernel: audit: type=1104 audit(1769563230.753:848): pid=5610 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 28 01:20:30.832000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-172.31.31.26:22-68.220.241.50:50664 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:20:30.833183 systemd[1]: Started sshd@16-172.31.31.26:22-68.220.241.50:50664.service - OpenSSH per-connection server daemon (68.220.241.50:50664). Jan 28 01:20:31.266000 audit[5626]: USER_ACCT pid=5626 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 28 01:20:31.267604 sshd[5626]: Accepted publickey for core from 68.220.241.50 port 50664 ssh2: RSA SHA256:PpvjS6sxgjOf+voyr4NrS2kTF8aDF7ek5ziSVtOzP6U Jan 28 01:20:31.267000 audit[5626]: CRED_ACQ pid=5626 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 28 01:20:31.267000 audit[5626]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffef6c4b590 a2=3 a3=0 items=0 ppid=1 pid=5626 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:20:31.267000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:20:31.269552 sshd-session[5626]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:20:31.275135 systemd-logind[1833]: New session 18 of user core. Jan 28 01:20:31.279963 systemd[1]: Started session-18.scope - Session 18 of User core. 
Jan 28 01:20:31.282000 audit[5626]: USER_START pid=5626 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 28 01:20:31.284000 audit[5630]: CRED_ACQ pid=5630 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 28 01:20:31.947630 sshd[5630]: Connection closed by 68.220.241.50 port 50664 Jan 28 01:20:31.947907 sshd-session[5626]: pam_unix(sshd:session): session closed for user core Jan 28 01:20:31.950000 audit[5626]: USER_END pid=5626 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 28 01:20:31.950000 audit[5626]: CRED_DISP pid=5626 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 28 01:20:31.953803 systemd[1]: sshd@16-172.31.31.26:22-68.220.241.50:50664.service: Deactivated successfully. Jan 28 01:20:31.953000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-172.31.31.26:22-68.220.241.50:50664 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:20:31.955731 systemd[1]: session-18.scope: Deactivated successfully. Jan 28 01:20:31.956601 systemd-logind[1833]: Session 18 logged out. Waiting for processes to exit. Jan 28 01:20:31.958241 systemd-logind[1833]: Removed session 18. Jan 28 01:20:32.036388 systemd[1]: Started sshd@17-172.31.31.26:22-68.220.241.50:50666.service - OpenSSH per-connection server daemon (68.220.241.50:50666). Jan 28 01:20:32.036000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-172.31.31.26:22-68.220.241.50:50666 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:20:32.204901 kubelet[3202]: E0128 01:20:32.204741 3202 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-dbbbff994-sqgh4" podUID="c988f11c-6f16-4100-9308-ea1983457126" Jan 28 01:20:32.507000 audit[5640]: USER_ACCT pid=5640 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 28 01:20:32.508303 sshd[5640]: Accepted publickey for core from 68.220.241.50 port 50666 ssh2: RSA SHA256:PpvjS6sxgjOf+voyr4NrS2kTF8aDF7ek5ziSVtOzP6U Jan 28 01:20:32.508000 audit[5640]: CRED_ACQ pid=5640 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 28 01:20:32.508000 audit[5640]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffef3cadca0 a2=3 a3=0 items=0 ppid=1 pid=5640 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:20:32.508000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:20:32.510370 sshd-session[5640]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:20:32.516057 systemd-logind[1833]: New session 19 of user core. Jan 28 01:20:32.520989 systemd[1]: Started session-19.scope - Session 19 of User core. 
Jan 28 01:20:32.523000 audit[5640]: USER_START pid=5640 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 28 01:20:32.525000 audit[5644]: CRED_ACQ pid=5644 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 28 01:20:33.207420 kubelet[3202]: E0128 01:20:33.207364 3202 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74bc45499-2znnx" podUID="d6acbfd4-88d6-4133-9434-3bfec2c327d4" Jan 28 01:20:33.447000 audit[5655]: NETFILTER_CFG table=filter:145 family=2 entries=26 op=nft_register_rule pid=5655 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:20:33.447000 audit[5655]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffdf80b62a0 a2=0 a3=7ffdf80b628c items=0 ppid=3451 pid=5655 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:20:33.447000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:20:33.453000 audit[5655]: NETFILTER_CFG table=nat:146 family=2 entries=20 op=nft_register_rule pid=5655 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:20:33.453000 audit[5655]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffdf80b62a0 a2=0 a3=0 items=0 ppid=3451 pid=5655 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:20:33.453000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:20:33.485000 audit[5657]: NETFILTER_CFG table=filter:147 family=2 entries=38 op=nft_register_rule pid=5657 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:20:33.485000 audit[5657]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffc95278ae0 a2=0 a3=7ffc95278acc items=0 ppid=3451 pid=5657 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:20:33.485000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:20:33.491000 audit[5657]: NETFILTER_CFG table=nat:148 family=2 entries=20 op=nft_register_rule pid=5657 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:20:33.491000 audit[5657]: SYSCALL arch=c000003e syscall=46 success=yes 
exit=5772 a0=3 a1=7ffc95278ae0 a2=0 a3=0 items=0 ppid=3451 pid=5657 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:20:33.491000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:20:33.537969 sshd[5644]: Connection closed by 68.220.241.50 port 50666 Jan 28 01:20:33.540006 sshd-session[5640]: pam_unix(sshd:session): session closed for user core Jan 28 01:20:33.544000 audit[5640]: USER_END pid=5640 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 28 01:20:33.546000 audit[5640]: CRED_DISP pid=5640 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 28 01:20:33.551413 systemd[1]: sshd@17-172.31.31.26:22-68.220.241.50:50666.service: Deactivated successfully. Jan 28 01:20:33.551000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-172.31.31.26:22-68.220.241.50:50666 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:20:33.555767 systemd[1]: session-19.scope: Deactivated successfully. Jan 28 01:20:33.557361 systemd-logind[1833]: Session 19 logged out. Waiting for processes to exit. Jan 28 01:20:33.560296 systemd-logind[1833]: Removed session 19. Jan 28 01:20:33.644112 systemd[1]: Started sshd@18-172.31.31.26:22-68.220.241.50:47750.service - OpenSSH per-connection server daemon (68.220.241.50:47750). Jan 28 01:20:33.643000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-172.31.31.26:22-68.220.241.50:47750 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:20:34.132000 audit[5686]: USER_ACCT pid=5686 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 28 01:20:34.133358 sshd[5686]: Accepted publickey for core from 68.220.241.50 port 47750 ssh2: RSA SHA256:PpvjS6sxgjOf+voyr4NrS2kTF8aDF7ek5ziSVtOzP6U Jan 28 01:20:34.133000 audit[5686]: CRED_ACQ pid=5686 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 28 01:20:34.133000 audit[5686]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdf6d60f40 a2=3 a3=0 items=0 ppid=1 pid=5686 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:20:34.133000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:20:34.135630 sshd-session[5686]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:20:34.141618 systemd-logind[1833]: New session 20 of user core. Jan 28 01:20:34.146051 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 28 01:20:34.150000 audit[5686]: USER_START pid=5686 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 28 01:20:34.153000 audit[5690]: CRED_ACQ pid=5690 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 28 01:20:34.787990 sshd[5690]: Connection closed by 68.220.241.50 port 47750 Jan 28 01:20:34.788639 sshd-session[5686]: pam_unix(sshd:session): session closed for user core Jan 28 01:20:34.792000 audit[5686]: USER_END pid=5686 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 28 01:20:34.792000 audit[5686]: CRED_DISP pid=5686 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 28 01:20:34.807336 systemd[1]: sshd@18-172.31.31.26:22-68.220.241.50:47750.service: Deactivated successfully. Jan 28 01:20:34.807000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-172.31.31.26:22-68.220.241.50:47750 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:20:34.811001 systemd[1]: session-20.scope: Deactivated successfully. Jan 28 01:20:34.813832 systemd-logind[1833]: Session 20 logged out. Waiting for processes to exit. 
Jan 28 01:20:34.816496 systemd-logind[1833]: Removed session 20. Jan 28 01:20:34.875692 systemd[1]: Started sshd@19-172.31.31.26:22-68.220.241.50:47764.service - OpenSSH per-connection server daemon (68.220.241.50:47764). Jan 28 01:20:34.875000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-172.31.31.26:22-68.220.241.50:47764 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:20:35.205866 kubelet[3202]: E0128 01:20:35.205818 3202 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-pcsbp" podUID="cbd1002c-7263-43fc-8b17-789e41b44261" Jan 28 01:20:35.206880 kubelet[3202]: E0128 01:20:35.206813 3202 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-5fjx2" podUID="6c6224c1-45a4-4e67-9483-34412dd5913e" Jan 28 01:20:35.327237 kernel: kauditd_printk_skb: 47 callbacks suppressed Jan 28 01:20:35.327371 kernel: audit: type=1101 audit(1769563235.324:882): pid=5700 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 28 01:20:35.324000 audit[5700]: USER_ACCT pid=5700 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 28 01:20:35.328862 sshd[5700]: Accepted publickey for core from 68.220.241.50 port 47764 ssh2: RSA SHA256:PpvjS6sxgjOf+voyr4NrS2kTF8aDF7ek5ziSVtOzP6U Jan 28 01:20:35.333000 audit[5700]: CRED_ACQ pid=5700 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 28 01:20:35.340795 kernel: audit: type=1103 audit(1769563235.333:883): pid=5700 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 28 01:20:35.342221 sshd-session[5700]: 
pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:20:35.347919 kernel: audit: type=1006 audit(1769563235.339:884): pid=5700 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=21 res=1 Jan 28 01:20:35.339000 audit[5700]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc016fc6e0 a2=3 a3=0 items=0 ppid=1 pid=5700 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:20:35.355699 kernel: audit: type=1300 audit(1769563235.339:884): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc016fc6e0 a2=3 a3=0 items=0 ppid=1 pid=5700 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:20:35.355830 kernel: audit: type=1327 audit(1769563235.339:884): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:20:35.339000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:20:35.363232 systemd-logind[1833]: New session 21 of user core. Jan 28 01:20:35.369038 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 28 01:20:35.373000 audit[5700]: USER_START pid=5700 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 28 01:20:35.386367 kernel: audit: type=1105 audit(1769563235.373:885): pid=5700 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 28 01:20:35.386519 kernel: audit: type=1103 audit(1769563235.373:886): pid=5704 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 28 01:20:35.373000 audit[5704]: CRED_ACQ pid=5704 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 28 01:20:35.659909 sshd[5704]: Connection closed by 68.220.241.50 port 47764 Jan 28 01:20:35.660424 sshd-session[5700]: pam_unix(sshd:session): session closed for user core Jan 28 01:20:35.661000 audit[5700]: USER_END pid=5700 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 28 01:20:35.661000 audit[5700]: CRED_DISP pid=5700 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh 
res=success' Jan 28 01:20:35.670865 systemd[1]: sshd@19-172.31.31.26:22-68.220.241.50:47764.service: Deactivated successfully. Jan 28 01:20:35.671470 kernel: audit: type=1106 audit(1769563235.661:887): pid=5700 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 28 01:20:35.671536 kernel: audit: type=1104 audit(1769563235.661:888): pid=5700 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 28 01:20:35.673415 systemd[1]: session-21.scope: Deactivated successfully. Jan 28 01:20:35.675205 systemd-logind[1833]: Session 21 logged out. Waiting for processes to exit. Jan 28 01:20:35.670000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-172.31.31.26:22-68.220.241.50:47764 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:20:35.677447 systemd-logind[1833]: Removed session 21. Jan 28 01:20:35.680796 kernel: audit: type=1131 audit(1769563235.670:889): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-172.31.31.26:22-68.220.241.50:47764 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:20:39.203300 kubelet[3202]: E0128 01:20:39.203262 3202 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-597546f57d-dtd92" podUID="77c5b13e-b6c7-4da4-8d69-cf95701836c8" Jan 28 01:20:40.204549 kubelet[3202]: E0128 01:20:40.204394 3202 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-799d7c8d49-2vrfp" podUID="9accd949-c4fa-4ce7-b3a8-524deb448a06" Jan 28 01:20:40.504000 audit[5716]: NETFILTER_CFG table=filter:149 family=2 entries=26 op=nft_register_rule pid=5716 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:20:40.509804 kernel: audit: type=1325 audit(1769563240.504:890): table=filter:149 family=2 entries=26 op=nft_register_rule pid=5716 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 
01:20:40.519795 kernel: audit: type=1300 audit(1769563240.504:890): arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffce5209270 a2=0 a3=7ffce520925c items=0 ppid=3451 pid=5716 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:20:40.504000 audit[5716]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffce5209270 a2=0 a3=7ffce520925c items=0 ppid=3451 pid=5716 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:20:40.504000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:20:40.512000 audit[5716]: NETFILTER_CFG table=nat:150 family=2 entries=104 op=nft_register_chain pid=5716 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:20:40.526991 kernel: audit: type=1327 audit(1769563240.504:890): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:20:40.527070 kernel: audit: type=1325 audit(1769563240.512:891): table=nat:150 family=2 entries=104 op=nft_register_chain pid=5716 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:20:40.528825 kernel: audit: type=1300 audit(1769563240.512:891): arch=c000003e syscall=46 success=yes exit=48684 a0=3 a1=7ffce5209270 a2=0 a3=7ffce520925c items=0 ppid=3451 pid=5716 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:20:40.512000 audit[5716]: SYSCALL arch=c000003e syscall=46 success=yes exit=48684 a0=3 a1=7ffce5209270 a2=0 a3=7ffce520925c items=0 ppid=3451 pid=5716 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:20:40.512000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:20:40.540798 kernel: audit: type=1327 audit(1769563240.512:891): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:20:40.747188 systemd[1]: Started sshd@20-172.31.31.26:22-68.220.241.50:47766.service - OpenSSH per-connection server daemon (68.220.241.50:47766). Jan 28 01:20:40.746000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-172.31.31.26:22-68.220.241.50:47766 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:20:40.752787 kernel: audit: type=1130 audit(1769563240.746:892): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-172.31.31.26:22-68.220.241.50:47766 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:20:41.173000 audit[5719]: USER_ACCT pid=5719 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 28 01:20:41.175456 sshd[5719]: Accepted publickey for core from 68.220.241.50 port 47766 ssh2: RSA SHA256:PpvjS6sxgjOf+voyr4NrS2kTF8aDF7ek5ziSVtOzP6U Jan 28 01:20:41.180767 kernel: audit: type=1101 audit(1769563241.173:893): pid=5719 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 28 01:20:41.180000 audit[5719]: CRED_ACQ pid=5719 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 28 01:20:41.182586 sshd-session[5719]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:20:41.189542 kernel: audit: type=1103 audit(1769563241.180:894): pid=5719 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 28 01:20:41.190689 kernel: audit: type=1006 audit(1769563241.180:895): pid=5719 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=22 res=1 Jan 28 01:20:41.180000 audit[5719]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe53b66710 a2=3 a3=0 items=0 ppid=1 pid=5719 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:20:41.180000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:20:41.196990 systemd-logind[1833]: New session 22 of user core. Jan 28 01:20:41.201193 systemd[1]: Started session-22.scope - Session 22 of User core. 
Jan 28 01:20:41.205606 kubelet[3202]: E0128 01:20:41.204820 3202 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74bc45499-4hxx4" podUID="325cd625-25dd-4d22-8523-67e469e6d0e9" Jan 28 01:20:41.206000 audit[5719]: USER_START pid=5719 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 28 01:20:41.209000 audit[5723]: CRED_ACQ pid=5723 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 28 01:20:41.479093 sshd[5723]: Connection closed by 68.220.241.50 port 47766 Jan 28 01:20:41.480904 sshd-session[5719]: pam_unix(sshd:session): session closed for user core Jan 28 01:20:41.481000 audit[5719]: USER_END pid=5719 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 28 01:20:41.481000 audit[5719]: CRED_DISP pid=5719 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 28 01:20:41.485513 systemd[1]: sshd@20-172.31.31.26:22-68.220.241.50:47766.service: Deactivated successfully. Jan 28 01:20:41.485000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-172.31.31.26:22-68.220.241.50:47766 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:20:41.487925 systemd[1]: session-22.scope: Deactivated successfully. Jan 28 01:20:41.488805 systemd-logind[1833]: Session 22 logged out. Waiting for processes to exit. Jan 28 01:20:41.490327 systemd-logind[1833]: Removed session 22. 
Jan 28 01:20:44.203481 kubelet[3202]: E0128 01:20:44.203186 3202 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-dbbbff994-sqgh4" podUID="c988f11c-6f16-4100-9308-ea1983457126" Jan 28 01:20:46.204950 kubelet[3202]: E0128 01:20:46.204910 3202 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74bc45499-2znnx" podUID="d6acbfd4-88d6-4133-9434-3bfec2c327d4" Jan 28 01:20:46.208201 kubelet[3202]: E0128 01:20:46.208159 3202 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-5fjx2" podUID="6c6224c1-45a4-4e67-9483-34412dd5913e" Jan 28 01:20:46.568816 systemd[1]: Started sshd@21-172.31.31.26:22-68.220.241.50:57732.service - OpenSSH per-connection server daemon (68.220.241.50:57732). Jan 28 01:20:46.573777 kernel: kauditd_printk_skb: 7 callbacks suppressed Jan 28 01:20:46.573868 kernel: audit: type=1130 audit(1769563246.568:901): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-172.31.31.26:22-68.220.241.50:57732 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:20:46.568000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-172.31.31.26:22-68.220.241.50:57732 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:20:47.005000 audit[5737]: USER_ACCT pid=5737 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 28 01:20:47.012187 sshd[5737]: Accepted publickey for core from 68.220.241.50 port 57732 ssh2: RSA SHA256:PpvjS6sxgjOf+voyr4NrS2kTF8aDF7ek5ziSVtOzP6U Jan 28 01:20:47.013410 kernel: audit: type=1101 audit(1769563247.005:902): pid=5737 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 28 01:20:47.013308 sshd-session[5737]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:20:47.005000 audit[5737]: CRED_ACQ pid=5737 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 28 01:20:47.021769 kernel: audit: type=1103 audit(1769563247.005:903): pid=5737 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 28 01:20:47.027852 systemd-logind[1833]: New session 23 of user core. Jan 28 01:20:47.034984 kernel: audit: type=1006 audit(1769563247.005:904): pid=5737 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1 Jan 28 01:20:47.005000 audit[5737]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc1c5873a0 a2=3 a3=0 items=0 ppid=1 pid=5737 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:20:47.005000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:20:47.042904 kernel: audit: type=1300 audit(1769563247.005:904): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc1c5873a0 a2=3 a3=0 items=0 ppid=1 pid=5737 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:20:47.044440 kernel: audit: type=1327 audit(1769563247.005:904): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:20:47.046076 systemd[1]: Started session-23.scope - Session 23 of User core. 
Jan 28 01:20:47.050000 audit[5737]: USER_START pid=5737 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 28 01:20:47.058765 kernel: audit: type=1105 audit(1769563247.050:905): pid=5737 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 28 01:20:47.060000 audit[5741]: CRED_ACQ pid=5741 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 28 01:20:47.067797 kernel: audit: type=1103 audit(1769563247.060:906): pid=5741 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 28 01:20:47.376156 sshd[5741]: Connection closed by 68.220.241.50 port 57732 Jan 28 01:20:47.374929 sshd-session[5737]: pam_unix(sshd:session): session closed for user core Jan 28 01:20:47.377000 audit[5737]: USER_END pid=5737 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 28 01:20:47.384408 systemd[1]: sshd@21-172.31.31.26:22-68.220.241.50:57732.service: Deactivated successfully. Jan 28 01:20:47.384806 kernel: audit: type=1106 audit(1769563247.377:907): pid=5737 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 28 01:20:47.388308 systemd[1]: session-23.scope: Deactivated successfully. Jan 28 01:20:47.389542 systemd-logind[1833]: Session 23 logged out. Waiting for processes to exit. Jan 28 01:20:47.377000 audit[5737]: CRED_DISP pid=5737 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 28 01:20:47.396773 kernel: audit: type=1104 audit(1769563247.377:908): pid=5737 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 28 01:20:47.397411 systemd-logind[1833]: Removed session 23. Jan 28 01:20:47.383000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-172.31.31.26:22-68.220.241.50:57732 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:20:48.203622 kubelet[3202]: E0128 01:20:48.203342 3202 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-pcsbp" podUID="cbd1002c-7263-43fc-8b17-789e41b44261" Jan 28 01:20:51.203379 kubelet[3202]: E0128 01:20:51.203042 3202 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-597546f57d-dtd92" podUID="77c5b13e-b6c7-4da4-8d69-cf95701836c8" Jan 28 01:20:51.204437 kubelet[3202]: E0128 01:20:51.203647 3202 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-799d7c8d49-2vrfp" podUID="9accd949-c4fa-4ce7-b3a8-524deb448a06" Jan 28 01:20:52.206016 kubelet[3202]: E0128 01:20:52.205124 3202 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74bc45499-4hxx4" podUID="325cd625-25dd-4d22-8523-67e469e6d0e9" Jan 28 01:20:52.461133 systemd[1]: Started sshd@22-172.31.31.26:22-68.220.241.50:57734.service - OpenSSH per-connection server daemon (68.220.241.50:57734). Jan 28 01:20:52.465464 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 28 01:20:52.465535 kernel: audit: type=1130 audit(1769563252.460:910): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-172.31.31.26:22-68.220.241.50:57734 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:20:52.460000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-172.31.31.26:22-68.220.241.50:57734 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:20:52.972000 audit[5753]: USER_ACCT pid=5753 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 28 01:20:52.975103 sshd[5753]: Accepted publickey for core from 68.220.241.50 port 57734 ssh2: RSA SHA256:PpvjS6sxgjOf+voyr4NrS2kTF8aDF7ek5ziSVtOzP6U Jan 28 01:20:52.976524 sshd-session[5753]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:20:52.978970 kernel: audit: type=1101 audit(1769563252.972:911): pid=5753 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 28 01:20:52.979046 kernel: audit: type=1103 audit(1769563252.974:912): pid=5753 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 28 01:20:52.974000 audit[5753]: CRED_ACQ pid=5753 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 28 01:20:52.974000 audit[5753]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd355cba40 a2=3 a3=0 items=0 ppid=1 pid=5753 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:20:52.992512 kernel: audit: type=1006 audit(1769563252.974:913): pid=5753 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1 Jan 28 01:20:52.998886 kernel: audit: type=1300 audit(1769563252.974:913): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd355cba40 a2=3 a3=0 items=0 ppid=1 pid=5753 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:20:52.998939 kernel: audit: type=1327 audit(1769563252.974:913): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:20:52.974000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:20:53.001834 systemd-logind[1833]: New session 24 of user core. Jan 28 01:20:53.010009 systemd[1]: Started session-24.scope - Session 24 of User core. 
Jan 28 01:20:53.014000 audit[5753]: USER_START pid=5753 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 28 01:20:53.028602 kernel: audit: type=1105 audit(1769563253.014:914): pid=5753 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 28 01:20:53.028782 kernel: audit: type=1103 audit(1769563253.022:915): pid=5757 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 28 01:20:53.022000 audit[5757]: CRED_ACQ pid=5757 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 28 01:20:53.583339 sshd[5757]: Connection closed by 68.220.241.50 port 57734 Jan 28 01:20:53.584338 sshd-session[5753]: pam_unix(sshd:session): session closed for user core Jan 28 01:20:53.586000 audit[5753]: USER_END pid=5753 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 28 01:20:53.594827 kernel: audit: type=1106 audit(1769563253.586:916): pid=5753 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 28 01:20:53.586000 audit[5753]: CRED_DISP pid=5753 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 28 01:20:53.600808 kernel: audit: type=1104 audit(1769563253.586:917): pid=5753 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 28 01:20:53.600927 systemd[1]: sshd@22-172.31.31.26:22-68.220.241.50:57734.service: Deactivated successfully. Jan 28 01:20:53.600000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-172.31.31.26:22-68.220.241.50:57734 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:20:53.606374 systemd[1]: session-24.scope: Deactivated successfully. Jan 28 01:20:53.610529 systemd-logind[1833]: Session 24 logged out. Waiting for processes to exit. Jan 28 01:20:53.612925 systemd-logind[1833]: Removed session 24. 
Jan 28 01:20:58.204707 kubelet[3202]: E0128 01:20:58.204054 3202 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-dbbbff994-sqgh4" podUID="c988f11c-6f16-4100-9308-ea1983457126" Jan 28 01:20:58.678270 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 28 01:20:58.678397 kernel: audit: type=1130 audit(1769563258.675:919): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-172.31.31.26:22-68.220.241.50:43596 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:20:58.675000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-172.31.31.26:22-68.220.241.50:43596 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:20:58.676114 systemd[1]: Started sshd@23-172.31.31.26:22-68.220.241.50:43596.service - OpenSSH per-connection server daemon (68.220.241.50:43596). Jan 28 01:20:59.128000 audit[5778]: USER_ACCT pid=5778 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 28 01:20:59.129413 sshd[5778]: Accepted publickey for core from 68.220.241.50 port 43596 ssh2: RSA SHA256:PpvjS6sxgjOf+voyr4NrS2kTF8aDF7ek5ziSVtOzP6U Jan 28 01:20:59.132111 sshd-session[5778]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:20:59.135043 kernel: audit: type=1101 audit(1769563259.128:920): pid=5778 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 28 01:20:59.135991 kernel: audit: type=1103 audit(1769563259.130:921): pid=5778 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 28 01:20:59.130000 audit[5778]: CRED_ACQ pid=5778 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 28 01:20:59.143484 kernel: audit: type=1006 audit(1769563259.130:922): pid=5778 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=25 res=1 Jan 28 01:20:59.143566 kernel: audit: type=1300 audit(1769563259.130:922): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdc3e004e0 a2=3 a3=0 items=0 ppid=1 pid=5778 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:20:59.130000 audit[5778]: SYSCALL 
arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdc3e004e0 a2=3 a3=0 items=0 ppid=1 pid=5778 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:20:59.141024 systemd-logind[1833]: New session 25 of user core. Jan 28 01:20:59.130000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:20:59.149241 kernel: audit: type=1327 audit(1769563259.130:922): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:20:59.151251 systemd[1]: Started session-25.scope - Session 25 of User core. Jan 28 01:20:59.154000 audit[5778]: USER_START pid=5778 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 28 01:20:59.156000 audit[5782]: CRED_ACQ pid=5782 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 28 01:20:59.161919 kernel: audit: type=1105 audit(1769563259.154:923): pid=5778 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 28 01:20:59.161975 kernel: audit: type=1103 audit(1769563259.156:924): pid=5782 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 28 01:20:59.203362 kubelet[3202]: E0128 01:20:59.203263 3202 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-5fjx2" podUID="6c6224c1-45a4-4e67-9483-34412dd5913e" Jan 28 01:20:59.438791 sshd[5782]: Connection closed by 68.220.241.50 port 43596 Jan 28 01:20:59.439307 sshd-session[5778]: pam_unix(sshd:session): session closed for user core Jan 28 01:20:59.440000 audit[5778]: USER_END pid=5778 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh 
res=success' Jan 28 01:20:59.444548 systemd[1]: sshd@23-172.31.31.26:22-68.220.241.50:43596.service: Deactivated successfully. Jan 28 01:20:59.444888 systemd-logind[1833]: Session 25 logged out. Waiting for processes to exit. Jan 28 01:20:59.441000 audit[5778]: CRED_DISP pid=5778 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 28 01:20:59.449459 kernel: audit: type=1106 audit(1769563259.440:925): pid=5778 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 28 01:20:59.449565 kernel: audit: type=1104 audit(1769563259.441:926): pid=5778 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 28 01:20:59.449234 systemd[1]: session-25.scope: Deactivated successfully. Jan 28 01:20:59.444000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-172.31.31.26:22-68.220.241.50:43596 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:20:59.455255 systemd-logind[1833]: Removed session 25. Jan 28 01:21:00.205353 containerd[1849]: time="2026-01-28T01:21:00.205269754Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 28 01:21:00.490181 containerd[1849]: time="2026-01-28T01:21:00.489915453Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:21:00.492232 containerd[1849]: time="2026-01-28T01:21:00.492087014Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 28 01:21:00.492232 containerd[1849]: time="2026-01-28T01:21:00.492196743Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 28 01:21:00.492826 kubelet[3202]: E0128 01:21:00.492574 3202 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:21:00.492826 kubelet[3202]: E0128 01:21:00.492627 3202 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:21:00.494270 kubelet[3202]: E0128 01:21:00.494126 3202 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pqxhp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-74bc45499-2znnx_calico-apiserver(d6acbfd4-88d6-4133-9434-3bfec2c327d4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 28 01:21:00.495503 kubelet[3202]: E0128 01:21:00.495450 3202 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74bc45499-2znnx" podUID="d6acbfd4-88d6-4133-9434-3bfec2c327d4" Jan 28 01:21:01.205379 containerd[1849]: time="2026-01-28T01:21:01.205071933Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 28 01:21:01.477956 containerd[1849]: time="2026-01-28T01:21:01.477823076Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:21:01.479999 containerd[1849]: time="2026-01-28T01:21:01.479934254Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 28 01:21:01.480434 containerd[1849]: time="2026-01-28T01:21:01.480038917Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 28 01:21:01.480841 kubelet[3202]: E0128 
01:21:01.480649 3202 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 28 01:21:01.480841 kubelet[3202]: E0128 01:21:01.480715 3202 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 28 01:21:01.482114 kubelet[3202]: E0128 01:21:01.481937 3202 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-544bz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-pcsbp_calico-system(cbd1002c-7263-43fc-8b17-789e41b44261): ErrImagePull: rpc error: code = NotFound desc = failed to pull 
and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 28 01:21:01.483911 kubelet[3202]: E0128 01:21:01.483862 3202 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-pcsbp" podUID="cbd1002c-7263-43fc-8b17-789e41b44261" Jan 28 01:21:03.205454 kubelet[3202]: E0128 01:21:03.205403 3202 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-597546f57d-dtd92" podUID="77c5b13e-b6c7-4da4-8d69-cf95701836c8" Jan 28 01:21:03.880099 update_engine[1835]: I20260128 01:21:03.879930 1835 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Jan 28 01:21:03.880099 update_engine[1835]: I20260128 01:21:03.880023 1835 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Jan 28 01:21:03.884795 update_engine[1835]: I20260128 01:21:03.884110 1835 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Jan 28 01:21:03.884795 update_engine[1835]: I20260128 01:21:03.884640 1835 omaha_request_params.cc:62] Current group set to alpha Jan 28 01:21:03.887340 update_engine[1835]: I20260128 01:21:03.887306 1835 update_attempter.cc:499] Already updated boot flags. Skipping. Jan 28 01:21:03.887466 update_engine[1835]: I20260128 01:21:03.887450 1835 update_attempter.cc:643] Scheduling an action processor start. Jan 28 01:21:03.887547 update_engine[1835]: I20260128 01:21:03.887531 1835 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 28 01:21:03.887663 update_engine[1835]: I20260128 01:21:03.887650 1835 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Jan 28 01:21:03.887996 update_engine[1835]: I20260128 01:21:03.887883 1835 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 28 01:21:03.889035 update_engine[1835]: I20260128 01:21:03.888064 1835 omaha_request_action.cc:272] Request: Jan 28 01:21:03.889035 update_engine[1835]: Jan 28 01:21:03.889035 update_engine[1835]: Jan 28 01:21:03.889035 update_engine[1835]: Jan 28 01:21:03.889035 update_engine[1835]: Jan 28 01:21:03.889035 update_engine[1835]: Jan 28 01:21:03.889035 update_engine[1835]: Jan 28 01:21:03.889035 update_engine[1835]: Jan 28 01:21:03.889035 update_engine[1835]: Jan 28 01:21:03.889035 update_engine[1835]: I20260128 01:21:03.888078 1835 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 28 01:21:03.937151 update_engine[1835]: I20260128 01:21:03.937029 1835 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 28 01:21:03.938381 update_engine[1835]: I20260128 01:21:03.938272 1835 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jan 28 01:21:03.955008 update_engine[1835]: E20260128 01:21:03.954943 1835 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Jan 28 01:21:03.955161 update_engine[1835]: I20260128 01:21:03.955073 1835 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Jan 28 01:21:03.961091 locksmithd[1905]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Jan 28 01:21:04.536233 systemd[1]: Started sshd@24-172.31.31.26:22-68.220.241.50:39356.service - OpenSSH per-connection server daemon (68.220.241.50:39356). Jan 28 01:21:04.544252 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 28 01:21:04.544373 kernel: audit: type=1130 audit(1769563264.536:928): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-172.31.31.26:22-68.220.241.50:39356 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:21:04.536000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-172.31.31.26:22-68.220.241.50:39356 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:21:05.058000 audit[5833]: USER_ACCT pid=5833 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 28 01:21:05.066532 kernel: audit: type=1101 audit(1769563265.058:929): pid=5833 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 28 01:21:05.066817 sshd[5833]: Accepted publickey for core from 68.220.241.50 port 39356 ssh2: RSA SHA256:PpvjS6sxgjOf+voyr4NrS2kTF8aDF7ek5ziSVtOzP6U Jan 28 01:21:05.070133 sshd-session[5833]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:21:05.067000 audit[5833]: CRED_ACQ pid=5833 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 28 01:21:05.078008 kernel: audit: type=1103 audit(1769563265.067:930): pid=5833 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 28 01:21:05.081797 kernel: audit: type=1006 audit(1769563265.067:931): pid=5833 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=26 res=1 Jan 28 01:21:05.089005 kernel: audit: type=1300 audit(1769563265.067:931): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc97f341b0 a2=3 a3=0 items=0 ppid=1 pid=5833 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:21:05.067000 audit[5833]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc97f341b0 a2=3 a3=0 items=0 ppid=1 pid=5833 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:21:05.067000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:21:05.093773 kernel: audit: type=1327 audit(1769563265.067:931): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:21:05.096866 systemd-logind[1833]: New session 26 of user core. Jan 28 01:21:05.107112 systemd[1]: Started session-26.scope - Session 26 of User core. Jan 28 01:21:05.122789 kernel: audit: type=1105 audit(1769563265.113:932): pid=5833 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 28 01:21:05.113000 audit[5833]: USER_START pid=5833 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 28 01:21:05.121000 audit[5838]: CRED_ACQ pid=5838 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 28 01:21:05.131806 kernel: audit: type=1103 audit(1769563265.121:933): pid=5838 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 28 01:21:05.672151 sshd[5838]: Connection closed by 68.220.241.50 port 39356 Jan 28 01:21:05.672545 sshd-session[5833]: pam_unix(sshd:session): session closed for user core Jan 28 01:21:05.682953 kernel: audit: type=1106 audit(1769563265.674:934): pid=5833 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 28 01:21:05.674000 audit[5833]: USER_END pid=5833 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 28 01:21:05.688615 systemd[1]: sshd@24-172.31.31.26:22-68.220.241.50:39356.service: Deactivated successfully. 
Jan 28 01:21:05.674000 audit[5833]: CRED_DISP pid=5833 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 28 01:21:05.695911 kernel: audit: type=1104 audit(1769563265.674:935): pid=5833 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 28 01:21:05.689000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-172.31.31.26:22-68.220.241.50:39356 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:21:05.700012 systemd[1]: session-26.scope: Deactivated successfully. Jan 28 01:21:05.707525 systemd-logind[1833]: Session 26 logged out. Waiting for processes to exit. Jan 28 01:21:05.711146 systemd-logind[1833]: Removed session 26. Jan 28 01:21:06.205926 containerd[1849]: time="2026-01-28T01:21:06.205872586Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 28 01:21:06.492780 containerd[1849]: time="2026-01-28T01:21:06.492626871Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:21:06.494885 containerd[1849]: time="2026-01-28T01:21:06.494797836Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 28 01:21:06.495031 containerd[1849]: time="2026-01-28T01:21:06.494855541Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 28 01:21:06.495207 kubelet[3202]: E0128 01:21:06.495163 3202 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:21:06.495631 kubelet[3202]: E0128 01:21:06.495224 3202 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:21:06.495631 kubelet[3202]: E0128 01:21:06.495474 3202 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9vv22,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-74bc45499-4hxx4_calico-apiserver(325cd625-25dd-4d22-8523-67e469e6d0e9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 28 01:21:06.496832 kubelet[3202]: E0128 01:21:06.496794 3202 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74bc45499-4hxx4" podUID="325cd625-25dd-4d22-8523-67e469e6d0e9" Jan 28 01:21:06.497410 containerd[1849]: time="2026-01-28T01:21:06.497376477Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 28 01:21:06.776898 containerd[1849]: time="2026-01-28T01:21:06.776614342Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:21:06.779030 containerd[1849]: time="2026-01-28T01:21:06.778871874Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 28 01:21:06.779443 containerd[1849]: time="2026-01-28T01:21:06.779239307Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 28 01:21:06.782777 kubelet[3202]: E0128 
01:21:06.781877 3202 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 28 01:21:06.782777 kubelet[3202]: E0128 01:21:06.781951 3202 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 28 01:21:06.782777 kubelet[3202]: E0128 01:21:06.782129 3202 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:e0b05c5628ca455b907d3d9bf2c70df6,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6pdxw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-799d7c8d49-2vrfp_calico-system(9accd949-c4fa-4ce7-b3a8-524deb448a06): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 28 01:21:06.787985 containerd[1849]: time="2026-01-28T01:21:06.787517584Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 28 01:21:07.193948 containerd[1849]: time="2026-01-28T01:21:07.193832513Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:21:07.196342 containerd[1849]: time="2026-01-28T01:21:07.196199738Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 28 01:21:07.196342 containerd[1849]: time="2026-01-28T01:21:07.196310202Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 28 01:21:07.196889 kubelet[3202]: E0128 01:21:07.196734 3202 log.go:32] "PullImage from image service failed" 
err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 28 01:21:07.196889 kubelet[3202]: E0128 01:21:07.196815 3202 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 28 01:21:07.197815 kubelet[3202]: E0128 01:21:07.197372 3202 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6pdxw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-799d7c8d49-2vrfp_calico-system(9accd949-c4fa-4ce7-b3a8-524deb448a06): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 28 01:21:07.199149 kubelet[3202]: E0128 01:21:07.199084 3202 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: 
not found\"]" pod="calico-system/whisker-799d7c8d49-2vrfp" podUID="9accd949-c4fa-4ce7-b3a8-524deb448a06" Jan 28 01:21:09.205015 containerd[1849]: time="2026-01-28T01:21:09.203971055Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 28 01:21:09.457050 containerd[1849]: time="2026-01-28T01:21:09.456050395Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:21:09.459165 containerd[1849]: time="2026-01-28T01:21:09.459039076Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 28 01:21:09.459165 containerd[1849]: time="2026-01-28T01:21:09.459138328Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 28 01:21:09.459700 kubelet[3202]: E0128 01:21:09.459472 3202 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 28 01:21:09.459700 kubelet[3202]: E0128 01:21:09.459520 3202 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 28 01:21:09.461577 kubelet[3202]: E0128 01:21:09.460131 3202 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2875x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-dbbbff994-sqgh4_calico-system(c988f11c-6f16-4100-9308-ea1983457126): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 28 01:21:09.461836 kubelet[3202]: E0128 01:21:09.461807 3202 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-dbbbff994-sqgh4" podUID="c988f11c-6f16-4100-9308-ea1983457126" Jan 28 01:21:11.202687 containerd[1849]: time="2026-01-28T01:21:11.202630555Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 28 01:21:11.572613 containerd[1849]: time="2026-01-28T01:21:11.572399522Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:21:11.574585 containerd[1849]: time="2026-01-28T01:21:11.574513330Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 28 01:21:11.574898 containerd[1849]: time="2026-01-28T01:21:11.574556748Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 28 01:21:11.574949 kubelet[3202]: E0128 01:21:11.574829 3202 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 28 01:21:11.574949 kubelet[3202]: E0128 01:21:11.574900 3202 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 28 01:21:11.575378 kubelet[3202]: E0128 01:21:11.575135 3202 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4bxdn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-5fjx2_calico-system(6c6224c1-45a4-4e67-9483-34412dd5913e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 28 01:21:11.590936 containerd[1849]: time="2026-01-28T01:21:11.590884631Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 28 01:21:11.973514 containerd[1849]: time="2026-01-28T01:21:11.973465138Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:21:11.975800 containerd[1849]: time="2026-01-28T01:21:11.975720752Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 28 01:21:11.976179 containerd[1849]: time="2026-01-28T01:21:11.975845207Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 28 01:21:11.976230 kubelet[3202]: E0128 01:21:11.976062 3202 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 28 01:21:11.976230 kubelet[3202]: E0128 01:21:11.976135 3202 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": 
failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 28 01:21:11.976307 kubelet[3202]: E0128 01:21:11.976266 3202 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4bxdn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-5fjx2_calico-system(6c6224c1-45a4-4e67-9483-34412dd5913e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 28 01:21:11.977490 kubelet[3202]: E0128 01:21:11.977444 3202 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-5fjx2" podUID="6c6224c1-45a4-4e67-9483-34412dd5913e" Jan 28 01:21:12.203816 kubelet[3202]: E0128 01:21:12.203669 3202 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc 
= failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-pcsbp" podUID="cbd1002c-7263-43fc-8b17-789e41b44261" Jan 28 01:21:13.203058 kubelet[3202]: E0128 01:21:13.203018 3202 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74bc45499-2znnx" podUID="d6acbfd4-88d6-4133-9434-3bfec2c327d4" Jan 28 01:21:13.761351 update_engine[1835]: I20260128 01:21:13.761262 1835 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 28 01:21:13.761351 update_engine[1835]: I20260128 01:21:13.761363 1835 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 28 01:21:13.761845 update_engine[1835]: I20260128 01:21:13.761816 1835 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 28 01:21:13.763175 update_engine[1835]: E20260128 01:21:13.762892 1835 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Jan 28 01:21:13.763175 update_engine[1835]: I20260128 01:21:13.762979 1835 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Jan 28 01:21:17.203318 containerd[1849]: time="2026-01-28T01:21:17.203269191Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 28 01:21:17.513433 containerd[1849]: time="2026-01-28T01:21:17.513307288Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:21:17.515587 containerd[1849]: time="2026-01-28T01:21:17.515535020Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 28 01:21:17.515891 containerd[1849]: time="2026-01-28T01:21:17.515556437Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 28 01:21:17.515928 kubelet[3202]: E0128 01:21:17.515769 3202 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:21:17.515928 kubelet[3202]: E0128 01:21:17.515831 3202 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:21:17.516228 kubelet[3202]: E0128 01:21:17.515974 3202 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5pfgn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-597546f57d-dtd92_calico-apiserver(77c5b13e-b6c7-4da4-8d69-cf95701836c8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 28 01:21:17.517222 kubelet[3202]: E0128 01:21:17.517178 3202 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-597546f57d-dtd92" podUID="77c5b13e-b6c7-4da4-8d69-cf95701836c8" Jan 28 01:21:18.203377 kubelet[3202]: E0128 01:21:18.203245 3202 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74bc45499-4hxx4" podUID="325cd625-25dd-4d22-8523-67e469e6d0e9" Jan 28 01:21:19.865703 kubelet[3202]: E0128 01:21:19.865639 3202 controller.go:195] "Failed to update lease" err="Put \"https://172.31.31.26:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-31-26?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while 
awaiting headers)" Jan 28 01:21:20.683170 systemd[1]: cri-containerd-ea9363c3d913de59a074bb70af6d08931c92bdabf737955310fb0d2a3ddd64f7.scope: Deactivated successfully. Jan 28 01:21:20.684215 systemd[1]: cri-containerd-ea9363c3d913de59a074bb70af6d08931c92bdabf737955310fb0d2a3ddd64f7.scope: Consumed 12.390s CPU time, 114.8M memory peak, 45.6M read from disk. Jan 28 01:21:20.686000 audit: BPF prog-id=153 op=UNLOAD Jan 28 01:21:20.688915 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 28 01:21:20.688964 kernel: audit: type=1334 audit(1769563280.686:937): prog-id=153 op=UNLOAD Jan 28 01:21:20.691290 kernel: audit: type=1334 audit(1769563280.686:938): prog-id=157 op=UNLOAD Jan 28 01:21:20.686000 audit: BPF prog-id=157 op=UNLOAD Jan 28 01:21:20.822547 containerd[1849]: time="2026-01-28T01:21:20.822472242Z" level=info msg="received container exit event container_id:\"ea9363c3d913de59a074bb70af6d08931c92bdabf737955310fb0d2a3ddd64f7\" id:\"ea9363c3d913de59a074bb70af6d08931c92bdabf737955310fb0d2a3ddd64f7\" pid:3629 exit_status:1 exited_at:{seconds:1769563280 nanos:730865235}" Jan 28 01:21:20.896513 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ea9363c3d913de59a074bb70af6d08931c92bdabf737955310fb0d2a3ddd64f7-rootfs.mount: Deactivated successfully. Jan 28 01:21:21.041488 kubelet[3202]: I0128 01:21:21.041378 3202 scope.go:117] "RemoveContainer" containerID="ea9363c3d913de59a074bb70af6d08931c92bdabf737955310fb0d2a3ddd64f7" Jan 28 01:21:21.063445 containerd[1849]: time="2026-01-28T01:21:21.063404973Z" level=info msg="CreateContainer within sandbox \"bc3c4103fc03007385005856119110a170a4caa740f65784f0a2408de35c46a0\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Jan 28 01:21:21.107028 containerd[1849]: time="2026-01-28T01:21:21.106957443Z" level=info msg="Container 6531d885b8180ba88cf3c6eac9bb9140719668abe46e20d9745c82a20d8255a4: CDI devices from CRI Config.CDIDevices: []" Jan 28 01:21:21.123525 containerd[1849]: time="2026-01-28T01:21:21.123480700Z" level=info msg="CreateContainer within sandbox \"bc3c4103fc03007385005856119110a170a4caa740f65784f0a2408de35c46a0\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"6531d885b8180ba88cf3c6eac9bb9140719668abe46e20d9745c82a20d8255a4\"" Jan 28 01:21:21.124078 containerd[1849]: time="2026-01-28T01:21:21.124049318Z" level=info msg="StartContainer for \"6531d885b8180ba88cf3c6eac9bb9140719668abe46e20d9745c82a20d8255a4\"" Jan 28 01:21:21.124972 containerd[1849]: time="2026-01-28T01:21:21.124930128Z" level=info msg="connecting to shim 6531d885b8180ba88cf3c6eac9bb9140719668abe46e20d9745c82a20d8255a4" address="unix:///run/containerd/s/7cf614ac5997902b155d896bdd3877b873629e33278dec004eb366bf590293f8" protocol=ttrpc version=3 Jan 28 01:21:21.156101 systemd[1]: Started cri-containerd-6531d885b8180ba88cf3c6eac9bb9140719668abe46e20d9745c82a20d8255a4.scope - libcontainer container 6531d885b8180ba88cf3c6eac9bb9140719668abe46e20d9745c82a20d8255a4. 
Jan 28 01:21:21.188000 audit: BPF prog-id=268 op=LOAD Jan 28 01:21:21.191865 kernel: audit: type=1334 audit(1769563281.188:939): prog-id=268 op=LOAD Jan 28 01:21:21.191960 kernel: audit: type=1334 audit(1769563281.190:940): prog-id=269 op=LOAD Jan 28 01:21:21.190000 audit: BPF prog-id=269 op=LOAD Jan 28 01:21:21.199998 kernel: audit: type=1300 audit(1769563281.190:940): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=3407 pid=5871 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:21:21.200130 kernel: audit: type=1327 audit(1769563281.190:940): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3635333164383835623831383062613838636633633665616339626239 Jan 28 01:21:21.190000 audit[5871]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=3407 pid=5871 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:21:21.190000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3635333164383835623831383062613838636633633665616339626239 Jan 28 01:21:21.190000 audit: BPF prog-id=269 op=UNLOAD Jan 28 01:21:21.190000 audit[5871]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3407 pid=5871 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:21:21.221259 kernel: audit: type=1334 audit(1769563281.190:941): prog-id=269 op=UNLOAD Jan 28 01:21:21.221363 kernel: audit: type=1300 audit(1769563281.190:941): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3407 pid=5871 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:21:21.190000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3635333164383835623831383062613838636633633665616339626239 Jan 28 01:21:21.231820 kernel: audit: type=1327 audit(1769563281.190:941): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3635333164383835623831383062613838636633633665616339626239 Jan 28 01:21:21.206000 audit: BPF prog-id=270 op=LOAD Jan 28 01:21:21.238841 kernel: audit: type=1334 audit(1769563281.206:942): prog-id=270 op=LOAD Jan 28 01:21:21.206000 audit[5871]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3407 pid=5871 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 
28 01:21:21.206000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3635333164383835623831383062613838636633633665616339626239 Jan 28 01:21:21.207000 audit: BPF prog-id=271 op=LOAD Jan 28 01:21:21.207000 audit[5871]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3407 pid=5871 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:21:21.207000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3635333164383835623831383062613838636633633665616339626239 Jan 28 01:21:21.207000 audit: BPF prog-id=271 op=UNLOAD Jan 28 01:21:21.207000 audit[5871]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3407 pid=5871 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:21:21.207000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3635333164383835623831383062613838636633633665616339626239 Jan 28 01:21:21.208000 audit: BPF prog-id=270 op=UNLOAD Jan 28 01:21:21.208000 audit[5871]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3407 pid=5871 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:21:21.208000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3635333164383835623831383062613838636633633665616339626239 Jan 28 01:21:21.208000 audit: BPF prog-id=272 op=LOAD Jan 28 01:21:21.208000 audit[5871]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3407 pid=5871 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:21:21.208000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3635333164383835623831383062613838636633633665616339626239 Jan 28 01:21:21.263368 systemd[1]: cri-containerd-75ae5586186a29acb3fa5face99767761926623b2b06f45e8818c6fd2915b38c.scope: Deactivated successfully. Jan 28 01:21:21.263778 systemd[1]: cri-containerd-75ae5586186a29acb3fa5face99767761926623b2b06f45e8818c6fd2915b38c.scope: Consumed 5.088s CPU time, 105.7M memory peak, 95.7M read from disk. 
Jan 28 01:21:21.263000 audit: BPF prog-id=273 op=LOAD Jan 28 01:21:21.263000 audit: BPF prog-id=95 op=UNLOAD Jan 28 01:21:21.265000 audit: BPF prog-id=105 op=UNLOAD Jan 28 01:21:21.265000 audit: BPF prog-id=110 op=UNLOAD Jan 28 01:21:21.267031 containerd[1849]: time="2026-01-28T01:21:21.266992981Z" level=info msg="received container exit event container_id:\"75ae5586186a29acb3fa5face99767761926623b2b06f45e8818c6fd2915b38c\" id:\"75ae5586186a29acb3fa5face99767761926623b2b06f45e8818c6fd2915b38c\" pid:3010 exit_status:1 exited_at:{seconds:1769563281 nanos:266517369}" Jan 28 01:21:21.274796 containerd[1849]: time="2026-01-28T01:21:21.274700014Z" level=info msg="StartContainer for \"6531d885b8180ba88cf3c6eac9bb9140719668abe46e20d9745c82a20d8255a4\" returns successfully" Jan 28 01:21:21.311162 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-75ae5586186a29acb3fa5face99767761926623b2b06f45e8818c6fd2915b38c-rootfs.mount: Deactivated successfully. Jan 28 01:21:22.043975 kubelet[3202]: I0128 01:21:22.043930 3202 scope.go:117] "RemoveContainer" containerID="75ae5586186a29acb3fa5face99767761926623b2b06f45e8818c6fd2915b38c" Jan 28 01:21:22.046608 containerd[1849]: time="2026-01-28T01:21:22.046559130Z" level=info msg="CreateContainer within sandbox \"be908f3cd1b03f482cffcf3092c90d5054ebe13ef117b1dbc6e66ca5d848c588\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Jan 28 01:21:22.077768 containerd[1849]: time="2026-01-28T01:21:22.073637976Z" level=info msg="Container a5a7bab369a7a3718ebde927aab19a279e434bb6ec5bb50043f40b5a24cf9faa: CDI devices from CRI Config.CDIDevices: []" Jan 28 01:21:22.076169 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2793846365.mount: Deactivated successfully. Jan 28 01:21:22.081459 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2752456127.mount: Deactivated successfully. Jan 28 01:21:22.092770 containerd[1849]: time="2026-01-28T01:21:22.092636705Z" level=info msg="CreateContainer within sandbox \"be908f3cd1b03f482cffcf3092c90d5054ebe13ef117b1dbc6e66ca5d848c588\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"a5a7bab369a7a3718ebde927aab19a279e434bb6ec5bb50043f40b5a24cf9faa\"" Jan 28 01:21:22.093357 containerd[1849]: time="2026-01-28T01:21:22.093307361Z" level=info msg="StartContainer for \"a5a7bab369a7a3718ebde927aab19a279e434bb6ec5bb50043f40b5a24cf9faa\"" Jan 28 01:21:22.094777 containerd[1849]: time="2026-01-28T01:21:22.094569001Z" level=info msg="connecting to shim a5a7bab369a7a3718ebde927aab19a279e434bb6ec5bb50043f40b5a24cf9faa" address="unix:///run/containerd/s/78f097f1733ab74090530c028c3792fcf76c86e2718ec9bb0c9e00ce327e11f8" protocol=ttrpc version=3 Jan 28 01:21:22.124025 systemd[1]: Started cri-containerd-a5a7bab369a7a3718ebde927aab19a279e434bb6ec5bb50043f40b5a24cf9faa.scope - libcontainer container a5a7bab369a7a3718ebde927aab19a279e434bb6ec5bb50043f40b5a24cf9faa. 
Jan 28 01:21:22.142000 audit: BPF prog-id=274 op=LOAD Jan 28 01:21:22.142000 audit: BPF prog-id=275 op=LOAD Jan 28 01:21:22.142000 audit[5915]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2873 pid=5915 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:21:22.142000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6135613762616233363961376133373138656264653932376161623139 Jan 28 01:21:22.142000 audit: BPF prog-id=275 op=UNLOAD Jan 28 01:21:22.142000 audit[5915]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2873 pid=5915 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:21:22.142000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6135613762616233363961376133373138656264653932376161623139 Jan 28 01:21:22.143000 audit: BPF prog-id=276 op=LOAD Jan 28 01:21:22.143000 audit[5915]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2873 pid=5915 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:21:22.143000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6135613762616233363961376133373138656264653932376161623139 Jan 28 01:21:22.143000 audit: BPF prog-id=277 op=LOAD Jan 28 01:21:22.143000 audit[5915]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2873 pid=5915 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:21:22.143000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6135613762616233363961376133373138656264653932376161623139 Jan 28 01:21:22.143000 audit: BPF prog-id=277 op=UNLOAD Jan 28 01:21:22.143000 audit[5915]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2873 pid=5915 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:21:22.143000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6135613762616233363961376133373138656264653932376161623139 Jan 28 01:21:22.143000 audit: BPF prog-id=276 op=UNLOAD Jan 28 01:21:22.143000 audit[5915]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2873 pid=5915 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:21:22.143000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6135613762616233363961376133373138656264653932376161623139 Jan 28 01:21:22.143000 audit: BPF prog-id=278 op=LOAD Jan 28 01:21:22.143000 audit[5915]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2873 pid=5915 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:21:22.143000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6135613762616233363961376133373138656264653932376161623139 Jan 28 01:21:22.196589 containerd[1849]: time="2026-01-28T01:21:22.196536832Z" level=info msg="StartContainer for \"a5a7bab369a7a3718ebde927aab19a279e434bb6ec5bb50043f40b5a24cf9faa\" returns successfully" Jan 28 01:21:22.204375 kubelet[3202]: E0128 01:21:22.204296 3202 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-dbbbff994-sqgh4" podUID="c988f11c-6f16-4100-9308-ea1983457126" Jan 28 01:21:22.206013 kubelet[3202]: E0128 01:21:22.205967 3202 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-799d7c8d49-2vrfp" podUID="9accd949-c4fa-4ce7-b3a8-524deb448a06" Jan 28 01:21:23.761680 update_engine[1835]: I20260128 01:21:23.761610 1835 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 28 01:21:23.762918 update_engine[1835]: I20260128 01:21:23.761713 1835 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 28 01:21:23.762918 update_engine[1835]: I20260128 01:21:23.762164 1835 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jan 28 01:21:23.763411 update_engine[1835]: E20260128 01:21:23.763254 1835 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Jan 28 01:21:23.763411 update_engine[1835]: I20260128 01:21:23.763348 1835 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Jan 28 01:21:24.204267 kubelet[3202]: E0128 01:21:24.204189 3202 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-5fjx2" podUID="6c6224c1-45a4-4e67-9483-34412dd5913e" Jan 28 01:21:24.204975 kubelet[3202]: E0128 01:21:24.204274 3202 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-pcsbp" podUID="cbd1002c-7263-43fc-8b17-789e41b44261" Jan 28 01:21:24.602423 systemd[1]: cri-containerd-a613b8551af9b0fc562aaf9f77513d2176c42591cc38b6675f7e94c55d8a98de.scope: Deactivated successfully. Jan 28 01:21:24.603000 audit: BPF prog-id=279 op=LOAD Jan 28 01:21:24.603000 audit: BPF prog-id=100 op=UNLOAD Jan 28 01:21:24.603440 systemd[1]: cri-containerd-a613b8551af9b0fc562aaf9f77513d2176c42591cc38b6675f7e94c55d8a98de.scope: Consumed 2.967s CPU time, 39.3M memory peak, 38M read from disk. Jan 28 01:21:24.608502 containerd[1849]: time="2026-01-28T01:21:24.608463979Z" level=info msg="received container exit event container_id:\"a613b8551af9b0fc562aaf9f77513d2176c42591cc38b6675f7e94c55d8a98de\" id:\"a613b8551af9b0fc562aaf9f77513d2176c42591cc38b6675f7e94c55d8a98de\" pid:3044 exit_status:1 exited_at:{seconds:1769563284 nanos:608083784}" Jan 28 01:21:24.609000 audit: BPF prog-id=115 op=UNLOAD Jan 28 01:21:24.609000 audit: BPF prog-id=119 op=UNLOAD Jan 28 01:21:24.646520 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a613b8551af9b0fc562aaf9f77513d2176c42591cc38b6675f7e94c55d8a98de-rootfs.mount: Deactivated successfully. 
Jan 28 01:21:25.055596 kubelet[3202]: I0128 01:21:25.055569 3202 scope.go:117] "RemoveContainer" containerID="a613b8551af9b0fc562aaf9f77513d2176c42591cc38b6675f7e94c55d8a98de" Jan 28 01:21:25.061683 containerd[1849]: time="2026-01-28T01:21:25.061625128Z" level=info msg="CreateContainer within sandbox \"5c697e7fc710018cecb07cd6594cde0b67adc367f10f885baf1728295be6bb8f\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Jan 28 01:21:25.080074 containerd[1849]: time="2026-01-28T01:21:25.078309622Z" level=info msg="Container fa61bc7b1863d0584a8dddc005db8c7b141523093cc074dbed448b4af13cd1d1: CDI devices from CRI Config.CDIDevices: []" Jan 28 01:21:25.093403 containerd[1849]: time="2026-01-28T01:21:25.093353604Z" level=info msg="CreateContainer within sandbox \"5c697e7fc710018cecb07cd6594cde0b67adc367f10f885baf1728295be6bb8f\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"fa61bc7b1863d0584a8dddc005db8c7b141523093cc074dbed448b4af13cd1d1\"" Jan 28 01:21:25.093890 containerd[1849]: time="2026-01-28T01:21:25.093845965Z" level=info msg="StartContainer for \"fa61bc7b1863d0584a8dddc005db8c7b141523093cc074dbed448b4af13cd1d1\"" Jan 28 01:21:25.094922 containerd[1849]: time="2026-01-28T01:21:25.094898003Z" level=info msg="connecting to shim fa61bc7b1863d0584a8dddc005db8c7b141523093cc074dbed448b4af13cd1d1" address="unix:///run/containerd/s/78dc605dbf2df4b37b9dbd034e5985dc3fa3a0b1bbc6a9938b52f9f988d1c149" protocol=ttrpc version=3 Jan 28 01:21:25.118009 systemd[1]: Started cri-containerd-fa61bc7b1863d0584a8dddc005db8c7b141523093cc074dbed448b4af13cd1d1.scope - libcontainer container fa61bc7b1863d0584a8dddc005db8c7b141523093cc074dbed448b4af13cd1d1. Jan 28 01:21:25.130000 audit: BPF prog-id=280 op=LOAD Jan 28 01:21:25.131000 audit: BPF prog-id=281 op=LOAD Jan 28 01:21:25.131000 audit[5960]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000138238 a2=98 a3=0 items=0 ppid=2869 pid=5960 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:21:25.131000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6661363162633762313836336430353834613864646463303035646238 Jan 28 01:21:25.131000 audit: BPF prog-id=281 op=UNLOAD Jan 28 01:21:25.131000 audit[5960]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2869 pid=5960 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:21:25.131000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6661363162633762313836336430353834613864646463303035646238 Jan 28 01:21:25.131000 audit: BPF prog-id=282 op=LOAD Jan 28 01:21:25.131000 audit[5960]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000138488 a2=98 a3=0 items=0 ppid=2869 pid=5960 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:21:25.131000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6661363162633762313836336430353834613864646463303035646238 Jan 28 01:21:25.131000 audit: BPF prog-id=283 op=LOAD Jan 28 01:21:25.131000 audit[5960]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000138218 a2=98 a3=0 items=0 ppid=2869 pid=5960 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:21:25.131000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6661363162633762313836336430353834613864646463303035646238 Jan 28 01:21:25.131000 audit: BPF prog-id=283 op=UNLOAD Jan 28 01:21:25.131000 audit[5960]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2869 pid=5960 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:21:25.131000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6661363162633762313836336430353834613864646463303035646238 Jan 28 01:21:25.131000 audit: BPF prog-id=282 op=UNLOAD Jan 28 01:21:25.131000 audit[5960]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2869 pid=5960 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:21:25.131000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6661363162633762313836336430353834613864646463303035646238 Jan 28 01:21:25.131000 audit: BPF prog-id=284 op=LOAD Jan 28 01:21:25.131000 audit[5960]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001386e8 a2=98 a3=0 items=0 ppid=2869 pid=5960 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:21:25.131000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6661363162633762313836336430353834613864646463303035646238 Jan 28 01:21:25.170972 containerd[1849]: time="2026-01-28T01:21:25.170930033Z" level=info msg="StartContainer for \"fa61bc7b1863d0584a8dddc005db8c7b141523093cc074dbed448b4af13cd1d1\" returns successfully" Jan 28 01:21:28.202663 kubelet[3202]: E0128 01:21:28.202598 3202 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74bc45499-2znnx" podUID="d6acbfd4-88d6-4133-9434-3bfec2c327d4" Jan 28 01:21:29.202342 kubelet[3202]: E0128 01:21:29.202300 3202 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-597546f57d-dtd92" podUID="77c5b13e-b6c7-4da4-8d69-cf95701836c8" Jan 28 01:21:29.866680 kubelet[3202]: E0128 01:21:29.866278 3202 controller.go:195] "Failed to update lease" err="Put \"https://172.31.31.26:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-31-26?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 28 01:21:32.210600 kubelet[3202]: E0128 01:21:32.210539 3202 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74bc45499-4hxx4" podUID="325cd625-25dd-4d22-8523-67e469e6d0e9" Jan 28 01:21:33.023772 systemd[1]: cri-containerd-6531d885b8180ba88cf3c6eac9bb9140719668abe46e20d9745c82a20d8255a4.scope: Deactivated successfully. Jan 28 01:21:33.024804 systemd[1]: cri-containerd-6531d885b8180ba88cf3c6eac9bb9140719668abe46e20d9745c82a20d8255a4.scope: Consumed 346ms CPU time, 71.6M memory peak, 32.5M read from disk. Jan 28 01:21:33.025487 containerd[1849]: time="2026-01-28T01:21:33.024984898Z" level=info msg="received container exit event container_id:\"6531d885b8180ba88cf3c6eac9bb9140719668abe46e20d9745c82a20d8255a4\" id:\"6531d885b8180ba88cf3c6eac9bb9140719668abe46e20d9745c82a20d8255a4\" pid:5884 exit_status:1 exited_at:{seconds:1769563293 nanos:24357525}" Jan 28 01:21:33.030169 kernel: kauditd_printk_skb: 66 callbacks suppressed Jan 28 01:21:33.030275 kernel: audit: type=1334 audit(1769563293.026:971): prog-id=268 op=UNLOAD Jan 28 01:21:33.026000 audit: BPF prog-id=268 op=UNLOAD Jan 28 01:21:33.032265 kernel: audit: type=1334 audit(1769563293.026:972): prog-id=272 op=UNLOAD Jan 28 01:21:33.026000 audit: BPF prog-id=272 op=UNLOAD Jan 28 01:21:33.058354 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6531d885b8180ba88cf3c6eac9bb9140719668abe46e20d9745c82a20d8255a4-rootfs.mount: Deactivated successfully. Jan 28 01:21:33.761067 update_engine[1835]: I20260128 01:21:33.760993 1835 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 28 01:21:33.761512 update_engine[1835]: I20260128 01:21:33.761085 1835 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 28 01:21:33.761512 update_engine[1835]: I20260128 01:21:33.761423 1835 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jan 28 01:21:33.762509 update_engine[1835]: E20260128 01:21:33.762466 1835 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Jan 28 01:21:33.762615 update_engine[1835]: I20260128 01:21:33.762540 1835 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 28 01:21:33.762615 update_engine[1835]: I20260128 01:21:33.762550 1835 omaha_request_action.cc:617] Omaha request response: Jan 28 01:21:33.762672 update_engine[1835]: E20260128 01:21:33.762617 1835 omaha_request_action.cc:636] Omaha request network transfer failed. Jan 28 01:21:33.762672 update_engine[1835]: I20260128 01:21:33.762642 1835 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Jan 28 01:21:33.762672 update_engine[1835]: I20260128 01:21:33.762647 1835 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 28 01:21:33.762672 update_engine[1835]: I20260128 01:21:33.762652 1835 update_attempter.cc:306] Processing Done. Jan 28 01:21:33.762672 update_engine[1835]: E20260128 01:21:33.762665 1835 update_attempter.cc:619] Update failed. Jan 28 01:21:33.762672 update_engine[1835]: I20260128 01:21:33.762670 1835 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Jan 28 01:21:33.762829 update_engine[1835]: I20260128 01:21:33.762676 1835 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Jan 28 01:21:33.762829 update_engine[1835]: I20260128 01:21:33.762681 1835 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Jan 28 01:21:33.762829 update_engine[1835]: I20260128 01:21:33.762763 1835 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 28 01:21:33.762829 update_engine[1835]: I20260128 01:21:33.762784 1835 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 28 01:21:33.762829 update_engine[1835]: I20260128 01:21:33.762789 1835 omaha_request_action.cc:272] Request: Jan 28 01:21:33.762829 update_engine[1835]: I20260128 01:21:33.762794 1835 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 28 01:21:33.762829 update_engine[1835]: I20260128 01:21:33.762812 1835 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 28 01:21:33.763114 update_engine[1835]: I20260128 01:21:33.763062 1835 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jan 28 01:21:33.763482 locksmithd[1905]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Jan 28 01:21:33.764013 update_engine[1835]: E20260128 01:21:33.763984 1835 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Jan 28 01:21:33.764064 update_engine[1835]: I20260128 01:21:33.764043 1835 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 28 01:21:33.764064 update_engine[1835]: I20260128 01:21:33.764051 1835 omaha_request_action.cc:617] Omaha request response: Jan 28 01:21:33.764064 update_engine[1835]: I20260128 01:21:33.764057 1835 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 28 01:21:33.764064 update_engine[1835]: I20260128 01:21:33.764062 1835 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 28 01:21:33.764324 update_engine[1835]: I20260128 01:21:33.764067 1835 update_attempter.cc:306] Processing Done. Jan 28 01:21:33.764324 update_engine[1835]: I20260128 01:21:33.764072 1835 update_attempter.cc:310] Error event sent. Jan 28 01:21:33.764324 update_engine[1835]: I20260128 01:21:33.764091 1835 update_check_scheduler.cc:74] Next update check in 49m34s Jan 28 01:21:33.764409 locksmithd[1905]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Jan 28 01:21:34.092851 kubelet[3202]: I0128 01:21:34.092017 3202 scope.go:117] "RemoveContainer" containerID="ea9363c3d913de59a074bb70af6d08931c92bdabf737955310fb0d2a3ddd64f7" Jan 28 01:21:34.092851 kubelet[3202]: I0128 01:21:34.092428 3202 scope.go:117] "RemoveContainer" containerID="6531d885b8180ba88cf3c6eac9bb9140719668abe46e20d9745c82a20d8255a4" Jan 28 01:21:34.103153 kubelet[3202]: E0128 01:21:34.103073 3202 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=tigera-operator pod=tigera-operator-7dcd859c48-t2k5j_tigera-operator(46f4cc39-4770-42f7-a369-5e43b14a884c)\"" pod="tigera-operator/tigera-operator-7dcd859c48-t2k5j" podUID="46f4cc39-4770-42f7-a369-5e43b14a884c" Jan 28 01:21:34.204994 kubelet[3202]: E0128 01:21:34.204948 3202 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-dbbbff994-sqgh4" podUID="c988f11c-6f16-4100-9308-ea1983457126" Jan 28 01:21:34.206360 kubelet[3202]: E0128 01:21:34.205817 3202 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": 
ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-799d7c8d49-2vrfp" podUID="9accd949-c4fa-4ce7-b3a8-524deb448a06" Jan 28 01:21:34.216518 containerd[1849]: time="2026-01-28T01:21:34.216472068Z" level=info msg="RemoveContainer for \"ea9363c3d913de59a074bb70af6d08931c92bdabf737955310fb0d2a3ddd64f7\"" Jan 28 01:21:34.270857 containerd[1849]: time="2026-01-28T01:21:34.270800631Z" level=info msg="RemoveContainer for \"ea9363c3d913de59a074bb70af6d08931c92bdabf737955310fb0d2a3ddd64f7\" returns successfully" Jan 28 01:21:35.202787 kubelet[3202]: E0128 01:21:35.202721 3202 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-pcsbp" podUID="cbd1002c-7263-43fc-8b17-789e41b44261" Jan 28 01:21:38.205316 kubelet[3202]: E0128 01:21:38.205262 3202 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-5fjx2" podUID="6c6224c1-45a4-4e67-9483-34412dd5913e" Jan 28 01:21:39.876471 kubelet[3202]: E0128 01:21:39.876322 3202 controller.go:195] "Failed to update lease" err="Put \"https://172.31.31.26:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-31-26?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 28 01:21:40.202486 kubelet[3202]: E0128 01:21:40.202415 3202 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74bc45499-2znnx" podUID="d6acbfd4-88d6-4133-9434-3bfec2c327d4" Jan 28 01:21:42.202447 kubelet[3202]: E0128 01:21:42.202194 3202 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-597546f57d-dtd92" podUID="77c5b13e-b6c7-4da4-8d69-cf95701836c8" Jan 28 01:21:43.203042 kubelet[3202]: E0128 01:21:43.203001 3202 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74bc45499-4hxx4" podUID="325cd625-25dd-4d22-8523-67e469e6d0e9"