Jan 22 00:41:04.019727 kernel: Linux version 6.12.66-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Wed Jan 21 22:02:49 -00 2026
Jan 22 00:41:04.019765 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=2c7ce323fe43e7b63a59c25601f0c418cba5a1d902eeaa4bfcebc579e79e52d2
Jan 22 00:41:04.019784 kernel: BIOS-provided physical RAM map:
Jan 22 00:41:04.019811 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Jan 22 00:41:04.019822 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000786cdfff] usable
Jan 22 00:41:04.019834 kernel: BIOS-e820: [mem 0x00000000786ce000-0x000000007894dfff] reserved
Jan 22 00:41:04.019848 kernel: BIOS-e820: [mem 0x000000007894e000-0x000000007895dfff] ACPI data
Jan 22 00:41:04.019860 kernel: BIOS-e820: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS
Jan 22 00:41:04.019870 kernel: BIOS-e820: [mem 0x00000000789de000-0x000000007c97bfff] usable
Jan 22 00:41:04.019883 kernel: BIOS-e820: [mem 0x000000007c97c000-0x000000007c9fffff] reserved
Jan 22 00:41:04.019900 kernel: NX (Execute Disable) protection: active
Jan 22 00:41:04.019911 kernel: APIC: Static calls initialized
Jan 22 00:41:04.019924 kernel: e820: update [mem 0x768c0018-0x768c8e57] usable ==> usable
Jan 22 00:41:04.019937 kernel: extended physical RAM map:
Jan 22 00:41:04.019952 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Jan 22 00:41:04.019994 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000768c0017] usable
Jan 22 00:41:04.020008 kernel: reserve setup_data: [mem 0x00000000768c0018-0x00000000768c8e57] usable
Jan 22 00:41:04.020032 kernel: reserve setup_data: [mem 0x00000000768c8e58-0x00000000786cdfff] usable
Jan 22 00:41:04.020047 kernel: reserve setup_data: [mem 0x00000000786ce000-0x000000007894dfff] reserved
Jan 22 00:41:04.020061 kernel: reserve setup_data: [mem 0x000000007894e000-0x000000007895dfff] ACPI data
Jan 22 00:41:04.020075 kernel: reserve setup_data: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS
Jan 22 00:41:04.020088 kernel: reserve setup_data: [mem 0x00000000789de000-0x000000007c97bfff] usable
Jan 22 00:41:04.020185 kernel: reserve setup_data: [mem 0x000000007c97c000-0x000000007c9fffff] reserved
Jan 22 00:41:04.020194 kernel: efi: EFI v2.7 by EDK II
Jan 22 00:41:04.020202 kernel: efi: SMBIOS=0x7886a000 ACPI=0x7895d000 ACPI 2.0=0x7895d014 MEMATTR=0x77015518
Jan 22 00:41:04.020214 kernel: secureboot: Secure boot disabled
Jan 22 00:41:04.020222 kernel: SMBIOS 2.7 present.
Jan 22 00:41:04.020230 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017
Jan 22 00:41:04.020238 kernel: DMI: Memory slots populated: 1/1
Jan 22 00:41:04.020245 kernel: Hypervisor detected: KVM
Jan 22 00:41:04.020253 kernel: last_pfn = 0x7c97c max_arch_pfn = 0x400000000
Jan 22 00:41:04.020261 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 22 00:41:04.020269 kernel: kvm-clock: using sched offset of 6438943515 cycles
Jan 22 00:41:04.020278 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 22 00:41:04.020287 kernel: tsc: Detected 2499.998 MHz processor
Jan 22 00:41:04.020298 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 22 00:41:04.020307 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 22 00:41:04.020315 kernel: last_pfn = 0x7c97c max_arch_pfn = 0x400000000
Jan 22 00:41:04.020323 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Jan 22 00:41:04.020332 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 22 00:41:04.020344 kernel: Using GB pages for direct mapping
Jan 22 00:41:04.020355 kernel: ACPI: Early table checksum verification disabled
Jan 22 00:41:04.020364 kernel: ACPI: RSDP 0x000000007895D014 000024 (v02 AMAZON)
Jan 22 00:41:04.020373 kernel: ACPI: XSDT 0x000000007895C0E8 00006C (v01 AMAZON AMZNFACP 00000001 01000013)
Jan 22 00:41:04.020382 kernel: ACPI: FACP 0x0000000078955000 000114 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001)
Jan 22 00:41:04.020391 kernel: ACPI: DSDT 0x0000000078956000 00115A (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Jan 22 00:41:04.020402 kernel: ACPI: FACS 0x00000000789D0000 000040
Jan 22 00:41:04.020410 kernel: ACPI: WAET 0x000000007895B000 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001)
Jan 22 00:41:04.020419 kernel: ACPI: SLIT 0x000000007895A000 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Jan 22 00:41:04.020427 kernel: ACPI: APIC 0x0000000078959000 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Jan 22 00:41:04.020436 kernel: ACPI: SRAT 0x0000000078958000 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001)
Jan 22 00:41:04.020445 kernel: ACPI: HPET 0x0000000078954000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001)
Jan 22 00:41:04.020454 kernel: ACPI: SSDT 0x0000000078953000 000759 (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Jan 22 00:41:04.020464 kernel: ACPI: SSDT 0x0000000078952000 00007F (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Jan 22 00:41:04.020473 kernel: ACPI: BGRT 0x0000000078951000 000038 (v01 AMAZON AMAZON 00000002 01000013)
Jan 22 00:41:04.020482 kernel: ACPI: Reserving FACP table memory at [mem 0x78955000-0x78955113]
Jan 22 00:41:04.020491 kernel: ACPI: Reserving DSDT table memory at [mem 0x78956000-0x78957159]
Jan 22 00:41:04.020499 kernel: ACPI: Reserving FACS table memory at [mem 0x789d0000-0x789d003f]
Jan 22 00:41:04.020508 kernel: ACPI: Reserving WAET table memory at [mem 0x7895b000-0x7895b027]
Jan 22 00:41:04.020517 kernel: ACPI: Reserving SLIT table memory at [mem 0x7895a000-0x7895a06b]
Jan 22 00:41:04.020525 kernel: ACPI: Reserving APIC table memory at [mem 0x78959000-0x78959075]
Jan 22 00:41:04.020536 kernel: ACPI: Reserving SRAT table memory at [mem 0x78958000-0x7895809f]
Jan 22 00:41:04.020545 kernel: ACPI: Reserving HPET table memory at [mem 0x78954000-0x78954037]
Jan 22 00:41:04.020554 kernel: ACPI: Reserving SSDT table memory at [mem 0x78953000-0x78953758]
Jan 22 00:41:04.020562 kernel: ACPI: Reserving SSDT table memory at [mem 0x78952000-0x7895207e]
Jan 22 00:41:04.020571 kernel: ACPI: Reserving BGRT table memory at [mem 0x78951000-0x78951037]
Jan 22 00:41:04.020579 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff]
Jan 22 00:41:04.020588 kernel: NUMA: Initialized distance table, cnt=1
Jan 22 00:41:04.020599 kernel: NODE_DATA(0) allocated [mem 0x7a8eedc0-0x7a8f5fff]
Jan 22 00:41:04.020608 kernel: Zone ranges:
Jan 22 00:41:04.020616 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 22 00:41:04.020625 kernel: DMA32 [mem 0x0000000001000000-0x000000007c97bfff]
Jan 22 00:41:04.020634 kernel: Normal empty
Jan 22 00:41:04.020643 kernel: Device empty
Jan 22 00:41:04.020651 kernel: Movable zone start for each node
Jan 22 00:41:04.020660 kernel: Early memory node ranges
Jan 22 00:41:04.020670 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Jan 22 00:41:04.020679 kernel: node 0: [mem 0x0000000000100000-0x00000000786cdfff]
Jan 22 00:41:04.020688 kernel: node 0: [mem 0x00000000789de000-0x000000007c97bfff]
Jan 22 00:41:04.020696 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007c97bfff]
Jan 22 00:41:04.020705 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 22 00:41:04.020714 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Jan 22 00:41:04.020723 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Jan 22 00:41:04.020733 kernel: On node 0, zone DMA32: 13956 pages in unavailable ranges
Jan 22 00:41:04.020742 kernel: ACPI: PM-Timer IO Port: 0xb008
Jan 22 00:41:04.020751 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 22 00:41:04.020760 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23
Jan 22 00:41:04.020768 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 22 00:41:04.020777 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 22 00:41:04.020809 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 22 00:41:04.020823 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 22 00:41:04.020994 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 22 00:41:04.021003 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 22 00:41:04.021012 kernel: TSC deadline timer available
Jan 22 00:41:04.021021 kernel: CPU topo: Max. logical packages: 1
Jan 22 00:41:04.021030 kernel: CPU topo: Max. logical dies: 1
Jan 22 00:41:04.021038 kernel: CPU topo: Max. dies per package: 1
Jan 22 00:41:04.021047 kernel: CPU topo: Max. threads per core: 2
Jan 22 00:41:04.021059 kernel: CPU topo: Num. cores per package: 1
Jan 22 00:41:04.021067 kernel: CPU topo: Num. threads per package: 2
Jan 22 00:41:04.021076 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
Jan 22 00:41:04.021084 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 22 00:41:04.021093 kernel: [mem 0x7ca00000-0xffffffff] available for PCI devices
Jan 22 00:41:04.021102 kernel: Booting paravirtualized kernel on KVM
Jan 22 00:41:04.021111 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 22 00:41:04.021120 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jan 22 00:41:04.021131 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
Jan 22 00:41:04.021140 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
Jan 22 00:41:04.021148 kernel: pcpu-alloc: [0] 0 1
Jan 22 00:41:04.021157 kernel: kvm-guest: PV spinlocks enabled
Jan 22 00:41:04.021166 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 22 00:41:04.021177 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=2c7ce323fe43e7b63a59c25601f0c418cba5a1d902eeaa4bfcebc579e79e52d2
Jan 22 00:41:04.021189 kernel: random: crng init done
Jan 22 00:41:04.021197 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 22 00:41:04.021206 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 22 00:41:04.021215 kernel: Fallback order for Node 0: 0
Jan 22 00:41:04.021224 kernel: Built 1 zonelists, mobility grouping on. Total pages: 509451
Jan 22 00:41:04.021233 kernel: Policy zone: DMA32
Jan 22 00:41:04.021251 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 22 00:41:04.021261 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 22 00:41:04.021270 kernel: Kernel/User page tables isolation: enabled
Jan 22 00:41:04.021281 kernel: ftrace: allocating 40097 entries in 157 pages
Jan 22 00:41:04.021291 kernel: ftrace: allocated 157 pages with 5 groups
Jan 22 00:41:04.021300 kernel: Dynamic Preempt: voluntary
Jan 22 00:41:04.021309 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 22 00:41:04.021319 kernel: rcu: RCU event tracing is enabled.
Jan 22 00:41:04.021328 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 22 00:41:04.021338 kernel: Trampoline variant of Tasks RCU enabled.
Jan 22 00:41:04.021349 kernel: Rude variant of Tasks RCU enabled.
Jan 22 00:41:04.021359 kernel: Tracing variant of Tasks RCU enabled.
Jan 22 00:41:04.021376 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 22 00:41:04.021385 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 22 00:41:04.021397 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 22 00:41:04.021409 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 22 00:41:04.021418 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 22 00:41:04.021427 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Jan 22 00:41:04.021437 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 22 00:41:04.021446 kernel: Console: colour dummy device 80x25
Jan 22 00:41:04.021455 kernel: printk: legacy console [tty0] enabled
Jan 22 00:41:04.021464 kernel: printk: legacy console [ttyS0] enabled
Jan 22 00:41:04.021476 kernel: ACPI: Core revision 20240827
Jan 22 00:41:04.021485 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns
Jan 22 00:41:04.021495 kernel: APIC: Switch to symmetric I/O mode setup
Jan 22 00:41:04.021504 kernel: x2apic enabled
Jan 22 00:41:04.021513 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 22 00:41:04.021523 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns
Jan 22 00:41:04.021532 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499998)
Jan 22 00:41:04.021543 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Jan 22 00:41:04.021553 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
Jan 22 00:41:04.021562 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 22 00:41:04.021571 kernel: Spectre V2 : Mitigation: Retpolines
Jan 22 00:41:04.021579 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jan 22 00:41:04.021588 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Jan 22 00:41:04.021597 kernel: RETBleed: Vulnerable
Jan 22 00:41:04.021606 kernel: Speculative Store Bypass: Vulnerable
Jan 22 00:41:04.021615 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 22 00:41:04.021624 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 22 00:41:04.021635 kernel: GDS: Unknown: Dependent on hypervisor status
Jan 22 00:41:04.021644 kernel: active return thunk: its_return_thunk
Jan 22 00:41:04.021652 kernel: ITS: Mitigation: Aligned branch/return thunks
Jan 22 00:41:04.021661 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 22 00:41:04.021670 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 22 00:41:04.021679 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 22 00:41:04.021688 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Jan 22 00:41:04.021697 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Jan 22 00:41:04.021713 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Jan 22 00:41:04.021729 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Jan 22 00:41:04.021742 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Jan 22 00:41:04.021755 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Jan 22 00:41:04.021768 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 22 00:41:04.021782 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Jan 22 00:41:04.021817 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Jan 22 00:41:04.021843 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64
Jan 22 00:41:04.021858 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512
Jan 22 00:41:04.021867 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024
Jan 22 00:41:04.021876 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8
Jan 22 00:41:04.021885 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format.
Jan 22 00:41:04.021898 kernel: Freeing SMP alternatives memory: 32K
Jan 22 00:41:04.021907 kernel: pid_max: default: 32768 minimum: 301
Jan 22 00:41:04.021916 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jan 22 00:41:04.021925 kernel: landlock: Up and running.
Jan 22 00:41:04.021934 kernel: SELinux: Initializing.
Jan 22 00:41:04.021943 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 22 00:41:04.021952 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 22 00:41:04.021961 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8175M CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x4)
Jan 22 00:41:04.021970 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Jan 22 00:41:04.021979 kernel: signal: max sigframe size: 3632
Jan 22 00:41:04.022104 kernel: rcu: Hierarchical SRCU implementation.
Jan 22 00:41:04.022116 kernel: rcu: Max phase no-delay instances is 400.
Jan 22 00:41:04.022126 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jan 22 00:41:04.022135 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 22 00:41:04.022145 kernel: smp: Bringing up secondary CPUs ...
Jan 22 00:41:04.022154 kernel: smpboot: x86: Booting SMP configuration:
Jan 22 00:41:04.022164 kernel: .... node #0, CPUs: #1
Jan 22 00:41:04.022174 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Jan 22 00:41:04.022187 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Jan 22 00:41:04.022196 kernel: smp: Brought up 1 node, 2 CPUs
Jan 22 00:41:04.022206 kernel: smpboot: Total of 2 processors activated (9999.99 BogoMIPS)
Jan 22 00:41:04.022215 kernel: Memory: 1926484K/2037804K available (14336K kernel code, 2445K rwdata, 29896K rodata, 15436K init, 2604K bss, 106756K reserved, 0K cma-reserved)
Jan 22 00:41:04.022225 kernel: devtmpfs: initialized
Jan 22 00:41:04.022234 kernel: x86/mm: Memory block size: 128MB
Jan 22 00:41:04.022246 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7895e000-0x789ddfff] (524288 bytes)
Jan 22 00:41:04.022256 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 22 00:41:04.022265 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 22 00:41:04.022274 kernel: pinctrl core: initialized pinctrl subsystem
Jan 22 00:41:04.022284 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 22 00:41:04.022293 kernel: audit: initializing netlink subsys (disabled)
Jan 22 00:41:04.022303 kernel: audit: type=2000 audit(1769042459.251:1): state=initialized audit_enabled=0 res=1
Jan 22 00:41:04.022314 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 22 00:41:04.022324 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 22 00:41:04.022333 kernel: cpuidle: using governor menu
Jan 22 00:41:04.022342 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 22 00:41:04.022351 kernel: dca service started, version 1.12.1
Jan 22 00:41:04.022360 kernel: PCI: Using configuration type 1 for base access
Jan 22 00:41:04.022370 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 22 00:41:04.022381 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 22 00:41:04.022391 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 22 00:41:04.022400 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 22 00:41:04.022410 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 22 00:41:04.022419 kernel: ACPI: Added _OSI(Module Device)
Jan 22 00:41:04.022429 kernel: ACPI: Added _OSI(Processor Device)
Jan 22 00:41:04.022438 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 22 00:41:04.022449 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Jan 22 00:41:04.022458 kernel: ACPI: Interpreter enabled
Jan 22 00:41:04.022467 kernel: ACPI: PM: (supports S0 S5)
Jan 22 00:41:04.022477 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 22 00:41:04.022486 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 22 00:41:04.022495 kernel: PCI: Using E820 reservations for host bridge windows
Jan 22 00:41:04.022505 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Jan 22 00:41:04.022514 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 22 00:41:04.022752 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Jan 22 00:41:04.022938 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Jan 22 00:41:04.023078 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Jan 22 00:41:04.023090 kernel: acpiphp: Slot [3] registered
Jan 22 00:41:04.023100 kernel: acpiphp: Slot [4] registered
Jan 22 00:41:04.023114 kernel: acpiphp: Slot [5] registered
Jan 22 00:41:04.023123 kernel: acpiphp: Slot [6] registered
Jan 22 00:41:04.023132 kernel: acpiphp: Slot [7] registered
Jan 22 00:41:04.023141 kernel: acpiphp: Slot [8] registered
Jan 22 00:41:04.023151 kernel: acpiphp: Slot [9] registered
Jan 22 00:41:04.023160 kernel: acpiphp: Slot [10] registered
Jan 22 00:41:04.023170 kernel: acpiphp: Slot [11] registered
Jan 22 00:41:04.023179 kernel: acpiphp: Slot [12] registered
Jan 22 00:41:04.023190 kernel: acpiphp: Slot [13] registered
Jan 22 00:41:04.023200 kernel: acpiphp: Slot [14] registered
Jan 22 00:41:04.023209 kernel: acpiphp: Slot [15] registered
Jan 22 00:41:04.023219 kernel: acpiphp: Slot [16] registered
Jan 22 00:41:04.023228 kernel: acpiphp: Slot [17] registered
Jan 22 00:41:04.023237 kernel: acpiphp: Slot [18] registered
Jan 22 00:41:04.023246 kernel: acpiphp: Slot [19] registered
Jan 22 00:41:04.023258 kernel: acpiphp: Slot [20] registered
Jan 22 00:41:04.023267 kernel: acpiphp: Slot [21] registered
Jan 22 00:41:04.023277 kernel: acpiphp: Slot [22] registered
Jan 22 00:41:04.023286 kernel: acpiphp: Slot [23] registered
Jan 22 00:41:04.023295 kernel: acpiphp: Slot [24] registered
Jan 22 00:41:04.023304 kernel: acpiphp: Slot [25] registered
Jan 22 00:41:04.023313 kernel: acpiphp: Slot [26] registered
Jan 22 00:41:04.023323 kernel: acpiphp: Slot [27] registered
Jan 22 00:41:04.023334 kernel: acpiphp: Slot [28] registered
Jan 22 00:41:04.023344 kernel: acpiphp: Slot [29] registered
Jan 22 00:41:04.023353 kernel: acpiphp: Slot [30] registered
Jan 22 00:41:04.023362 kernel: acpiphp: Slot [31] registered
Jan 22 00:41:04.023371 kernel: PCI host bridge to bus 0000:00
Jan 22 00:41:04.023496 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 22 00:41:04.023612 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 22 00:41:04.023722 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 22 00:41:04.023867 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Jan 22 00:41:04.023977 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x2000ffffffff window]
Jan 22 00:41:04.024086 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 22 00:41:04.024223 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Jan 22 00:41:04.024356 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Jan 22 00:41:04.024485 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000 conventional PCI endpoint
Jan 22 00:41:04.024610 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Jan 22 00:41:04.024735 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff
Jan 22 00:41:04.024901 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff
Jan 22 00:41:04.025028 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff
Jan 22 00:41:04.025149 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff
Jan 22 00:41:04.025271 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff
Jan 22 00:41:04.025411 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff
Jan 22 00:41:04.025725 kernel: pci 0000:00:01.3: quirk_piix4_acpi+0x0/0x180 took 11718 usecs
Jan 22 00:41:04.025949 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000 conventional PCI endpoint
Jan 22 00:41:04.026087 kernel: pci 0000:00:03.0: BAR 0 [mem 0x80000000-0x803fffff pref]
Jan 22 00:41:04.026211 kernel: pci 0000:00:03.0: ROM [mem 0xffff0000-0xffffffff pref]
Jan 22 00:41:04.026334 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 22 00:41:04.026474 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 PCIe Endpoint
Jan 22 00:41:04.026596 kernel: pci 0000:00:04.0: BAR 0 [mem 0x80404000-0x80407fff]
Jan 22 00:41:04.026726 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 PCIe Endpoint
Jan 22 00:41:04.026898 kernel: pci 0000:00:05.0: BAR 0 [mem 0x80400000-0x80403fff]
Jan 22 00:41:04.026912 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 22 00:41:04.026922 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 22 00:41:04.026932 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 22 00:41:04.026942 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 22 00:41:04.026951 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jan 22 00:41:04.026964 kernel: iommu: Default domain type: Translated
Jan 22 00:41:04.026974 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 22 00:41:04.026983 kernel: efivars: Registered efivars operations
Jan 22 00:41:04.026993 kernel: PCI: Using ACPI for IRQ routing
Jan 22 00:41:04.027002 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 22 00:41:04.027012 kernel: e820: reserve RAM buffer [mem 0x768c0018-0x77ffffff]
Jan 22 00:41:04.027022 kernel: e820: reserve RAM buffer [mem 0x786ce000-0x7bffffff]
Jan 22 00:41:04.027034 kernel: e820: reserve RAM buffer [mem 0x7c97c000-0x7fffffff]
Jan 22 00:41:04.027154 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device
Jan 22 00:41:04.027283 kernel: pci 0000:00:03.0: vgaarb: bridge control possible
Jan 22 00:41:04.027405 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 22 00:41:04.027418 kernel: vgaarb: loaded
Jan 22 00:41:04.027428 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0
Jan 22 00:41:04.027437 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter
Jan 22 00:41:04.027450 kernel: clocksource: Switched to clocksource kvm-clock
Jan 22 00:41:04.027459 kernel: VFS: Disk quotas dquot_6.6.0
Jan 22 00:41:04.027469 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 22 00:41:04.027478 kernel: pnp: PnP ACPI init
Jan 22 00:41:04.027488 kernel: pnp: PnP ACPI: found 5 devices
Jan 22 00:41:04.027498 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 22 00:41:04.027508 kernel: NET: Registered PF_INET protocol family
Jan 22 00:41:04.027520 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 22 00:41:04.027530 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Jan 22 00:41:04.027539 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 22 00:41:04.027549 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 22 00:41:04.027558 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Jan 22 00:41:04.027568 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Jan 22 00:41:04.027578 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 22 00:41:04.027590 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 22 00:41:04.027600 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 22 00:41:04.027609 kernel: NET: Registered PF_XDP protocol family
Jan 22 00:41:04.027725 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 22 00:41:04.027850 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 22 00:41:04.027962 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 22 00:41:04.028072 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Jan 22 00:41:04.028185 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x2000ffffffff window]
Jan 22 00:41:04.028308 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jan 22 00:41:04.028321 kernel: PCI: CLS 0 bytes, default 64
Jan 22 00:41:04.028331 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Jan 22 00:41:04.028340 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns
Jan 22 00:41:04.028350 kernel: clocksource: Switched to clocksource tsc
Jan 22 00:41:04.028362 kernel: Initialise system trusted keyrings
Jan 22 00:41:04.028372 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Jan 22 00:41:04.028381 kernel: Key type asymmetric registered
Jan 22 00:41:04.028391 kernel: Asymmetric key parser 'x509' registered
Jan 22 00:41:04.028400 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 22 00:41:04.028410 kernel: io scheduler mq-deadline registered
Jan 22 00:41:04.028419 kernel: io scheduler kyber registered
Jan 22 00:41:04.028431 kernel: io scheduler bfq registered
Jan 22 00:41:04.028441 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 22 00:41:04.028450 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 22 00:41:04.028459 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 22 00:41:04.028469 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 22 00:41:04.028478 kernel: i8042: Warning: Keylock active
Jan 22 00:41:04.028488 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 22 00:41:04.028497 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 22 00:41:04.028628 kernel: rtc_cmos 00:00: RTC can wake from S4
Jan 22 00:41:04.028744 kernel: rtc_cmos 00:00: registered as rtc0
Jan 22 00:41:04.028873 kernel: rtc_cmos 00:00: setting system clock to 2026-01-22T00:40:59 UTC (1769042459)
Jan 22 00:41:04.029004 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Jan 22 00:41:04.029019 kernel: intel_pstate: CPU model not supported
Jan 22 00:41:04.029029 kernel: efifb: probing for efifb
Jan 22 00:41:04.029042 kernel: efifb: framebuffer at 0x80000000, using 1876k, total 1875k
Jan 22 00:41:04.029052 kernel: efifb: mode is 800x600x32, linelength=3200, pages=1
Jan 22 00:41:04.029061 kernel: efifb: scrolling: redraw
Jan 22 00:41:04.029071 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jan 22 00:41:04.029081 kernel: Console: switching to colour frame buffer device 100x37
Jan 22 00:41:04.029091 kernel: fb0: EFI VGA frame buffer device
Jan 22 00:41:04.029101 kernel: pstore: Using crash dump compression: deflate
Jan 22 00:41:04.029113 kernel: pstore: Registered efi_pstore as persistent store backend
Jan 22 00:41:04.029123 kernel: NET: Registered PF_INET6 protocol family
Jan 22 00:41:04.029133 kernel: Segment Routing with IPv6
Jan 22 00:41:04.029143 kernel: In-situ OAM (IOAM) with IPv6
Jan 22 00:41:04.029152 kernel: NET: Registered PF_PACKET protocol family
Jan 22 00:41:04.029162 kernel: Key type dns_resolver registered
Jan 22 00:41:04.029172 kernel: IPI shorthand broadcast: enabled
Jan 22 00:41:04.029183 kernel: sched_clock: Marking stable (1596001806, 221237971)->(1928227765, -110987988)
Jan 22 00:41:04.029194 kernel: registered taskstats version 1
Jan 22 00:41:04.029204 kernel: Loading compiled-in X.509 certificates
Jan 22 00:41:04.029213 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.66-flatcar: 3c3e07c08e874e2a4bf964a0051bfd3618f8b847'
Jan 22 00:41:04.029225 kernel: Demotion targets for Node 0: null
Jan 22 00:41:04.029235 kernel: Key type .fscrypt registered
Jan 22 00:41:04.029245 kernel: Key type fscrypt-provisioning registered
Jan 22 00:41:04.029257 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 22 00:41:04.029266 kernel: ima: Allocated hash algorithm: sha1
Jan 22 00:41:04.029276 kernel: ima: No architecture policies found
Jan 22 00:41:04.029286 kernel: clk: Disabling unused clocks
Jan 22 00:41:04.029296 kernel: Freeing unused kernel image (initmem) memory: 15436K
Jan 22 00:41:04.029309 kernel: Write protecting the kernel read-only data: 45056k
Jan 22 00:41:04.029319 kernel: Freeing unused kernel image (rodata/data gap) memory: 824K
Jan 22 00:41:04.029329 kernel: Run /init as init process
Jan 22 00:41:04.029339 kernel: with arguments:
Jan 22 00:41:04.029348 kernel: /init
Jan 22 00:41:04.029358 kernel: with environment:
Jan 22 00:41:04.029368 kernel: HOME=/
Jan 22 00:41:04.029379 kernel: TERM=linux
Jan 22 00:41:04.029479 kernel: nvme nvme0: pci function 0000:00:04.0
Jan 22 00:41:04.029495 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Jan 22 00:41:04.029579 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Jan 22 00:41:04.029592 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 22 00:41:04.029602 kernel: GPT:25804799 != 33554431
Jan 22 00:41:04.029615 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 22 00:41:04.029624 kernel: GPT:25804799 != 33554431
Jan 22 00:41:04.029634 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 22 00:41:04.029643 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 22 00:41:04.029653 kernel: SCSI subsystem initialized
Jan 22 00:41:04.029663 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 22 00:41:04.029675 kernel: device-mapper: uevent: version 1.0.3
Jan 22 00:41:04.029685 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Jan 22 00:41:04.029695 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 22 00:41:04.029704 kernel: device-mapper: verity: sha256 using shash "sha256-generic"
Jan 22 00:41:04.029726 kernel: raid6: avx512x4 gen() 14392 MB/s
Jan 22 00:41:04.029740 kernel: raid6: avx512x2 gen() 14992 MB/s
Jan 22 00:41:04.029755 kernel: raid6: avx512x1 gen() 13391 MB/s
Jan 22 00:41:04.029769 kernel: raid6: avx2x4 gen() 3982 MB/s
Jan 22 00:41:04.029783 kernel: raid6: avx2x2 gen() 3823 MB/s
Jan 22 00:41:04.029804 kernel: raid6: avx2x1 gen() 3634 MB/s
Jan 22 00:41:04.029814 kernel: raid6: using algorithm avx512x2 gen() 14992 MB/s
Jan 22 00:41:04.029824 kernel: raid6: .... xor() 16231 MB/s, rmw enabled
Jan 22 00:41:04.029834 kernel: raid6: using avx512x2 recovery algorithm
Jan 22 00:41:04.029844 kernel: xor: automatically using best checksumming function avx
Jan 22 00:41:04.029853 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 22 00:41:04.029866 kernel: BTRFS: device fsid 79986906-7858-40a3-90f5-bda7e594a44c devid 1 transid 34 /dev/mapper/usr (254:0) scanned by mount (151)
Jan 22 00:41:04.029876 kernel: BTRFS info (device dm-0): first mount of filesystem 79986906-7858-40a3-90f5-bda7e594a44c
Jan 22 00:41:04.029886 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 22 00:41:04.029896 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Jan 22 00:41:04.029906 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 22 00:41:04.029915 kernel: BTRFS info (device dm-0): enabling free space tree
Jan 22 00:41:04.029925 kernel: loop: module loaded
Jan 22 00:41:04.029937 kernel: loop0: detected capacity change from 0 to 100160
Jan 22 00:41:04.029947 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 22 00:41:04.029959 systemd[1]: Successfully made /usr/ read-only.
Jan 22 00:41:04.029973 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jan 22 00:41:04.030143 systemd[1]: Detected virtualization amazon.
Jan 22 00:41:04.030156 systemd[1]: Detected architecture x86-64.
Jan 22 00:41:04.030170 systemd[1]: Running in initrd.
Jan 22 00:41:04.030180 systemd[1]: No hostname configured, using default hostname.
Jan 22 00:41:04.030191 systemd[1]: Hostname set to .
Jan 22 00:41:04.030201 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID.
Jan 22 00:41:04.030212 systemd[1]: Queued start job for default target initrd.target.
Jan 22 00:41:04.030222 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Jan 22 00:41:04.030235 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 22 00:41:04.030245 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 22 00:41:04.030257 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 22 00:41:04.030267 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 22 00:41:04.030278 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 22 00:41:04.030289 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 22 00:41:04.030302 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 22 00:41:04.030313 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 22 00:41:04.030323 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Jan 22 00:41:04.030334 systemd[1]: Reached target paths.target - Path Units.
Jan 22 00:41:04.030344 systemd[1]: Reached target slices.target - Slice Units.
Jan 22 00:41:04.030355 systemd[1]: Reached target swap.target - Swaps.
Jan 22 00:41:04.030367 systemd[1]: Reached target timers.target - Timer Units.
Jan 22 00:41:04.030378 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 22 00:41:04.030388 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 22 00:41:04.030398 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket.
Jan 22 00:41:04.030409 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 22 00:41:04.030420 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jan 22 00:41:04.030430 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 22 00:41:04.030442 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 22 00:41:04.030453 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 22 00:41:04.030463 systemd[1]: Reached target sockets.target - Socket Units.
Jan 22 00:41:04.030474 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 22 00:41:04.030485 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 22 00:41:04.030495 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 22 00:41:04.030506 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 22 00:41:04.030519 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Jan 22 00:41:04.030529 systemd[1]: Starting systemd-fsck-usr.service...
Jan 22 00:41:04.030539 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 22 00:41:04.030550 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 22 00:41:04.030563 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 22 00:41:04.030574 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 22 00:41:04.030585 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 22 00:41:04.030595 systemd[1]: Finished systemd-fsck-usr.service.
Jan 22 00:41:04.030606 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 22 00:41:04.030648 systemd-journald[288]: Collecting audit messages is enabled.
Jan 22 00:41:04.030676 systemd-journald[288]: Journal started
Jan 22 00:41:04.030697 systemd-journald[288]: Runtime Journal (/run/log/journal/ec2511af12408087926c7b2b6cb32907) is 4.7M, max 38M, 33.2M free.
Jan 22 00:41:04.040295 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 22 00:41:04.040372 kernel: audit: type=1130 audit(1769042464.032:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 00:41:04.032000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 00:41:04.041938 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 22 00:41:04.072062 systemd-tmpfiles[301]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Jan 22 00:41:04.077186 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 22 00:41:04.091723 kernel: audit: type=1130 audit(1769042464.077:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 00:41:04.077000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 00:41:04.089637 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 22 00:41:04.094963 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 22 00:41:04.105756 kernel: audit: type=1130 audit(1769042464.096:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 00:41:04.096000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 00:41:04.126829 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 22 00:41:04.138000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 00:41:04.139223 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 22 00:41:04.146986 kernel: audit: type=1130 audit(1769042464.138:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 00:41:04.175026 kernel: Bridge firewalling registered
Jan 22 00:41:04.174841 systemd-modules-load[290]: Inserted module 'br_netfilter'
Jan 22 00:41:04.176300 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 22 00:41:04.176000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 00:41:04.184873 kernel: audit: type=1130 audit(1769042464.176:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 00:41:04.186944 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 22 00:41:04.196147 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 22 00:41:04.195000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 00:41:04.204813 kernel: audit: type=1130 audit(1769042464.195:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 00:41:04.206997 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 22 00:41:04.218527 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 22 00:41:04.219000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 00:41:04.227820 kernel: audit: type=1130 audit(1769042464.219:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 00:41:04.229042 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 22 00:41:04.221000 audit: BPF prog-id=6 op=LOAD
Jan 22 00:41:04.233892 kernel: audit: type=1334 audit(1769042464.221:9): prog-id=6 op=LOAD
Jan 22 00:41:04.246021 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 22 00:41:04.253983 kernel: audit: type=1130 audit(1769042464.245:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 00:41:04.245000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 00:41:04.258033 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 22 00:41:04.294409 dracut-cmdline[329]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=2c7ce323fe43e7b63a59c25601f0c418cba5a1d902eeaa4bfcebc579e79e52d2
Jan 22 00:41:04.315249 systemd-resolved[317]: Positive Trust Anchors:
Jan 22 00:41:04.315266 systemd-resolved[317]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 22 00:41:04.315271 systemd-resolved[317]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16
Jan 22 00:41:04.315329 systemd-resolved[317]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 22 00:41:04.346334 systemd-resolved[317]: Defaulting to hostname 'linux'.
Jan 22 00:41:04.348437 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 22 00:41:04.350016 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 22 00:41:04.349000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 00:41:04.358824 kernel: audit: type=1130 audit(1769042464.349:11): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 00:41:04.496841 kernel: Loading iSCSI transport class v2.0-870.
Jan 22 00:41:04.597857 kernel: iscsi: registered transport (tcp)
Jan 22 00:41:04.666045 kernel: iscsi: registered transport (qla4xxx)
Jan 22 00:41:04.666120 kernel: QLogic iSCSI HBA Driver
Jan 22 00:41:04.696476 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 22 00:41:04.714514 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 22 00:41:04.715000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 00:41:04.718404 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 22 00:41:04.767122 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 22 00:41:04.766000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 00:41:04.769620 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 22 00:41:04.773024 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 22 00:41:04.821507 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 22 00:41:04.822000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 00:41:04.823000 audit: BPF prog-id=7 op=LOAD
Jan 22 00:41:04.823000 audit: BPF prog-id=8 op=LOAD
Jan 22 00:41:04.825206 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 22 00:41:04.873234 systemd-udevd[574]: Using default interface naming scheme 'v257'.
Jan 22 00:41:04.894112 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 22 00:41:04.896000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 00:41:04.899206 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 22 00:41:04.916018 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 22 00:41:04.917000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 00:41:04.919000 audit: BPF prog-id=9 op=LOAD
Jan 22 00:41:04.921985 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 22 00:41:04.936926 dracut-pre-trigger[658]: rd.md=0: removing MD RAID activation
Jan 22 00:41:04.980164 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 22 00:41:04.980000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 00:41:04.982768 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 22 00:41:04.995152 systemd-networkd[668]: lo: Link UP
Jan 22 00:41:04.995166 systemd-networkd[668]: lo: Gained carrier
Jan 22 00:41:04.997000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 00:41:04.996159 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 22 00:41:04.998364 systemd[1]: Reached target network.target - Network.
Jan 22 00:41:05.058875 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 22 00:41:05.059000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 00:41:05.063594 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 22 00:41:05.185893 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 22 00:41:05.186177 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 22 00:41:05.186000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 00:41:05.187294 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 22 00:41:05.192208 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 22 00:41:05.198082 kernel: ena 0000:00:05.0: ENA device version: 0.10
Jan 22 00:41:05.200306 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Jan 22 00:41:05.206237 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
Jan 22 00:41:05.212813 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80400000, mac addr 06:2b:c2:27:00:6b
Jan 22 00:41:05.217186 (udev-worker)[709]: Network interface NamePolicy= disabled on kernel command line.
Jan 22 00:41:05.232370 systemd-networkd[668]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Jan 22 00:41:05.232381 systemd-networkd[668]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 22 00:41:05.242661 systemd-networkd[668]: eth0: Link UP
Jan 22 00:41:05.242934 systemd-networkd[668]: eth0: Gained carrier
Jan 22 00:41:05.242954 systemd-networkd[668]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Jan 22 00:41:05.259294 kernel: cryptd: max_cpu_qlen set to 1000
Jan 22 00:41:05.258905 systemd-networkd[668]: eth0: DHCPv4 address 172.31.26.54/20, gateway 172.31.16.1 acquired from 172.31.16.1
Jan 22 00:41:05.276713 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 22 00:41:05.284000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 00:41:05.297727 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input2
Jan 22 00:41:05.348822 kernel: nvme nvme0: using unchecked data buffer
Jan 22 00:41:05.349159 kernel: AES CTR mode by8 optimization enabled
Jan 22 00:41:05.484540 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Jan 22 00:41:05.487965 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 22 00:41:05.507573 disk-uuid[829]: Primary Header is updated.
Jan 22 00:41:05.507573 disk-uuid[829]: Secondary Entries is updated.
Jan 22 00:41:05.507573 disk-uuid[829]: Secondary Header is updated.
Jan 22 00:41:05.581985 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Jan 22 00:41:05.610623 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Jan 22 00:41:05.638603 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Jan 22 00:41:05.943141 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 22 00:41:05.942000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 00:41:05.945026 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 22 00:41:05.945617 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 22 00:41:05.947046 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 22 00:41:05.949663 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 22 00:41:05.976404 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 22 00:41:05.976000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 00:41:06.440104 systemd-networkd[668]: eth0: Gained IPv6LL
Jan 22 00:41:06.641670 disk-uuid[830]: Warning: The kernel is still using the old partition table.
Jan 22 00:41:06.641670 disk-uuid[830]: The new table will be used at the next reboot or after you Jan 22 00:41:06.641670 disk-uuid[830]: run partprobe(8) or kpartx(8) Jan 22 00:41:06.641670 disk-uuid[830]: The operation has completed successfully. Jan 22 00:41:06.652659 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 22 00:41:06.652000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:41:06.652000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:41:06.652837 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 22 00:41:06.656009 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 22 00:41:06.700898 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1075) Jan 22 00:41:06.706169 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 04d4f92e-e2f4-4570-a15f-a84e10359254 Jan 22 00:41:06.706237 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jan 22 00:41:06.753055 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jan 22 00:41:06.753121 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Jan 22 00:41:06.763883 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 04d4f92e-e2f4-4570-a15f-a84e10359254 Jan 22 00:41:06.764317 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 22 00:41:06.764000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:41:06.767452 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 22 00:41:08.009638 ignition[1094]: Ignition 2.22.0 Jan 22 00:41:08.009662 ignition[1094]: Stage: fetch-offline Jan 22 00:41:08.010008 ignition[1094]: no configs at "/usr/lib/ignition/base.d" Jan 22 00:41:08.010034 ignition[1094]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 22 00:41:08.010566 ignition[1094]: Ignition finished successfully Jan 22 00:41:08.012000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:41:08.012413 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 22 00:41:08.014995 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Jan 22 00:41:08.045739 ignition[1101]: Ignition 2.22.0 Jan 22 00:41:08.045805 ignition[1101]: Stage: fetch Jan 22 00:41:08.046211 ignition[1101]: no configs at "/usr/lib/ignition/base.d" Jan 22 00:41:08.046223 ignition[1101]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 22 00:41:08.046344 ignition[1101]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 22 00:41:08.065538 ignition[1101]: PUT result: OK Jan 22 00:41:08.067935 ignition[1101]: parsed url from cmdline: "" Jan 22 00:41:08.067949 ignition[1101]: no config URL provided Jan 22 00:41:08.067959 ignition[1101]: reading system config file "/usr/lib/ignition/user.ign" Jan 22 00:41:08.067978 ignition[1101]: no config at "/usr/lib/ignition/user.ign" Jan 22 00:41:08.068033 ignition[1101]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 22 00:41:08.068720 ignition[1101]: PUT result: OK Jan 22 00:41:08.068804 ignition[1101]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Jan 22 00:41:08.070071 ignition[1101]: GET result: OK Jan 22 00:41:08.070253 ignition[1101]: parsing config with SHA512: d99090a90a221b42b1fcea45308b97289b11ff6df1c0e1312285031a32b80200a4c54a9d6d7e11c3a53d0b9505533b3afa26137f81e43b0c4333332b8e1ac2aa Jan 22 00:41:08.079442 unknown[1101]: fetched base config from "system" Jan 22 00:41:08.080082 unknown[1101]: fetched base config from "system" Jan 22 00:41:08.080487 ignition[1101]: fetch: fetch complete Jan 22 00:41:08.083000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:41:08.080088 unknown[1101]: fetched user config from "aws" Jan 22 00:41:08.080492 ignition[1101]: fetch: fetch passed Jan 22 00:41:08.083385 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 22 00:41:08.080562 ignition[1101]: Ignition finished successfully Jan 22 00:41:08.086996 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 22 00:41:08.125853 ignition[1108]: Ignition 2.22.0 Jan 22 00:41:08.125876 ignition[1108]: Stage: kargs Jan 22 00:41:08.126269 ignition[1108]: no configs at "/usr/lib/ignition/base.d" Jan 22 00:41:08.126280 ignition[1108]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 22 00:41:08.126507 ignition[1108]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 22 00:41:08.127861 ignition[1108]: PUT result: OK Jan 22 00:41:08.132877 ignition[1108]: kargs: kargs passed Jan 22 00:41:08.132981 ignition[1108]: Ignition finished successfully Jan 22 00:41:08.135369 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 22 00:41:08.135000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:41:08.137321 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Jan 22 00:41:08.172508 ignition[1114]: Ignition 2.22.0 Jan 22 00:41:08.172527 ignition[1114]: Stage: disks Jan 22 00:41:08.172979 ignition[1114]: no configs at "/usr/lib/ignition/base.d" Jan 22 00:41:08.172991 ignition[1114]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 22 00:41:08.173109 ignition[1114]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 22 00:41:08.174057 ignition[1114]: PUT result: OK Jan 22 00:41:08.176752 ignition[1114]: disks: disks passed Jan 22 00:41:08.176858 ignition[1114]: Ignition finished successfully Jan 22 00:41:08.178992 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 22 00:41:08.178000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:41:08.179725 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 22 00:41:08.180123 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 22 00:41:08.180665 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 22 00:41:08.181255 systemd[1]: Reached target sysinit.target - System Initialization. Jan 22 00:41:08.181968 systemd[1]: Reached target basic.target - Basic System. Jan 22 00:41:08.183995 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 22 00:41:08.296341 systemd-fsck[1123]: ROOT: clean, 15/1631200 files, 112378/1617920 blocks Jan 22 00:41:08.301000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:41:08.301614 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 22 00:41:08.307514 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 22 00:41:08.550820 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 2fa3c08b-a48e-45e5-aeb3-7441bca9cf30 r/w with ordered data mode. Quota mode: none. Jan 22 00:41:08.551200 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 22 00:41:08.552389 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 22 00:41:08.613744 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 22 00:41:08.616738 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 22 00:41:08.619407 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 22 00:41:08.622386 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 22 00:41:08.622430 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 22 00:41:08.631459 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 22 00:41:08.634185 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Jan 22 00:41:08.645874 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1142) Jan 22 00:41:08.653532 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 04d4f92e-e2f4-4570-a15f-a84e10359254 Jan 22 00:41:08.653961 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jan 22 00:41:08.663643 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jan 22 00:41:08.663721 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Jan 22 00:41:08.665066 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 22 00:41:09.317292 initrd-setup-root[1167]: cut: /sysroot/etc/passwd: No such file or directory Jan 22 00:41:09.323748 initrd-setup-root[1174]: cut: /sysroot/etc/group: No such file or directory Jan 22 00:41:09.332283 initrd-setup-root[1181]: cut: /sysroot/etc/shadow: No such file or directory Jan 22 00:41:09.343829 initrd-setup-root[1188]: cut: /sysroot/etc/gshadow: No such file or directory Jan 22 00:41:09.622935 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 22 00:41:09.632565 kernel: kauditd_printk_skb: 23 callbacks suppressed Jan 22 00:41:09.632606 kernel: audit: type=1130 audit(1769042469.622:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:41:09.622000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:41:09.627106 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 22 00:41:09.635982 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 22 00:41:09.652977 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 22 00:41:09.656408 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 04d4f92e-e2f4-4570-a15f-a84e10359254 Jan 22 00:41:09.685026 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 22 00:41:09.685000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:41:09.693822 kernel: audit: type=1130 audit(1769042469.685:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:41:09.702367 ignition[1256]: INFO : Ignition 2.22.0 Jan 22 00:41:09.702367 ignition[1256]: INFO : Stage: mount Jan 22 00:41:09.703903 ignition[1256]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 22 00:41:09.703903 ignition[1256]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 22 00:41:09.703903 ignition[1256]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 22 00:41:09.703903 ignition[1256]: INFO : PUT result: OK Jan 22 00:41:09.706610 ignition[1256]: INFO : mount: mount passed Jan 22 00:41:09.707140 ignition[1256]: INFO : Ignition finished successfully Jan 22 00:41:09.708348 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 22 00:41:09.707000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 22 00:41:09.715822 kernel: audit: type=1130 audit(1769042469.707:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:41:09.716107 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 22 00:41:09.733398 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 22 00:41:09.763810 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1268) Jan 22 00:41:09.768218 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 04d4f92e-e2f4-4570-a15f-a84e10359254 Jan 22 00:41:09.770218 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jan 22 00:41:09.782349 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jan 22 00:41:09.782414 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Jan 22 00:41:09.784664 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 22 00:41:09.822043 ignition[1285]: INFO : Ignition 2.22.0 Jan 22 00:41:09.822043 ignition[1285]: INFO : Stage: files Jan 22 00:41:09.823707 ignition[1285]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 22 00:41:09.823707 ignition[1285]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 22 00:41:09.823707 ignition[1285]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 22 00:41:09.825123 ignition[1285]: INFO : PUT result: OK Jan 22 00:41:09.826338 ignition[1285]: DEBUG : files: compiled without relabeling support, skipping Jan 22 00:41:09.827567 ignition[1285]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 22 00:41:09.827567 ignition[1285]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 22 00:41:09.832995 ignition[1285]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 22 00:41:09.834059 ignition[1285]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 22 00:41:09.834059 ignition[1285]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 22 00:41:09.833491 unknown[1285]: wrote ssh authorized keys file for user: core Jan 22 00:41:09.839434 ignition[1285]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 22 00:41:09.840247 ignition[1285]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Jan 22 00:41:09.919245 ignition[1285]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 22 00:41:10.118350 ignition[1285]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 22 00:41:10.119747 ignition[1285]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 22 00:41:10.119747 ignition[1285]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 22 00:41:10.119747 ignition[1285]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 22 00:41:10.119747 ignition[1285]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 22 00:41:10.119747 ignition[1285]: INFO : files: 
createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 22 00:41:10.119747 ignition[1285]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 22 00:41:10.119747 ignition[1285]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 22 00:41:10.119747 ignition[1285]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 22 00:41:10.125366 ignition[1285]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 22 00:41:10.125366 ignition[1285]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 22 00:41:10.125366 ignition[1285]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 22 00:41:10.128689 ignition[1285]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 22 00:41:10.128689 ignition[1285]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 22 00:41:10.128689 ignition[1285]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Jan 22 00:41:10.601236 ignition[1285]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 22 00:41:11.113023 ignition[1285]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 22 00:41:11.113023 ignition[1285]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 22 00:41:11.115423 ignition[1285]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 22 00:41:11.120434 ignition[1285]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 22 00:41:11.120434 ignition[1285]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 22 00:41:11.120434 ignition[1285]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jan 22 00:41:11.120434 ignition[1285]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jan 22 00:41:11.120434 ignition[1285]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 22 00:41:11.120434 ignition[1285]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 22 00:41:11.120434 ignition[1285]: INFO : files: files passed Jan 22 00:41:11.120434 ignition[1285]: INFO : Ignition finished successfully Jan 22 00:41:11.132669 kernel: audit: type=1130 audit(1769042471.124:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 22 00:41:11.124000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:41:11.124307 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 22 00:41:11.129002 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 22 00:41:11.136640 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 22 00:41:11.144272 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 22 00:41:11.151002 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 22 00:41:11.164638 kernel: audit: type=1130 audit(1769042471.150:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:41:11.164669 kernel: audit: type=1131 audit(1769042471.150:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:41:11.150000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:41:11.150000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:41:11.175066 initrd-setup-root-after-ignition[1317]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 22 00:41:11.175066 initrd-setup-root-after-ignition[1317]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 22 00:41:11.178203 initrd-setup-root-after-ignition[1321]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 22 00:41:11.180031 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 22 00:41:11.187233 kernel: audit: type=1130 audit(1769042471.179:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:41:11.179000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:41:11.180919 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 22 00:41:11.188992 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 22 00:41:11.245221 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 22 00:41:11.245372 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 22 00:41:11.260651 kernel: audit: type=1130 audit(1769042471.247:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:41:11.260693 kernel: audit: type=1131 audit(1769042471.247:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 22 00:41:11.247000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:41:11.247000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:41:11.248428 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 22 00:41:11.261187 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 22 00:41:11.262493 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 22 00:41:11.263868 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 22 00:41:11.289337 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 22 00:41:11.297319 kernel: audit: type=1130 audit(1769042471.289:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:41:11.289000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:41:11.298192 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 22 00:41:11.325836 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Jan 22 00:41:11.326270 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 22 00:41:11.327013 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 22 00:41:11.328026 systemd[1]: Stopped target timers.target - Timer Units. Jan 22 00:41:11.329000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:41:11.328910 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 22 00:41:11.329154 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 22 00:41:11.330338 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 22 00:41:11.331414 systemd[1]: Stopped target basic.target - Basic System. Jan 22 00:41:11.332237 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 22 00:41:11.332986 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 22 00:41:11.333453 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 22 00:41:11.334509 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jan 22 00:41:11.335367 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 22 00:41:11.336198 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 22 00:41:11.337030 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 22 00:41:11.337866 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 22 00:41:11.340000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 22 00:41:11.338751 systemd[1]: Stopped target swap.target - Swaps. Jan 22 00:41:11.339974 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 22 00:41:11.340208 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 22 00:41:11.341311 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 22 00:41:11.342304 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 22 00:41:11.343000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:41:11.343003 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 22 00:41:11.343152 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 22 00:41:11.345000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:41:11.343850 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 22 00:41:11.346000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:41:11.344080 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 22 00:41:11.345135 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 22 00:41:11.345375 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 22 00:41:11.346254 systemd[1]: ignition-files.service: Deactivated successfully. Jan 22 00:41:11.346455 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 22 00:41:11.348880 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 22 00:41:11.353081 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 22 00:41:11.353963 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 22 00:41:11.354175 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 22 00:41:11.357000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:41:11.358064 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 22 00:41:11.358960 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 22 00:41:11.359000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:41:11.360402 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 22 00:41:11.361043 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 22 00:41:11.361000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:41:11.373172 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 22 00:41:11.373307 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. 
Jan 22 00:41:11.374000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:41:11.374000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:41:11.393930 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 22 00:41:11.395998 ignition[1341]: INFO : Ignition 2.22.0 Jan 22 00:41:11.395998 ignition[1341]: INFO : Stage: umount Jan 22 00:41:11.397536 ignition[1341]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 22 00:41:11.397536 ignition[1341]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 22 00:41:11.397536 ignition[1341]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 22 00:41:11.400088 ignition[1341]: INFO : PUT result: OK Jan 22 00:41:11.403191 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 22 00:41:11.403366 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 22 00:41:11.404000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:41:11.405376 ignition[1341]: INFO : umount: umount passed Jan 22 00:41:11.405376 ignition[1341]: INFO : Ignition finished successfully Jan 22 00:41:11.407001 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 22 00:41:11.407138 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 22 00:41:11.407000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:41:11.408435 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 22 00:41:11.408554 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 22 00:41:11.408000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:41:11.409450 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 22 00:41:11.409518 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 22 00:41:11.410000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:41:11.410000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:41:11.411000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:41:11.410930 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 22 00:41:11.410981 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 22 00:41:11.411327 systemd[1]: Stopped target network.target - Network. Jan 22 00:41:11.411600 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 22 00:41:11.411648 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). 
Jan 22 00:41:11.412025 systemd[1]: Stopped target paths.target - Path Units. Jan 22 00:41:11.412293 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 22 00:41:11.413866 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 22 00:41:11.414395 systemd[1]: Stopped target slices.target - Slice Units. Jan 22 00:41:11.415001 systemd[1]: Stopped target sockets.target - Socket Units. Jan 22 00:41:11.415623 systemd[1]: iscsid.socket: Deactivated successfully. Jan 22 00:41:11.415685 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 22 00:41:11.417000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:41:11.416536 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 22 00:41:11.418000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:41:11.416585 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 22 00:41:11.419000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:41:11.417184 systemd[1]: systemd-journald-audit.socket: Deactivated successfully. Jan 22 00:41:11.417222 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket. Jan 22 00:41:11.417956 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 22 00:41:11.418034 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 22 00:41:11.418627 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 22 00:41:11.418690 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 22 00:41:11.419349 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 22 00:41:11.419416 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 22 00:41:11.420240 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 22 00:41:11.424000 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 22 00:41:11.429501 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 22 00:41:11.429648 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 22 00:41:11.430000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:41:11.432646 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 22 00:41:11.432771 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 22 00:41:11.432000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:41:11.434000 audit: BPF prog-id=9 op=UNLOAD Jan 22 00:41:11.435000 audit: BPF prog-id=6 op=UNLOAD Jan 22 00:41:11.436197 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jan 22 00:41:11.437270 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 22 00:41:11.437323 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. 
Jan 22 00:41:11.439000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:41:11.439000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:41:11.439438 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 22 00:41:11.440000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:41:11.439816 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 22 00:41:11.439882 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 22 00:41:11.440308 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 22 00:41:11.440349 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 22 00:41:11.440714 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 22 00:41:11.440755 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 22 00:41:11.441151 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 22 00:41:11.458164 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 22 00:41:11.458878 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 22 00:41:11.459000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:41:11.460356 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 22 00:41:11.460424 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 22 00:41:11.462203 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 22 00:41:11.462248 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 22 00:41:11.463345 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 22 00:41:11.463831 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 22 00:41:11.464680 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 22 00:41:11.463000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:41:11.464732 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 22 00:41:11.466143 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 22 00:41:11.466196 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 22 00:41:11.465000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:41:11.465000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:41:11.469281 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... 
Jan 22 00:41:11.469000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:41:11.470000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:41:11.470000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:41:11.469938 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jan 22 00:41:11.470099 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jan 22 00:41:11.470623 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 22 00:41:11.470685 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 22 00:41:11.471085 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 22 00:41:11.473000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:41:11.474000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:41:11.471137 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 22 00:41:11.471481 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 22 00:41:11.471517 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 22 00:41:11.474648 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 22 00:41:11.474704 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 22 00:41:11.489199 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 22 00:41:11.491000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:41:11.491000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:41:11.491336 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 22 00:41:11.495083 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 22 00:41:11.495658 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 22 00:41:11.495000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:41:11.497091 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 22 00:41:11.499122 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 22 00:41:11.526584 systemd[1]: Switching root. 
Jan 22 00:41:11.575501 systemd-journald[288]: Journal stopped Jan 22 00:41:13.221555 systemd-journald[288]: Received SIGTERM from PID 1 (systemd). Jan 22 00:41:13.221652 kernel: SELinux: policy capability network_peer_controls=1 Jan 22 00:41:13.221689 kernel: SELinux: policy capability open_perms=1 Jan 22 00:41:13.221723 kernel: SELinux: policy capability extended_socket_class=1 Jan 22 00:41:13.221742 kernel: SELinux: policy capability always_check_network=0 Jan 22 00:41:13.221762 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 22 00:41:13.224817 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 22 00:41:13.224882 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 22 00:41:13.224972 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 22 00:41:13.225102 kernel: SELinux: policy capability userspace_initial_context=0 Jan 22 00:41:13.225127 systemd[1]: Successfully loaded SELinux policy in 82.657ms. Jan 22 00:41:13.225152 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 18.366ms. Jan 22 00:41:13.225177 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 22 00:41:13.225201 systemd[1]: Detected virtualization amazon. Jan 22 00:41:13.225222 systemd[1]: Detected architecture x86-64. Jan 22 00:41:13.225248 systemd[1]: Detected first boot. Jan 22 00:41:13.225271 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Jan 22 00:41:13.225294 zram_generator::config[1385]: No configuration found. Jan 22 00:41:13.225319 kernel: Guest personality initialized and is inactive Jan 22 00:41:13.225342 kernel: VMCI host device registered (name=vmci, major=10, minor=258) Jan 22 00:41:13.225368 kernel: Initialized host personality Jan 22 00:41:13.225392 kernel: NET: Registered PF_VSOCK protocol family Jan 22 00:41:13.225414 systemd[1]: Populated /etc with preset unit settings. Jan 22 00:41:13.225437 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 22 00:41:13.225461 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 22 00:41:13.225485 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 22 00:41:13.225514 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 22 00:41:13.225537 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 22 00:41:13.225563 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 22 00:41:13.225586 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 22 00:41:13.225609 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 22 00:41:13.225632 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 22 00:41:13.225666 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 22 00:41:13.225685 systemd[1]: Created slice user.slice - User and Session Slice. Jan 22 00:41:13.225706 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 22 00:41:13.225727 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Jan 22 00:41:13.225750 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 22 00:41:13.225774 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 22 00:41:13.225817 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 22 00:41:13.225841 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 22 00:41:13.225864 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 22 00:41:13.225892 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 22 00:41:13.225916 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 22 00:41:13.225938 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 22 00:41:13.225960 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 22 00:41:13.225983 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 22 00:41:13.226005 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 22 00:41:13.226032 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 22 00:41:13.226054 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 22 00:41:13.226078 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes. Jan 22 00:41:13.226099 systemd[1]: Reached target slices.target - Slice Units. Jan 22 00:41:13.226122 systemd[1]: Reached target swap.target - Swaps. Jan 22 00:41:13.226144 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 22 00:41:13.226167 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 22 00:41:13.226189 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jan 22 00:41:13.226226 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jan 22 00:41:13.226248 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket. Jan 22 00:41:13.226272 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 22 00:41:13.226294 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket. Jan 22 00:41:13.226315 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket. Jan 22 00:41:13.226338 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 22 00:41:13.226360 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 22 00:41:13.226385 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 22 00:41:13.226405 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 22 00:41:13.226426 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 22 00:41:13.226447 systemd[1]: Mounting media.mount - External Media Directory... Jan 22 00:41:13.226468 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 22 00:41:13.226491 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 22 00:41:13.226511 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 22 00:41:13.226535 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... 
Jan 22 00:41:13.226555 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 22 00:41:13.226576 systemd[1]: Reached target machines.target - Containers. Jan 22 00:41:13.226598 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 22 00:41:13.226618 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 22 00:41:13.226639 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 22 00:41:13.226663 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 22 00:41:13.226690 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 22 00:41:13.226713 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 22 00:41:13.226735 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 22 00:41:13.226759 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 22 00:41:13.226782 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 22 00:41:13.228863 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 22 00:41:13.228894 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 22 00:41:13.228916 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 22 00:41:13.228936 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 22 00:41:13.228957 systemd[1]: Stopped systemd-fsck-usr.service. Jan 22 00:41:13.228979 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 22 00:41:13.229000 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 22 00:41:13.229029 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 22 00:41:13.229060 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 22 00:41:13.229084 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 22 00:41:13.229105 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jan 22 00:41:13.229126 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 22 00:41:13.229148 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 22 00:41:13.229168 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 22 00:41:13.229189 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 22 00:41:13.229214 systemd[1]: Mounted media.mount - External Media Directory. Jan 22 00:41:13.229234 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 22 00:41:13.229255 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 22 00:41:13.229277 kernel: ACPI: bus type drm_connector registered Jan 22 00:41:13.229299 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. 
Jan 22 00:41:13.229321 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 22 00:41:13.229347 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 22 00:41:13.229368 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 22 00:41:13.229389 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 22 00:41:13.229410 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 22 00:41:13.229432 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 22 00:41:13.229452 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 22 00:41:13.229474 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 22 00:41:13.229493 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 22 00:41:13.229514 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 22 00:41:13.229534 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 22 00:41:13.229574 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 22 00:41:13.229595 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 22 00:41:13.229616 kernel: fuse: init (API version 7.41) Jan 22 00:41:13.229641 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Jan 22 00:41:13.229671 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 22 00:41:13.229692 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 22 00:41:13.229715 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 22 00:41:13.229739 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jan 22 00:41:13.229761 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 22 00:41:13.229852 systemd-journald[1469]: Collecting audit messages is enabled. Jan 22 00:41:13.229893 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 22 00:41:13.229916 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 22 00:41:13.229942 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 22 00:41:13.229965 systemd-journald[1469]: Journal started Jan 22 00:41:13.230003 systemd-journald[1469]: Runtime Journal (/run/log/journal/ec2511af12408087926c7b2b6cb32907) is 4.7M, max 38M, 33.2M free. Jan 22 00:41:12.805000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Jan 22 00:41:12.987000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:41:12.993000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 22 00:41:13.001000 audit: BPF prog-id=14 op=UNLOAD Jan 22 00:41:13.001000 audit: BPF prog-id=13 op=UNLOAD Jan 22 00:41:13.004000 audit: BPF prog-id=15 op=LOAD Jan 22 00:41:13.004000 audit: BPF prog-id=16 op=LOAD Jan 22 00:41:13.004000 audit: BPF prog-id=17 op=LOAD Jan 22 00:41:13.117000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:41:13.128000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:41:13.128000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:41:13.135000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:41:13.135000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:41:13.145000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:41:13.145000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:41:13.152000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:41:13.152000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:41:13.162000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:41:13.162000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:41:13.167000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:41:13.173000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 22 00:41:13.215000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jan 22 00:41:13.215000 audit[1469]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7ffe4bf66940 a2=4000 a3=0 items=0 ppid=1 pid=1469 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:41:13.215000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Jan 22 00:41:12.708208 systemd[1]: Queued start job for default target multi-user.target. Jan 22 00:41:12.732300 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Jan 22 00:41:12.732920 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 22 00:41:13.248887 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 22 00:41:13.248980 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 22 00:41:13.256813 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 22 00:41:13.287186 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 22 00:41:13.287782 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 22 00:41:13.287848 systemd[1]: Started systemd-journald.service - Journal Service. Jan 22 00:41:13.296000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:41:13.302157 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 22 00:41:13.311000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:41:13.311000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:41:13.312000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:41:13.311257 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 22 00:41:13.312578 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 22 00:41:13.314211 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jan 22 00:41:13.315000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:41:13.318190 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 22 00:41:13.323008 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. 
Jan 22 00:41:13.323000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:41:13.324893 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 22 00:41:13.326000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:41:13.341834 kernel: loop1: detected capacity change from 0 to 224512 Jan 22 00:41:13.356644 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 22 00:41:13.358615 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 22 00:41:13.367251 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 22 00:41:13.372095 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jan 22 00:41:13.399037 systemd-journald[1469]: Time spent on flushing to /var/log/journal/ec2511af12408087926c7b2b6cb32907 is 90.949ms for 1144 entries. Jan 22 00:41:13.399037 systemd-journald[1469]: System Journal (/var/log/journal/ec2511af12408087926c7b2b6cb32907) is 8M, max 588.1M, 580.1M free. Jan 22 00:41:13.527164 systemd-journald[1469]: Received client request to flush runtime journal. Jan 22 00:41:13.408000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:41:13.434000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:41:13.483000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:41:13.514000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:41:13.516000 audit: BPF prog-id=18 op=LOAD Jan 22 00:41:13.516000 audit: BPF prog-id=19 op=LOAD Jan 22 00:41:13.516000 audit: BPF prog-id=20 op=LOAD Jan 22 00:41:13.523000 audit: BPF prog-id=21 op=LOAD Jan 22 00:41:13.408155 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 22 00:41:13.424357 systemd-tmpfiles[1501]: ACLs are not supported, ignoring. Jan 22 00:41:13.424381 systemd-tmpfiles[1501]: ACLs are not supported, ignoring. Jan 22 00:41:13.434761 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 22 00:41:13.438102 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 22 00:41:13.483499 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 22 00:41:13.514467 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 22 00:41:13.522017 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer... 
Jan 22 00:41:13.526079 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 22 00:41:13.533194 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 22 00:41:13.534623 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 22 00:41:13.535000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 00:41:13.536362 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Jan 22 00:41:13.537000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 00:41:13.555000 audit: BPF prog-id=22 op=LOAD
Jan 22 00:41:13.555000 audit: BPF prog-id=23 op=LOAD
Jan 22 00:41:13.555000 audit: BPF prog-id=24 op=LOAD
Jan 22 00:41:13.559000 audit: BPF prog-id=25 op=LOAD
Jan 22 00:41:13.559000 audit: BPF prog-id=26 op=LOAD
Jan 22 00:41:13.559000 audit: BPF prog-id=27 op=LOAD
Jan 22 00:41:13.557431 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager...
Jan 22 00:41:13.563072 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 22 00:41:13.579084 systemd-tmpfiles[1536]: ACLs are not supported, ignoring.
Jan 22 00:41:13.579513 systemd-tmpfiles[1536]: ACLs are not supported, ignoring.
Jan 22 00:41:13.591769 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 22 00:41:13.592000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 00:41:13.668144 systemd-nsresourced[1541]: Not setting up BPF subsystem, as functionality has been disabled at compile time.
Jan 22 00:41:13.669437 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 22 00:41:13.669000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 00:41:13.683000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 00:41:13.683216 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager.
Jan 22 00:41:13.711099 kernel: loop2: detected capacity change from 0 to 119256
Jan 22 00:41:13.737287 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 22 00:41:13.742952 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 22 00:41:13.762941 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 22 00:41:13.835727 kernel: loop3: detected capacity change from 0 to 111544
Jan 22 00:41:13.847994 systemd-resolved[1535]: Positive Trust Anchors:
Jan 22 00:41:13.848015 systemd-resolved[1535]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 22 00:41:13.848020 systemd-resolved[1535]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16
Jan 22 00:41:13.848071 systemd-resolved[1535]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 22 00:41:13.849492 systemd-oomd[1534]: No swap; memory pressure usage will be degraded
Jan 22 00:41:13.850253 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer.
Jan 22 00:41:13.851000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 00:41:13.858655 systemd-resolved[1535]: Defaulting to hostname 'linux'.
Jan 22 00:41:13.861541 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 22 00:41:13.862351 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 22 00:41:13.861000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 00:41:13.878841 kernel: loop4: detected capacity change from 0 to 73200
Jan 22 00:41:14.167848 kernel: loop5: detected capacity change from 0 to 224512
Jan 22 00:41:14.194821 kernel: loop6: detected capacity change from 0 to 119256
Jan 22 00:41:14.212834 kernel: loop7: detected capacity change from 0 to 111544
Jan 22 00:41:14.245865 kernel: loop1: detected capacity change from 0 to 73200
Jan 22 00:41:14.257331 (sd-merge)[1566]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw', 'oem-ami.raw'.
Jan 22 00:41:14.264371 (sd-merge)[1566]: Merged extensions into '/usr'.
Jan 22 00:41:14.270978 systemd[1]: Reload requested from client PID 1500 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 22 00:41:14.271323 systemd[1]: Reloading...
Jan 22 00:41:14.376837 zram_generator::config[1593]: No configuration found.
Jan 22 00:41:14.612569 systemd[1]: Reloading finished in 340 ms.
Jan 22 00:41:14.643196 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 22 00:41:14.644833 kernel: kauditd_printk_skb: 106 callbacks suppressed
Jan 22 00:41:14.644912 kernel: audit: type=1130 audit(1769042474.642:149): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 00:41:14.642000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 00:41:14.644083 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 22 00:41:14.652813 kernel: audit: type=1130 audit(1769042474.648:150): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:41:14.648000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:41:14.661405 systemd[1]: Starting ensure-sysext.service... Jan 22 00:41:14.664960 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 22 00:41:14.665000 audit: BPF prog-id=8 op=UNLOAD Jan 22 00:41:14.670882 kernel: audit: type=1334 audit(1769042474.665:151): prog-id=8 op=UNLOAD Jan 22 00:41:14.670970 kernel: audit: type=1334 audit(1769042474.665:152): prog-id=7 op=UNLOAD Jan 22 00:41:14.665000 audit: BPF prog-id=7 op=UNLOAD Jan 22 00:41:14.671960 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 22 00:41:14.674858 kernel: audit: type=1334 audit(1769042474.668:153): prog-id=28 op=LOAD Jan 22 00:41:14.668000 audit: BPF prog-id=28 op=LOAD Jan 22 00:41:14.668000 audit: BPF prog-id=29 op=LOAD Jan 22 00:41:14.678171 kernel: audit: type=1334 audit(1769042474.668:154): prog-id=29 op=LOAD Jan 22 00:41:14.673000 audit: BPF prog-id=30 op=LOAD Jan 22 00:41:14.681853 kernel: audit: type=1334 audit(1769042474.673:155): prog-id=30 op=LOAD Jan 22 00:41:14.673000 audit: BPF prog-id=18 op=UNLOAD Jan 22 00:41:14.684822 kernel: audit: type=1334 audit(1769042474.673:156): prog-id=18 op=UNLOAD Jan 22 00:41:14.673000 audit: BPF prog-id=31 op=LOAD Jan 22 00:41:14.673000 audit: BPF prog-id=32 op=LOAD Jan 22 00:41:14.691931 kernel: audit: type=1334 audit(1769042474.673:157): prog-id=31 op=LOAD Jan 22 00:41:14.692026 kernel: audit: type=1334 audit(1769042474.673:158): prog-id=32 op=LOAD Jan 22 00:41:14.673000 audit: BPF prog-id=19 op=UNLOAD Jan 22 00:41:14.673000 audit: BPF prog-id=20 op=UNLOAD Jan 22 00:41:14.676000 audit: BPF prog-id=33 op=LOAD Jan 22 00:41:14.676000 audit: BPF prog-id=22 op=UNLOAD Jan 22 00:41:14.678000 audit: BPF prog-id=34 op=LOAD Jan 22 00:41:14.678000 audit: BPF prog-id=35 op=LOAD Jan 22 00:41:14.678000 audit: BPF prog-id=23 op=UNLOAD Jan 22 00:41:14.678000 audit: BPF prog-id=24 op=UNLOAD Jan 22 00:41:14.681000 audit: BPF prog-id=36 op=LOAD Jan 22 00:41:14.681000 audit: BPF prog-id=15 op=UNLOAD Jan 22 00:41:14.681000 audit: BPF prog-id=37 op=LOAD Jan 22 00:41:14.681000 audit: BPF prog-id=38 op=LOAD Jan 22 00:41:14.681000 audit: BPF prog-id=16 op=UNLOAD Jan 22 00:41:14.681000 audit: BPF prog-id=17 op=UNLOAD Jan 22 00:41:14.683000 audit: BPF prog-id=39 op=LOAD Jan 22 00:41:14.683000 audit: BPF prog-id=21 op=UNLOAD Jan 22 00:41:14.684000 audit: BPF prog-id=40 op=LOAD Jan 22 00:41:14.684000 audit: BPF prog-id=25 op=UNLOAD Jan 22 00:41:14.684000 audit: BPF prog-id=41 op=LOAD Jan 22 00:41:14.684000 audit: BPF prog-id=42 op=LOAD Jan 22 00:41:14.684000 audit: BPF prog-id=26 op=UNLOAD Jan 22 00:41:14.684000 audit: BPF prog-id=27 op=UNLOAD Jan 22 00:41:14.702357 systemd[1]: Reload requested from client PID 1648 ('systemctl') (unit ensure-sysext.service)... Jan 22 00:41:14.702377 systemd[1]: Reloading... Jan 22 00:41:14.728338 systemd-tmpfiles[1649]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. 
Jan 22 00:41:14.728873 systemd-tmpfiles[1649]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jan 22 00:41:14.729288 systemd-tmpfiles[1649]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 22 00:41:14.733860 systemd-udevd[1650]: Using default interface naming scheme 'v257'. Jan 22 00:41:14.736780 systemd-tmpfiles[1649]: ACLs are not supported, ignoring. Jan 22 00:41:14.737033 systemd-tmpfiles[1649]: ACLs are not supported, ignoring. Jan 22 00:41:14.744590 systemd-tmpfiles[1649]: Detected autofs mount point /boot during canonicalization of boot. Jan 22 00:41:14.745371 systemd-tmpfiles[1649]: Skipping /boot Jan 22 00:41:14.755066 systemd-tmpfiles[1649]: Detected autofs mount point /boot during canonicalization of boot. Jan 22 00:41:14.755079 systemd-tmpfiles[1649]: Skipping /boot Jan 22 00:41:14.794850 zram_generator::config[1678]: No configuration found. Jan 22 00:41:14.878030 (udev-worker)[1691]: Network interface NamePolicy= disabled on kernel command line. Jan 22 00:41:14.946837 kernel: mousedev: PS/2 mouse device common for all mice Jan 22 00:41:14.982827 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jan 22 00:41:14.990203 kernel: ACPI: button: Power Button [PWRF] Jan 22 00:41:14.990308 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input4 Jan 22 00:41:14.999893 kernel: ACPI: button: Sleep Button [SLPF] Jan 22 00:41:15.006887 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr Jan 22 00:41:15.346856 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 22 00:41:15.347754 systemd[1]: Reloading finished in 644 ms. Jan 22 00:41:15.358075 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 22 00:41:15.359000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:41:15.365562 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 22 00:41:15.365000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 22 00:41:15.370000 audit: BPF prog-id=43 op=LOAD Jan 22 00:41:15.370000 audit: BPF prog-id=44 op=LOAD Jan 22 00:41:15.370000 audit: BPF prog-id=28 op=UNLOAD Jan 22 00:41:15.370000 audit: BPF prog-id=29 op=UNLOAD Jan 22 00:41:15.371000 audit: BPF prog-id=45 op=LOAD Jan 22 00:41:15.373000 audit: BPF prog-id=36 op=UNLOAD Jan 22 00:41:15.373000 audit: BPF prog-id=46 op=LOAD Jan 22 00:41:15.373000 audit: BPF prog-id=47 op=LOAD Jan 22 00:41:15.373000 audit: BPF prog-id=37 op=UNLOAD Jan 22 00:41:15.373000 audit: BPF prog-id=38 op=UNLOAD Jan 22 00:41:15.374000 audit: BPF prog-id=48 op=LOAD Jan 22 00:41:15.374000 audit: BPF prog-id=30 op=UNLOAD Jan 22 00:41:15.374000 audit: BPF prog-id=49 op=LOAD Jan 22 00:41:15.374000 audit: BPF prog-id=50 op=LOAD Jan 22 00:41:15.374000 audit: BPF prog-id=31 op=UNLOAD Jan 22 00:41:15.374000 audit: BPF prog-id=32 op=UNLOAD Jan 22 00:41:15.375000 audit: BPF prog-id=51 op=LOAD Jan 22 00:41:15.375000 audit: BPF prog-id=33 op=UNLOAD Jan 22 00:41:15.375000 audit: BPF prog-id=52 op=LOAD Jan 22 00:41:15.375000 audit: BPF prog-id=53 op=LOAD Jan 22 00:41:15.375000 audit: BPF prog-id=34 op=UNLOAD Jan 22 00:41:15.375000 audit: BPF prog-id=35 op=UNLOAD Jan 22 00:41:15.376000 audit: BPF prog-id=54 op=LOAD Jan 22 00:41:15.376000 audit: BPF prog-id=40 op=UNLOAD Jan 22 00:41:15.376000 audit: BPF prog-id=55 op=LOAD Jan 22 00:41:15.377000 audit: BPF prog-id=56 op=LOAD Jan 22 00:41:15.377000 audit: BPF prog-id=41 op=UNLOAD Jan 22 00:41:15.377000 audit: BPF prog-id=42 op=UNLOAD Jan 22 00:41:15.379000 audit: BPF prog-id=57 op=LOAD Jan 22 00:41:15.379000 audit: BPF prog-id=39 op=UNLOAD Jan 22 00:41:15.443197 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 22 00:41:15.448408 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 22 00:41:15.452209 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 22 00:41:15.453061 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 22 00:41:15.456373 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 22 00:41:15.463053 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 22 00:41:15.471193 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 22 00:41:15.473114 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 22 00:41:15.473455 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 22 00:41:15.478228 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 22 00:41:15.478915 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 22 00:41:15.482397 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 22 00:41:15.484000 audit: BPF prog-id=58 op=LOAD Jan 22 00:41:15.493937 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 22 00:41:15.497413 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... 
Jan 22 00:41:15.498387 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 22 00:41:15.506897 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 22 00:41:15.507341 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 22 00:41:15.507638 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 22 00:41:15.508631 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 22 00:41:15.508814 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 22 00:41:15.509278 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 22 00:41:15.518587 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 22 00:41:15.519600 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 22 00:41:15.529227 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 22 00:41:15.531496 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 22 00:41:15.532865 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 22 00:41:15.533083 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 22 00:41:15.533351 systemd[1]: Reached target time-set.target - System Time Set. Jan 22 00:41:15.535587 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 22 00:41:15.544000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:41:15.544000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:41:15.550000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:41:15.550000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:41:15.542009 systemd[1]: modprobe@loop.service: Deactivated successfully. 
Jan 22 00:41:15.543172 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 22 00:41:15.549199 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 22 00:41:15.549494 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 22 00:41:15.570000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:41:15.570000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:41:15.568211 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 22 00:41:15.568535 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 22 00:41:15.571448 systemd[1]: Finished ensure-sysext.service. Jan 22 00:41:15.573000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:41:15.579266 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 22 00:41:15.580677 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 22 00:41:15.581084 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 22 00:41:15.581000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:41:15.581000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:41:15.583731 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 22 00:41:15.638000 audit[1862]: SYSTEM_BOOT pid=1862 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Jan 22 00:41:15.650000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:41:15.651038 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 22 00:41:15.691955 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 22 00:41:15.691000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:41:15.702112 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 22 00:41:15.727471 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. 
Jan 22 00:41:15.727000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:41:15.764000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jan 22 00:41:15.764000 audit[1896]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fffaa3f5070 a2=420 a3=0 items=0 ppid=1853 pid=1896 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:41:15.764000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 22 00:41:15.766017 augenrules[1896]: No rules Jan 22 00:41:15.769166 systemd[1]: audit-rules.service: Deactivated successfully. Jan 22 00:41:15.769973 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 22 00:41:15.804856 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jan 22 00:41:15.818355 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 22 00:41:15.824232 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 22 00:41:15.863008 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 22 00:41:15.863638 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 22 00:41:15.867091 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 22 00:41:15.900519 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 22 00:41:15.960444 systemd-networkd[1860]: lo: Link UP Jan 22 00:41:15.960830 systemd-networkd[1860]: lo: Gained carrier Jan 22 00:41:15.963691 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 22 00:41:15.964898 systemd[1]: Reached target network.target - Network. Jan 22 00:41:15.966265 systemd-networkd[1860]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 22 00:41:15.966278 systemd-networkd[1860]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 22 00:41:15.969992 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jan 22 00:41:15.973125 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 22 00:41:15.980350 systemd-networkd[1860]: eth0: Link UP Jan 22 00:41:15.983985 systemd-networkd[1860]: eth0: Gained carrier Jan 22 00:41:15.984033 systemd-networkd[1860]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 22 00:41:15.994003 systemd-networkd[1860]: eth0: DHCPv4 address 172.31.26.54/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jan 22 00:41:16.031098 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jan 22 00:41:16.059456 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 22 00:41:16.417210 ldconfig[1858]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. 
Jan 22 00:41:16.428318 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 22 00:41:16.430718 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 22 00:41:16.460821 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 22 00:41:16.461984 systemd[1]: Reached target sysinit.target - System Initialization. Jan 22 00:41:16.462535 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 22 00:41:16.462993 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 22 00:41:16.463807 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Jan 22 00:41:16.464403 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 22 00:41:16.464937 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 22 00:41:16.465337 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update. Jan 22 00:41:16.465980 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update. Jan 22 00:41:16.466375 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 22 00:41:16.466763 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 22 00:41:16.466827 systemd[1]: Reached target paths.target - Path Units. Jan 22 00:41:16.467190 systemd[1]: Reached target timers.target - Timer Units. Jan 22 00:41:16.469007 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 22 00:41:16.471051 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 22 00:41:16.473906 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jan 22 00:41:16.474487 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jan 22 00:41:16.474994 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jan 22 00:41:16.477551 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 22 00:41:16.478485 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jan 22 00:41:16.479716 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 22 00:41:16.481104 systemd[1]: Reached target sockets.target - Socket Units. Jan 22 00:41:16.481506 systemd[1]: Reached target basic.target - Basic System. Jan 22 00:41:16.482033 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 22 00:41:16.482080 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 22 00:41:16.483249 systemd[1]: Starting containerd.service - containerd container runtime... Jan 22 00:41:16.487987 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 22 00:41:16.491980 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 22 00:41:16.495982 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 22 00:41:16.503917 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 22 00:41:16.508830 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... 
Jan 22 00:41:16.509870 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 22 00:41:16.512617 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Jan 22 00:41:16.521517 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 22 00:41:16.535493 systemd[1]: Started ntpd.service - Network Time Service. Jan 22 00:41:16.541782 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 22 00:41:16.546177 systemd[1]: Starting setup-oem.service - Setup OEM... Jan 22 00:41:16.566647 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 22 00:41:16.576219 jq[1926]: false Jan 22 00:41:16.579156 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 22 00:41:16.586888 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 22 00:41:16.588890 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 22 00:41:16.589628 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 22 00:41:16.592829 google_oslogin_nss_cache[1928]: oslogin_cache_refresh[1928]: Refreshing passwd entry cache Jan 22 00:41:16.592092 oslogin_cache_refresh[1928]: Refreshing passwd entry cache Jan 22 00:41:16.595537 systemd[1]: Starting update-engine.service - Update Engine... Jan 22 00:41:16.618060 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 22 00:41:16.620039 google_oslogin_nss_cache[1928]: oslogin_cache_refresh[1928]: Failure getting users, quitting Jan 22 00:41:16.620131 google_oslogin_nss_cache[1928]: oslogin_cache_refresh[1928]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 22 00:41:16.620131 google_oslogin_nss_cache[1928]: oslogin_cache_refresh[1928]: Refreshing group entry cache Jan 22 00:41:16.620035 oslogin_cache_refresh[1928]: Failure getting users, quitting Jan 22 00:41:16.620058 oslogin_cache_refresh[1928]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 22 00:41:16.620114 oslogin_cache_refresh[1928]: Refreshing group entry cache Jan 22 00:41:16.624390 google_oslogin_nss_cache[1928]: oslogin_cache_refresh[1928]: Failure getting groups, quitting Jan 22 00:41:16.624386 oslogin_cache_refresh[1928]: Failure getting groups, quitting Jan 22 00:41:16.624545 google_oslogin_nss_cache[1928]: oslogin_cache_refresh[1928]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 22 00:41:16.624404 oslogin_cache_refresh[1928]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 22 00:41:16.636983 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 22 00:41:16.638593 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 22 00:41:16.639988 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 22 00:41:16.640426 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Jan 22 00:41:16.641869 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. 
Jan 22 00:41:16.648063 jq[1944]: true
Jan 22 00:41:16.678990 extend-filesystems[1927]: Found /dev/nvme0n1p6
Jan 22 00:41:16.688847 tar[1950]: linux-amd64/LICENSE
Jan 22 00:41:16.698836 tar[1950]: linux-amd64/helm
Jan 22 00:41:16.696984 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jan 22 00:41:16.697382 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jan 22 00:41:16.706823 extend-filesystems[1927]: Found /dev/nvme0n1p9
Jan 22 00:41:16.715869 extend-filesystems[1927]: Checking size of /dev/nvme0n1p9
Jan 22 00:41:16.716561 jq[1954]: true
Jan 22 00:41:16.726782 systemd[1]: motdgen.service: Deactivated successfully.
Jan 22 00:41:16.728455 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jan 22 00:41:16.751964 ntpd[1930]: ntpd 4.2.8p18@1.4062-o Wed Jan 21 21:31:41 UTC 2026 (1): Starting
Jan 22 00:41:16.752896 ntpd[1930]: 22 Jan 00:41:16 ntpd[1930]: ntpd 4.2.8p18@1.4062-o Wed Jan 21 21:31:41 UTC 2026 (1): Starting
Jan 22 00:41:16.752896 ntpd[1930]: 22 Jan 00:41:16 ntpd[1930]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Jan 22 00:41:16.752896 ntpd[1930]: 22 Jan 00:41:16 ntpd[1930]: ----------------------------------------------------
Jan 22 00:41:16.752896 ntpd[1930]: 22 Jan 00:41:16 ntpd[1930]: ntp-4 is maintained by Network Time Foundation,
Jan 22 00:41:16.752896 ntpd[1930]: 22 Jan 00:41:16 ntpd[1930]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Jan 22 00:41:16.752896 ntpd[1930]: 22 Jan 00:41:16 ntpd[1930]: corporation. Support and training for ntp-4 are
Jan 22 00:41:16.752896 ntpd[1930]: 22 Jan 00:41:16 ntpd[1930]: available at https://www.nwtime.org/support
Jan 22 00:41:16.752896 ntpd[1930]: 22 Jan 00:41:16 ntpd[1930]: ----------------------------------------------------
Jan 22 00:41:16.752039 ntpd[1930]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Jan 22 00:41:16.752049 ntpd[1930]: ----------------------------------------------------
Jan 22 00:41:16.752057 ntpd[1930]: ntp-4 is maintained by Network Time Foundation,
Jan 22 00:41:16.752065 ntpd[1930]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Jan 22 00:41:16.752074 ntpd[1930]: corporation. Support and training for ntp-4 are
Jan 22 00:41:16.752083 ntpd[1930]: available at https://www.nwtime.org/support
Jan 22 00:41:16.752092 ntpd[1930]: ----------------------------------------------------
Jan 22 00:41:16.761108 ntpd[1930]: proto: precision = 0.080 usec (-23)
Jan 22 00:41:16.761474 ntpd[1930]: 22 Jan 00:41:16 ntpd[1930]: proto: precision = 0.080 usec (-23)
Jan 22 00:41:16.774971 update_engine[1942]: I20260122 00:41:16.771158 1942 main.cc:92] Flatcar Update Engine starting
Jan 22 00:41:16.780043 ntpd[1930]: 22 Jan 00:41:16 ntpd[1930]: basedate set to 2026-01-09
Jan 22 00:41:16.780043 ntpd[1930]: 22 Jan 00:41:16 ntpd[1930]: gps base set to 2026-01-11 (week 2401)
Jan 22 00:41:16.780043 ntpd[1930]: 22 Jan 00:41:16 ntpd[1930]: Listen and drop on 0 v6wildcard [::]:123
Jan 22 00:41:16.780043 ntpd[1930]: 22 Jan 00:41:16 ntpd[1930]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Jan 22 00:41:16.780043 ntpd[1930]: 22 Jan 00:41:16 ntpd[1930]: Listen normally on 2 lo 127.0.0.1:123
Jan 22 00:41:16.780043 ntpd[1930]: 22 Jan 00:41:16 ntpd[1930]: Listen normally on 3 eth0 172.31.26.54:123
Jan 22 00:41:16.780043 ntpd[1930]: 22 Jan 00:41:16 ntpd[1930]: Listen normally on 4 lo [::1]:123
Jan 22 00:41:16.780043 ntpd[1930]: 22 Jan 00:41:16 ntpd[1930]: bind(21) AF_INET6 [fe80::42b:c2ff:fe27:6b%2]:123 flags 0x811 failed: Cannot assign requested address
Jan 22 00:41:16.780043 ntpd[1930]: 22 Jan 00:41:16 ntpd[1930]: unable to create socket on eth0 (5) for [fe80::42b:c2ff:fe27:6b%2]:123
Jan 22 00:41:16.773368 ntpd[1930]: basedate set to 2026-01-09
Jan 22 00:41:16.780418 coreos-metadata[1923]: Jan 22 00:41:16.771 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Jan 22 00:41:16.773396 ntpd[1930]: gps base set to 2026-01-11 (week 2401)
Jan 22 00:41:16.773544 ntpd[1930]: Listen and drop on 0 v6wildcard [::]:123
Jan 22 00:41:16.773577 ntpd[1930]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Jan 22 00:41:16.773833 ntpd[1930]: Listen normally on 2 lo 127.0.0.1:123
Jan 22 00:41:16.782926 systemd-coredump[1984]: Process 1930 (ntpd) of user 0 terminated abnormally with signal 11/SEGV, processing...
Jan 22 00:41:16.790244 update_engine[1942]: I20260122 00:41:16.790033 1942 update_check_scheduler.cc:74] Next update check in 10m17s
Jan 22 00:41:16.773867 ntpd[1930]: Listen normally on 3 eth0 172.31.26.54:123
Jan 22 00:41:16.786845 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jan 22 00:41:16.773896 ntpd[1930]: Listen normally on 4 lo [::1]:123
Jan 22 00:41:16.773927 ntpd[1930]: bind(21) AF_INET6 [fe80::42b:c2ff:fe27:6b%2]:123 flags 0x811 failed: Cannot assign requested address
Jan 22 00:41:16.773955 ntpd[1930]: unable to create socket on eth0 (5) for [fe80::42b:c2ff:fe27:6b%2]:123
Jan 22 00:41:16.795364 systemd[1]: Created slice system-systemd\x2dcoredump.slice - Slice /system/systemd-coredump.
Jan 22 00:41:16.805328 coreos-metadata[1923]: Jan 22 00:41:16.794 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Jan 22 00:41:16.805328 coreos-metadata[1923]: Jan 22 00:41:16.796 INFO Fetch successful Jan 22 00:41:16.805328 coreos-metadata[1923]: Jan 22 00:41:16.796 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Jan 22 00:41:16.805328 coreos-metadata[1923]: Jan 22 00:41:16.797 INFO Fetch successful Jan 22 00:41:16.805328 coreos-metadata[1923]: Jan 22 00:41:16.797 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Jan 22 00:41:16.805328 coreos-metadata[1923]: Jan 22 00:41:16.798 INFO Fetch successful Jan 22 00:41:16.805328 coreos-metadata[1923]: Jan 22 00:41:16.798 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Jan 22 00:41:16.805328 coreos-metadata[1923]: Jan 22 00:41:16.800 INFO Fetch successful Jan 22 00:41:16.805328 coreos-metadata[1923]: Jan 22 00:41:16.800 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Jan 22 00:41:16.805328 coreos-metadata[1923]: Jan 22 00:41:16.801 INFO Fetch failed with 404: resource not found Jan 22 00:41:16.805328 coreos-metadata[1923]: Jan 22 00:41:16.801 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Jan 22 00:41:16.805328 coreos-metadata[1923]: Jan 22 00:41:16.802 INFO Fetch successful Jan 22 00:41:16.805328 coreos-metadata[1923]: Jan 22 00:41:16.802 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Jan 22 00:41:16.805328 coreos-metadata[1923]: Jan 22 00:41:16.802 INFO Fetch successful Jan 22 00:41:16.805328 coreos-metadata[1923]: Jan 22 00:41:16.802 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Jan 22 00:41:16.783696 dbus-daemon[1924]: [system] SELinux support is enabled Jan 22 00:41:16.797085 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 22 00:41:16.788300 dbus-daemon[1924]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1860 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jan 22 00:41:16.797134 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 22 00:41:16.798193 dbus-daemon[1924]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 22 00:41:16.797909 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 22 00:41:16.797933 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 22 00:41:16.803341 systemd[1]: Started systemd-coredump@0-1984-0.service - Process Core Dump (PID 1984/UID 0). Jan 22 00:41:16.805279 systemd[1]: Started update-engine.service - Update Engine. 
Jan 22 00:41:16.818868 coreos-metadata[1923]: Jan 22 00:41:16.815 INFO Fetch successful Jan 22 00:41:16.818868 coreos-metadata[1923]: Jan 22 00:41:16.815 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Jan 22 00:41:16.818868 coreos-metadata[1923]: Jan 22 00:41:16.818 INFO Fetch successful Jan 22 00:41:16.818868 coreos-metadata[1923]: Jan 22 00:41:16.818 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Jan 22 00:41:16.820489 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Jan 22 00:41:16.825988 coreos-metadata[1923]: Jan 22 00:41:16.823 INFO Fetch successful Jan 22 00:41:16.833822 extend-filesystems[1927]: Resized partition /dev/nvme0n1p9 Jan 22 00:41:16.858948 extend-filesystems[1995]: resize2fs 1.47.3 (8-Jul-2025) Jan 22 00:41:16.876280 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 1617920 to 2604027 blocks Jan 22 00:41:16.895647 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 22 00:41:16.924299 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 2604027 Jan 22 00:41:16.958834 extend-filesystems[1995]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Jan 22 00:41:16.958834 extend-filesystems[1995]: old_desc_blocks = 1, new_desc_blocks = 2 Jan 22 00:41:16.958834 extend-filesystems[1995]: The filesystem on /dev/nvme0n1p9 is now 2604027 (4k) blocks long. Jan 22 00:41:16.958503 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 22 00:41:16.974698 extend-filesystems[1927]: Resized filesystem in /dev/nvme0n1p9 Jan 22 00:41:16.975558 sshd_keygen[1980]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 22 00:41:16.960203 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 22 00:41:16.983969 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 22 00:41:16.989419 bash[2012]: Updated "/home/core/.ssh/authorized_keys" Jan 22 00:41:16.991028 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 22 00:41:17.001178 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 22 00:41:17.007490 systemd[1]: Finished setup-oem.service - Setup OEM. Jan 22 00:41:17.020371 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 22 00:41:17.029727 systemd[1]: Starting sshkeys.service... Jan 22 00:41:17.159393 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 22 00:41:17.164670 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 22 00:41:17.239333 systemd-logind[1939]: Watching system buttons on /dev/input/event2 (Power Button) Jan 22 00:41:17.239370 systemd-logind[1939]: Watching system buttons on /dev/input/event3 (Sleep Button) Jan 22 00:41:17.239394 systemd-logind[1939]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 22 00:41:17.241019 systemd-logind[1939]: New seat seat0. Jan 22 00:41:17.242045 systemd[1]: Started systemd-logind.service - User Login Management. Jan 22 00:41:17.343341 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 22 00:41:17.353328 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 22 00:41:17.360095 systemd[1]: Started sshd@0-172.31.26.54:22-68.220.241.50:58984.service - OpenSSH per-connection server daemon (68.220.241.50:58984). 
Jan 22 00:41:17.372751 coreos-metadata[2074]: Jan 22 00:41:17.372 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jan 22 00:41:17.374095 coreos-metadata[2074]: Jan 22 00:41:17.374 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Jan 22 00:41:17.375059 coreos-metadata[2074]: Jan 22 00:41:17.374 INFO Fetch successful Jan 22 00:41:17.375186 coreos-metadata[2074]: Jan 22 00:41:17.375 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Jan 22 00:41:17.376464 coreos-metadata[2074]: Jan 22 00:41:17.376 INFO Fetch successful Jan 22 00:41:17.379626 unknown[2074]: wrote ssh authorized keys file for user: core Jan 22 00:41:17.403532 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jan 22 00:41:17.418171 dbus-daemon[1924]: [system] Successfully activated service 'org.freedesktop.hostname1' Jan 22 00:41:17.429499 dbus-daemon[1924]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=1989 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jan 22 00:41:17.440053 systemd[1]: Starting polkit.service - Authorization Manager... Jan 22 00:41:17.452730 update-ssh-keys[2120]: Updated "/home/core/.ssh/authorized_keys" Jan 22 00:41:17.448937 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 22 00:41:17.469886 systemd[1]: Finished sshkeys.service. Jan 22 00:41:17.476984 locksmithd[1991]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 22 00:41:17.487712 systemd[1]: issuegen.service: Deactivated successfully. Jan 22 00:41:17.488054 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 22 00:41:17.497256 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 22 00:41:17.610613 containerd[1953]: time="2026-01-22T00:41:17Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jan 22 00:41:17.617288 containerd[1953]: time="2026-01-22T00:41:17.616719405Z" level=info msg="starting containerd" revision=fcd43222d6b07379a4be9786bda52438f0dd16a1 version=v2.1.5 Jan 22 00:41:17.639400 systemd-coredump[1986]: Process 1930 (ntpd) of user 0 dumped core. Module /bin/ntpd without build-id. Module libnss_usrfiles.so.2 without build-id. Module libgcc_s.so.1 without build-id. Module ld-linux-x86-64.so.2 without build-id. Module libc.so.6 without build-id. Module libcrypto.so.3 without build-id. Module libm.so.6 without build-id. Module libcap.so.2 without build-id. Stack trace of thread 1930: #0 0x000055906e7e6aeb n/a (/bin/ntpd + 0x68aeb) #1 0x000055906e78fcdf n/a (/bin/ntpd + 0x11cdf) #2 0x000055906e790575 n/a (/bin/ntpd + 0x12575) #3 0x000055906e78bd8a n/a (/bin/ntpd + 0xdd8a) #4 0x000055906e78d5d3 n/a (/bin/ntpd + 0xf5d3) #5 0x000055906e795fd1 n/a (/bin/ntpd + 0x17fd1) #6 0x000055906e786c2d n/a (/bin/ntpd + 0x8c2d) #7 0x00007f1eb259b16c n/a (libc.so.6 + 0x2716c) #8 0x00007f1eb259b229 __libc_start_main (libc.so.6 + 0x27229) #9 0x000055906e786c55 n/a (/bin/ntpd + 0x8c55) ELF object binary architecture: AMD x86-64 Jan 22 00:41:17.644900 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. 
Jan 22 00:41:17.646494 systemd[1]: ntpd.service: Main process exited, code=dumped, status=11/SEGV Jan 22 00:41:17.646677 systemd[1]: ntpd.service: Failed with result 'core-dump'. Jan 22 00:41:17.652578 systemd[1]: systemd-coredump@0-1984-0.service: Deactivated successfully. Jan 22 00:41:17.663942 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 22 00:41:17.668223 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 22 00:41:17.669134 systemd[1]: Reached target getty.target - Login Prompts. Jan 22 00:41:17.701092 containerd[1953]: time="2026-01-22T00:41:17.700051210Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="16.567µs" Jan 22 00:41:17.701092 containerd[1953]: time="2026-01-22T00:41:17.700094840Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jan 22 00:41:17.701092 containerd[1953]: time="2026-01-22T00:41:17.700145469Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jan 22 00:41:17.701092 containerd[1953]: time="2026-01-22T00:41:17.700164112Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jan 22 00:41:17.701092 containerd[1953]: time="2026-01-22T00:41:17.700354935Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jan 22 00:41:17.701092 containerd[1953]: time="2026-01-22T00:41:17.700378155Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 22 00:41:17.701092 containerd[1953]: time="2026-01-22T00:41:17.700449621Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 22 00:41:17.701092 containerd[1953]: time="2026-01-22T00:41:17.700465304Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 22 00:41:17.701092 containerd[1953]: time="2026-01-22T00:41:17.700743947Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 22 00:41:17.701092 containerd[1953]: time="2026-01-22T00:41:17.700766032Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 22 00:41:17.705550 containerd[1953]: time="2026-01-22T00:41:17.700783852Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 22 00:41:17.705550 containerd[1953]: time="2026-01-22T00:41:17.704904809Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 22 00:41:17.705550 containerd[1953]: time="2026-01-22T00:41:17.705158183Z" level=info msg="skip loading plugin" error="EROFS unsupported, please `modprobe erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 22 00:41:17.705550 containerd[1953]: time="2026-01-22T00:41:17.705176929Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jan 22 00:41:17.705550 containerd[1953]: time="2026-01-22T00:41:17.705291514Z" level=info msg="loading 
plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jan 22 00:41:17.705550 containerd[1953]: time="2026-01-22T00:41:17.705503488Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 22 00:41:17.705550 containerd[1953]: time="2026-01-22T00:41:17.705545128Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 22 00:41:17.705550 containerd[1953]: time="2026-01-22T00:41:17.705558451Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jan 22 00:41:17.705960 containerd[1953]: time="2026-01-22T00:41:17.705591727Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jan 22 00:41:17.707316 containerd[1953]: time="2026-01-22T00:41:17.706670420Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jan 22 00:41:17.707316 containerd[1953]: time="2026-01-22T00:41:17.706806955Z" level=info msg="metadata content store policy set" policy=shared Jan 22 00:41:17.725128 containerd[1953]: time="2026-01-22T00:41:17.722328938Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jan 22 00:41:17.725128 containerd[1953]: time="2026-01-22T00:41:17.722418769Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 22 00:41:17.725128 containerd[1953]: time="2026-01-22T00:41:17.722524003Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 22 00:41:17.725128 containerd[1953]: time="2026-01-22T00:41:17.722542872Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jan 22 00:41:17.725128 containerd[1953]: time="2026-01-22T00:41:17.722562666Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jan 22 00:41:17.725128 containerd[1953]: time="2026-01-22T00:41:17.722580543Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jan 22 00:41:17.725128 containerd[1953]: time="2026-01-22T00:41:17.722598026Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jan 22 00:41:17.725128 containerd[1953]: time="2026-01-22T00:41:17.722620052Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jan 22 00:41:17.725128 containerd[1953]: time="2026-01-22T00:41:17.722638103Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jan 22 00:41:17.725128 containerd[1953]: time="2026-01-22T00:41:17.722654849Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jan 22 00:41:17.725128 containerd[1953]: time="2026-01-22T00:41:17.722670318Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jan 22 00:41:17.725128 containerd[1953]: time="2026-01-22T00:41:17.722690584Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jan 22 00:41:17.725128 
containerd[1953]: time="2026-01-22T00:41:17.722706316Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jan 22 00:41:17.725128 containerd[1953]: time="2026-01-22T00:41:17.722724403Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jan 22 00:41:17.729358 containerd[1953]: time="2026-01-22T00:41:17.725952066Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jan 22 00:41:17.729358 containerd[1953]: time="2026-01-22T00:41:17.726006868Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jan 22 00:41:17.729358 containerd[1953]: time="2026-01-22T00:41:17.726032440Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jan 22 00:41:17.729358 containerd[1953]: time="2026-01-22T00:41:17.726057649Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jan 22 00:41:17.729358 containerd[1953]: time="2026-01-22T00:41:17.726075977Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jan 22 00:41:17.729358 containerd[1953]: time="2026-01-22T00:41:17.726091104Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jan 22 00:41:17.729358 containerd[1953]: time="2026-01-22T00:41:17.726110020Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jan 22 00:41:17.729358 containerd[1953]: time="2026-01-22T00:41:17.726129676Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jan 22 00:41:17.729358 containerd[1953]: time="2026-01-22T00:41:17.726147156Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jan 22 00:41:17.729358 containerd[1953]: time="2026-01-22T00:41:17.726163661Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jan 22 00:41:17.729358 containerd[1953]: time="2026-01-22T00:41:17.726179976Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jan 22 00:41:17.729358 containerd[1953]: time="2026-01-22T00:41:17.726217442Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jan 22 00:41:17.729358 containerd[1953]: time="2026-01-22T00:41:17.726278288Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jan 22 00:41:17.729358 containerd[1953]: time="2026-01-22T00:41:17.726296509Z" level=info msg="Start snapshots syncer" Jan 22 00:41:17.729358 containerd[1953]: time="2026-01-22T00:41:17.726318340Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jan 22 00:41:17.730065 containerd[1953]: time="2026-01-22T00:41:17.726688727Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jan 22 00:41:17.730065 containerd[1953]: time="2026-01-22T00:41:17.726773011Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jan 22 00:41:17.730235 containerd[1953]: time="2026-01-22T00:41:17.728945545Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jan 22 00:41:17.730235 containerd[1953]: time="2026-01-22T00:41:17.729154998Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jan 22 00:41:17.730235 containerd[1953]: time="2026-01-22T00:41:17.729191826Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jan 22 00:41:17.730235 containerd[1953]: time="2026-01-22T00:41:17.729211736Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jan 22 00:41:17.730235 containerd[1953]: time="2026-01-22T00:41:17.729227771Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jan 22 00:41:17.730235 containerd[1953]: time="2026-01-22T00:41:17.729248378Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jan 22 00:41:17.730235 containerd[1953]: time="2026-01-22T00:41:17.729263966Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jan 22 00:41:17.730235 containerd[1953]: time="2026-01-22T00:41:17.729280286Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jan 22 00:41:17.730235 containerd[1953]: time="2026-01-22T00:41:17.729295690Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jan 22 
00:41:17.730235 containerd[1953]: time="2026-01-22T00:41:17.729312495Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jan 22 00:41:17.731831 containerd[1953]: time="2026-01-22T00:41:17.731326394Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 22 00:41:17.731831 containerd[1953]: time="2026-01-22T00:41:17.731432049Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 22 00:41:17.731831 containerd[1953]: time="2026-01-22T00:41:17.731449003Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 22 00:41:17.731831 containerd[1953]: time="2026-01-22T00:41:17.731464155Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 22 00:41:17.731831 containerd[1953]: time="2026-01-22T00:41:17.731476647Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jan 22 00:41:17.731831 containerd[1953]: time="2026-01-22T00:41:17.731491365Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jan 22 00:41:17.731831 containerd[1953]: time="2026-01-22T00:41:17.731510741Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jan 22 00:41:17.731831 containerd[1953]: time="2026-01-22T00:41:17.731536387Z" level=info msg="runtime interface created" Jan 22 00:41:17.731831 containerd[1953]: time="2026-01-22T00:41:17.731544909Z" level=info msg="created NRI interface" Jan 22 00:41:17.731831 containerd[1953]: time="2026-01-22T00:41:17.731557504Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jan 22 00:41:17.731831 containerd[1953]: time="2026-01-22T00:41:17.731581141Z" level=info msg="Connect containerd service" Jan 22 00:41:17.731831 containerd[1953]: time="2026-01-22T00:41:17.731618621Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 22 00:41:17.739832 containerd[1953]: time="2026-01-22T00:41:17.737426126Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 22 00:41:17.759343 systemd[1]: ntpd.service: Scheduled restart job, restart counter is at 1. Jan 22 00:41:17.763005 systemd[1]: Started ntpd.service - Network Time Service. Jan 22 00:41:17.808697 ntpd[2165]: ntpd 4.2.8p18@1.4062-o Wed Jan 21 21:31:41 UTC 2026 (1): Starting Jan 22 00:41:17.819465 systemd-coredump[2167]: Process 2165 (ntpd) of user 0 terminated abnormally with signal 11/SEGV, processing... Jan 22 00:41:17.836748 ntpd[2165]: 22 Jan 00:41:17 ntpd[2165]: ntpd 4.2.8p18@1.4062-o Wed Jan 21 21:31:41 UTC 2026 (1): Starting Jan 22 00:41:17.836748 ntpd[2165]: 22 Jan 00:41:17 ntpd[2165]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 22 00:41:17.836748 ntpd[2165]: 22 Jan 00:41:17 ntpd[2165]: ---------------------------------------------------- Jan 22 00:41:17.836748 ntpd[2165]: 22 Jan 00:41:17 ntpd[2165]: ntp-4 is maintained by Network Time Foundation, Jan 22 00:41:17.836748 ntpd[2165]: 22 Jan 00:41:17 ntpd[2165]: Inc. 
(NTF), a non-profit 501(c)(3) public-benefit Jan 22 00:41:17.836748 ntpd[2165]: 22 Jan 00:41:17 ntpd[2165]: corporation. Support and training for ntp-4 are Jan 22 00:41:17.836748 ntpd[2165]: 22 Jan 00:41:17 ntpd[2165]: available at https://www.nwtime.org/support Jan 22 00:41:17.836748 ntpd[2165]: 22 Jan 00:41:17 ntpd[2165]: ---------------------------------------------------- Jan 22 00:41:17.836748 ntpd[2165]: 22 Jan 00:41:17 ntpd[2165]: proto: precision = 0.083 usec (-23) Jan 22 00:41:17.836748 ntpd[2165]: 22 Jan 00:41:17 ntpd[2165]: basedate set to 2026-01-09 Jan 22 00:41:17.836748 ntpd[2165]: 22 Jan 00:41:17 ntpd[2165]: gps base set to 2026-01-11 (week 2401) Jan 22 00:41:17.836748 ntpd[2165]: 22 Jan 00:41:17 ntpd[2165]: Listen and drop on 0 v6wildcard [::]:123 Jan 22 00:41:17.836748 ntpd[2165]: 22 Jan 00:41:17 ntpd[2165]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 22 00:41:17.836748 ntpd[2165]: 22 Jan 00:41:17 ntpd[2165]: Listen normally on 2 lo 127.0.0.1:123 Jan 22 00:41:17.836748 ntpd[2165]: 22 Jan 00:41:17 ntpd[2165]: Listen normally on 3 eth0 172.31.26.54:123 Jan 22 00:41:17.836748 ntpd[2165]: 22 Jan 00:41:17 ntpd[2165]: Listen normally on 4 lo [::1]:123 Jan 22 00:41:17.836748 ntpd[2165]: 22 Jan 00:41:17 ntpd[2165]: bind(21) AF_INET6 [fe80::42b:c2ff:fe27:6b%2]:123 flags 0x811 failed: Cannot assign requested address Jan 22 00:41:17.836748 ntpd[2165]: 22 Jan 00:41:17 ntpd[2165]: unable to create socket on eth0 (5) for [fe80::42b:c2ff:fe27:6b%2]:123 Jan 22 00:41:17.808775 ntpd[2165]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 22 00:41:17.826757 systemd[1]: Started systemd-coredump@1-2167-0.service - Process Core Dump (PID 2167/UID 0). Jan 22 00:41:17.808804 ntpd[2165]: ---------------------------------------------------- Jan 22 00:41:17.808815 ntpd[2165]: ntp-4 is maintained by Network Time Foundation, Jan 22 00:41:17.808825 ntpd[2165]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 22 00:41:17.808835 ntpd[2165]: corporation. Support and training for ntp-4 are Jan 22 00:41:17.808844 ntpd[2165]: available at https://www.nwtime.org/support Jan 22 00:41:17.808854 ntpd[2165]: ---------------------------------------------------- Jan 22 00:41:17.809549 ntpd[2165]: proto: precision = 0.083 usec (-23) Jan 22 00:41:17.811446 ntpd[2165]: basedate set to 2026-01-09 Jan 22 00:41:17.811467 ntpd[2165]: gps base set to 2026-01-11 (week 2401) Jan 22 00:41:17.811569 ntpd[2165]: Listen and drop on 0 v6wildcard [::]:123 Jan 22 00:41:17.811598 ntpd[2165]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 22 00:41:17.812226 ntpd[2165]: Listen normally on 2 lo 127.0.0.1:123 Jan 22 00:41:17.812262 ntpd[2165]: Listen normally on 3 eth0 172.31.26.54:123 Jan 22 00:41:17.812293 ntpd[2165]: Listen normally on 4 lo [::1]:123 Jan 22 00:41:17.812325 ntpd[2165]: bind(21) AF_INET6 [fe80::42b:c2ff:fe27:6b%2]:123 flags 0x811 failed: Cannot assign requested address Jan 22 00:41:17.812348 ntpd[2165]: unable to create socket on eth0 (5) for [fe80::42b:c2ff:fe27:6b%2]:123 Jan 22 00:41:17.875612 polkitd[2124]: Started polkitd version 126 Jan 22 00:41:17.895469 polkitd[2124]: Loading rules from directory /etc/polkit-1/rules.d Jan 22 00:41:17.896923 systemd-networkd[1860]: eth0: Gained IPv6LL Jan 22 00:41:17.902393 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 22 00:41:17.903605 systemd[1]: Reached target network-online.target - Network is Online. Jan 22 00:41:17.909407 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. 
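The bind failure above is ntpd trying to listen on eth0's IPv6 link-local address before that address is usable (eth0 only gains IPv6LL a moment later; the ntpd restart at 00:41:18 succeeds). A minimal Python sketch of that kind of bind, assuming the eth0 interface and the fe80:: address from the log, with port 0 so it does not need root:

    import socket

    addr = "fe80::42b:c2ff:fe27:6b"        # link-local address from the log
    scope = socket.if_nametoindex("eth0")  # the "%2" zone index seen in the log
    s = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM)
    try:
        s.bind((addr, 0, 0, scope))        # fails with EADDRNOTAVAIL until the address is assigned
        print("bind succeeded")
    except OSError as e:
        print("bind failed:", e)
    finally:
        s.close()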
Jan 22 00:41:17.910635 polkitd[2124]: Loading rules from directory /run/polkit-1/rules.d Jan 22 00:41:17.910734 polkitd[2124]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Jan 22 00:41:17.914902 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 22 00:41:17.922114 polkitd[2124]: Loading rules from directory /usr/local/share/polkit-1/rules.d Jan 22 00:41:17.923969 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 22 00:41:17.922176 polkitd[2124]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Jan 22 00:41:17.922226 polkitd[2124]: Loading rules from directory /usr/share/polkit-1/rules.d Jan 22 00:41:17.934009 polkitd[2124]: Finished loading, compiling and executing 2 rules Jan 22 00:41:17.934726 systemd[1]: Started polkit.service - Authorization Manager. Jan 22 00:41:17.937484 dbus-daemon[1924]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jan 22 00:41:17.938282 polkitd[2124]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jan 22 00:41:17.960166 systemd-hostnamed[1989]: Hostname set to (transient) Jan 22 00:41:17.960937 systemd-resolved[1535]: System hostname changed to 'ip-172-31-26-54'. Jan 22 00:41:18.028472 sshd[2109]: Accepted publickey for core from 68.220.241.50 port 58984 ssh2: RSA SHA256:guZGX9gbNcoOyrr8VXliJQHZZzuPYZGvC0Dn+A+42nM Jan 22 00:41:18.036291 sshd-session[2109]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 22 00:41:18.056137 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 22 00:41:18.061403 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 22 00:41:18.117277 systemd-logind[1939]: New session 1 of user core. Jan 22 00:41:18.137988 containerd[1953]: time="2026-01-22T00:41:18.128532604Z" level=info msg="Start subscribing containerd event" Jan 22 00:41:18.137988 containerd[1953]: time="2026-01-22T00:41:18.128589065Z" level=info msg="Start recovering state" Jan 22 00:41:18.137988 containerd[1953]: time="2026-01-22T00:41:18.128706906Z" level=info msg="Start event monitor" Jan 22 00:41:18.137988 containerd[1953]: time="2026-01-22T00:41:18.128726698Z" level=info msg="Start cni network conf syncer for default" Jan 22 00:41:18.137988 containerd[1953]: time="2026-01-22T00:41:18.128737711Z" level=info msg="Start streaming server" Jan 22 00:41:18.137988 containerd[1953]: time="2026-01-22T00:41:18.128748504Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jan 22 00:41:18.137988 containerd[1953]: time="2026-01-22T00:41:18.128759684Z" level=info msg="runtime interface starting up..." Jan 22 00:41:18.137988 containerd[1953]: time="2026-01-22T00:41:18.128768427Z" level=info msg="starting plugins..." Jan 22 00:41:18.137988 containerd[1953]: time="2026-01-22T00:41:18.128784529Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jan 22 00:41:18.137988 containerd[1953]: time="2026-01-22T00:41:18.129898157Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 22 00:41:18.137988 containerd[1953]: time="2026-01-22T00:41:18.130004727Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Jan 22 00:41:18.137988 containerd[1953]: time="2026-01-22T00:41:18.130087662Z" level=info msg="containerd successfully booted in 0.521955s" Jan 22 00:41:18.130303 systemd[1]: Started containerd.service - containerd container runtime. Jan 22 00:41:18.141972 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 22 00:41:18.150699 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 22 00:41:18.158285 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 22 00:41:18.175359 (systemd)[2207]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 22 00:41:18.179994 systemd-coredump[2168]: Process 2165 (ntpd) of user 0 dumped core. Module /bin/ntpd without build-id. Module libnss_usrfiles.so.2 without build-id. Module libgcc_s.so.1 without build-id. Module ld-linux-x86-64.so.2 without build-id. Module libc.so.6 without build-id. Module libcrypto.so.3 without build-id. Module libm.so.6 without build-id. Module libcap.so.2 without build-id. Stack trace of thread 2165: #0 0x00005590c1292aeb n/a (/bin/ntpd + 0x68aeb) #1 0x00005590c123bcdf n/a (/bin/ntpd + 0x11cdf) #2 0x00005590c123c575 n/a (/bin/ntpd + 0x12575) #3 0x00005590c1237d8a n/a (/bin/ntpd + 0xdd8a) #4 0x00005590c12395d3 n/a (/bin/ntpd + 0xf5d3) #5 0x00005590c1241fd1 n/a (/bin/ntpd + 0x17fd1) #6 0x00005590c1232c2d n/a (/bin/ntpd + 0x8c2d) #7 0x00007f8d842c416c n/a (libc.so.6 + 0x2716c) #8 0x00007f8d842c4229 __libc_start_main (libc.so.6 + 0x27229) #9 0x00005590c1232c55 n/a (/bin/ntpd + 0x8c55) ELF object binary architecture: AMD x86-64 Jan 22 00:41:18.186111 systemd[1]: ntpd.service: Main process exited, code=dumped, status=11/SEGV Jan 22 00:41:18.186294 systemd[1]: ntpd.service: Failed with result 'core-dump'. Jan 22 00:41:18.211972 amazon-ssm-agent[2179]: Initializing new seelog logger Jan 22 00:41:18.211972 amazon-ssm-agent[2179]: New Seelog Logger Creation Complete Jan 22 00:41:18.211972 amazon-ssm-agent[2179]: 2026/01/22 00:41:18 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 22 00:41:18.211972 amazon-ssm-agent[2179]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 22 00:41:18.211972 amazon-ssm-agent[2179]: 2026/01/22 00:41:18 processing appconfig overrides Jan 22 00:41:18.211972 amazon-ssm-agent[2179]: 2026/01/22 00:41:18 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 22 00:41:18.211972 amazon-ssm-agent[2179]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 22 00:41:18.211972 amazon-ssm-agent[2179]: 2026/01/22 00:41:18 processing appconfig overrides Jan 22 00:41:18.211972 amazon-ssm-agent[2179]: 2026/01/22 00:41:18 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 22 00:41:18.211972 amazon-ssm-agent[2179]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 22 00:41:18.211972 amazon-ssm-agent[2179]: 2026/01/22 00:41:18 processing appconfig overrides Jan 22 00:41:18.211972 amazon-ssm-agent[2179]: 2026-01-22 00:41:18.2015 INFO Proxy environment variables: Jan 22 00:41:18.192363 systemd-logind[1939]: New session c1 of user core. Jan 22 00:41:18.194006 systemd[1]: systemd-coredump@1-2167-0.service: Deactivated successfully. Jan 22 00:41:18.215890 amazon-ssm-agent[2179]: 2026/01/22 00:41:18 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 22 00:41:18.215890 amazon-ssm-agent[2179]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
Jan 22 00:41:18.215890 amazon-ssm-agent[2179]: 2026/01/22 00:41:18 processing appconfig overrides Jan 22 00:41:18.310873 amazon-ssm-agent[2179]: 2026-01-22 00:41:18.2023 INFO no_proxy: Jan 22 00:41:18.361181 systemd[1]: ntpd.service: Scheduled restart job, restart counter is at 2. Jan 22 00:41:18.367164 systemd[1]: Started ntpd.service - Network Time Service. Jan 22 00:41:18.409778 amazon-ssm-agent[2179]: 2026-01-22 00:41:18.2023 INFO https_proxy: Jan 22 00:41:18.443917 ntpd[2222]: ntpd 4.2.8p18@1.4062-o Wed Jan 21 21:31:41 UTC 2026 (1): Starting Jan 22 00:41:18.444468 ntpd[2222]: 22 Jan 00:41:18 ntpd[2222]: ntpd 4.2.8p18@1.4062-o Wed Jan 21 21:31:41 UTC 2026 (1): Starting Jan 22 00:41:18.444759 ntpd[2222]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 22 00:41:18.445005 ntpd[2222]: 22 Jan 00:41:18 ntpd[2222]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 22 00:41:18.445059 ntpd[2222]: ---------------------------------------------------- Jan 22 00:41:18.445125 ntpd[2222]: 22 Jan 00:41:18 ntpd[2222]: ---------------------------------------------------- Jan 22 00:41:18.445173 ntpd[2222]: ntp-4 is maintained by Network Time Foundation, Jan 22 00:41:18.445236 ntpd[2222]: 22 Jan 00:41:18 ntpd[2222]: ntp-4 is maintained by Network Time Foundation, Jan 22 00:41:18.445284 ntpd[2222]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 22 00:41:18.445343 ntpd[2222]: 22 Jan 00:41:18 ntpd[2222]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 22 00:41:18.445395 ntpd[2222]: corporation. Support and training for ntp-4 are Jan 22 00:41:18.445458 ntpd[2222]: 22 Jan 00:41:18 ntpd[2222]: corporation. Support and training for ntp-4 are Jan 22 00:41:18.445504 ntpd[2222]: available at https://www.nwtime.org/support Jan 22 00:41:18.445672 ntpd[2222]: 22 Jan 00:41:18 ntpd[2222]: available at https://www.nwtime.org/support Jan 22 00:41:18.445726 ntpd[2222]: ---------------------------------------------------- Jan 22 00:41:18.445814 ntpd[2222]: 22 Jan 00:41:18 ntpd[2222]: ---------------------------------------------------- Jan 22 00:41:18.446737 ntpd[2222]: proto: precision = 0.097 usec (-23) Jan 22 00:41:18.446894 ntpd[2222]: 22 Jan 00:41:18 ntpd[2222]: proto: precision = 0.097 usec (-23) Jan 22 00:41:18.447203 ntpd[2222]: basedate set to 2026-01-09 Jan 22 00:41:18.447280 ntpd[2222]: 22 Jan 00:41:18 ntpd[2222]: basedate set to 2026-01-09 Jan 22 00:41:18.447329 ntpd[2222]: gps base set to 2026-01-11 (week 2401) Jan 22 00:41:18.447394 ntpd[2222]: 22 Jan 00:41:18 ntpd[2222]: gps base set to 2026-01-11 (week 2401) Jan 22 00:41:18.447624 ntpd[2222]: Listen and drop on 0 v6wildcard [::]:123 Jan 22 00:41:18.447710 ntpd[2222]: 22 Jan 00:41:18 ntpd[2222]: Listen and drop on 0 v6wildcard [::]:123 Jan 22 00:41:18.447783 ntpd[2222]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 22 00:41:18.447886 ntpd[2222]: 22 Jan 00:41:18 ntpd[2222]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 22 00:41:18.448600 ntpd[2222]: Listen normally on 2 lo 127.0.0.1:123 Jan 22 00:41:18.449827 ntpd[2222]: 22 Jan 00:41:18 ntpd[2222]: Listen normally on 2 lo 127.0.0.1:123 Jan 22 00:41:18.449827 ntpd[2222]: 22 Jan 00:41:18 ntpd[2222]: Listen normally on 3 eth0 172.31.26.54:123 Jan 22 00:41:18.449827 ntpd[2222]: 22 Jan 00:41:18 ntpd[2222]: Listen normally on 4 lo [::1]:123 Jan 22 00:41:18.449827 ntpd[2222]: 22 Jan 00:41:18 ntpd[2222]: Listen normally on 5 eth0 [fe80::42b:c2ff:fe27:6b%2]:123 Jan 22 00:41:18.449827 ntpd[2222]: 22 Jan 00:41:18 ntpd[2222]: Listening on routing socket on fd #22 for interface updates Jan 22 
00:41:18.448721 ntpd[2222]: Listen normally on 3 eth0 172.31.26.54:123 Jan 22 00:41:18.448753 ntpd[2222]: Listen normally on 4 lo [::1]:123 Jan 22 00:41:18.448780 ntpd[2222]: Listen normally on 5 eth0 [fe80::42b:c2ff:fe27:6b%2]:123 Jan 22 00:41:18.448833 ntpd[2222]: Listening on routing socket on fd #22 for interface updates Jan 22 00:41:18.452642 ntpd[2222]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 22 00:41:18.452759 ntpd[2222]: 22 Jan 00:41:18 ntpd[2222]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 22 00:41:18.452857 ntpd[2222]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 22 00:41:18.452928 ntpd[2222]: 22 Jan 00:41:18 ntpd[2222]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 22 00:41:18.510893 amazon-ssm-agent[2179]: 2026-01-22 00:41:18.2023 INFO http_proxy: Jan 22 00:41:18.528712 systemd[2207]: Queued start job for default target default.target. Jan 22 00:41:18.535740 systemd[2207]: Created slice app.slice - User Application Slice. Jan 22 00:41:18.536082 systemd[2207]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories. Jan 22 00:41:18.536108 systemd[2207]: Reached target paths.target - Paths. Jan 22 00:41:18.536187 systemd[2207]: Reached target timers.target - Timers. Jan 22 00:41:18.539929 systemd[2207]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 22 00:41:18.543994 systemd[2207]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories... Jan 22 00:41:18.586775 systemd[2207]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 22 00:41:18.587493 systemd[2207]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories. Jan 22 00:41:18.588053 systemd[2207]: Reached target sockets.target - Sockets. Jan 22 00:41:18.588125 systemd[2207]: Reached target basic.target - Basic System. Jan 22 00:41:18.588175 systemd[2207]: Reached target default.target - Main User Target. Jan 22 00:41:18.588214 systemd[2207]: Startup finished in 375ms. Jan 22 00:41:18.589215 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 22 00:41:18.600009 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 22 00:41:18.608302 amazon-ssm-agent[2179]: 2026-01-22 00:41:18.2025 INFO Checking if agent identity type OnPrem can be assumed Jan 22 00:41:18.629054 tar[1950]: linux-amd64/README.md Jan 22 00:41:18.637363 amazon-ssm-agent[2179]: 2026/01/22 00:41:18 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 22 00:41:18.637363 amazon-ssm-agent[2179]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 22 00:41:18.637508 amazon-ssm-agent[2179]: 2026/01/22 00:41:18 processing appconfig overrides Jan 22 00:41:18.650136 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 22 00:41:18.672333 amazon-ssm-agent[2179]: 2026-01-22 00:41:18.2027 INFO Checking if agent identity type EC2 can be assumed Jan 22 00:41:18.672333 amazon-ssm-agent[2179]: 2026-01-22 00:41:18.3563 INFO Agent will take identity from EC2 Jan 22 00:41:18.672333 amazon-ssm-agent[2179]: 2026-01-22 00:41:18.3577 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.3.0.0 Jan 22 00:41:18.672333 amazon-ssm-agent[2179]: 2026-01-22 00:41:18.3578 INFO [amazon-ssm-agent] OS: linux, Arch: amd64 Jan 22 00:41:18.672595 amazon-ssm-agent[2179]: 2026-01-22 00:41:18.3578 INFO [amazon-ssm-agent] Starting Core Agent Jan 22 00:41:18.672595 amazon-ssm-agent[2179]: 2026-01-22 00:41:18.3578 INFO [amazon-ssm-agent] Registrar detected. 
Attempting registration Jan 22 00:41:18.672595 amazon-ssm-agent[2179]: 2026-01-22 00:41:18.3578 INFO [Registrar] Starting registrar module Jan 22 00:41:18.672595 amazon-ssm-agent[2179]: 2026-01-22 00:41:18.3648 INFO [EC2Identity] Checking disk for registration info Jan 22 00:41:18.672595 amazon-ssm-agent[2179]: 2026-01-22 00:41:18.3649 INFO [EC2Identity] No registration info found for ec2 instance, attempting registration Jan 22 00:41:18.672595 amazon-ssm-agent[2179]: 2026-01-22 00:41:18.3649 INFO [EC2Identity] Generating registration keypair Jan 22 00:41:18.672595 amazon-ssm-agent[2179]: 2026-01-22 00:41:18.5684 INFO [EC2Identity] Checking write access before registering Jan 22 00:41:18.672595 amazon-ssm-agent[2179]: 2026-01-22 00:41:18.5771 INFO [EC2Identity] Registering EC2 instance with Systems Manager Jan 22 00:41:18.672595 amazon-ssm-agent[2179]: 2026-01-22 00:41:18.6371 INFO [EC2Identity] EC2 registration was successful. Jan 22 00:41:18.672595 amazon-ssm-agent[2179]: 2026-01-22 00:41:18.6371 INFO [amazon-ssm-agent] Registration attempted. Resuming core agent startup. Jan 22 00:41:18.672595 amazon-ssm-agent[2179]: 2026-01-22 00:41:18.6372 INFO [CredentialRefresher] credentialRefresher has started Jan 22 00:41:18.672595 amazon-ssm-agent[2179]: 2026-01-22 00:41:18.6372 INFO [CredentialRefresher] Starting credentials refresher loop Jan 22 00:41:18.672595 amazon-ssm-agent[2179]: 2026-01-22 00:41:18.6720 INFO EC2RoleProvider Successfully connected with instance profile role credentials Jan 22 00:41:18.672595 amazon-ssm-agent[2179]: 2026-01-22 00:41:18.6722 INFO [CredentialRefresher] Credentials ready Jan 22 00:41:18.707570 amazon-ssm-agent[2179]: 2026-01-22 00:41:18.6724 INFO [CredentialRefresher] Next credential rotation will be in 29.99999266875 minutes Jan 22 00:41:18.848947 systemd[1]: Started sshd@1-172.31.26.54:22-68.220.241.50:58990.service - OpenSSH per-connection server daemon (68.220.241.50:58990). Jan 22 00:41:19.279381 sshd[2237]: Accepted publickey for core from 68.220.241.50 port 58990 ssh2: RSA SHA256:guZGX9gbNcoOyrr8VXliJQHZZzuPYZGvC0Dn+A+42nM Jan 22 00:41:19.281423 sshd-session[2237]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 22 00:41:19.288607 systemd-logind[1939]: New session 2 of user core. Jan 22 00:41:19.299069 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 22 00:41:19.513361 sshd[2240]: Connection closed by 68.220.241.50 port 58990 Jan 22 00:41:19.514981 sshd-session[2237]: pam_unix(sshd:session): session closed for user core Jan 22 00:41:19.519716 systemd-logind[1939]: Session 2 logged out. Waiting for processes to exit. Jan 22 00:41:19.520567 systemd[1]: sshd@1-172.31.26.54:22-68.220.241.50:58990.service: Deactivated successfully. Jan 22 00:41:19.523660 systemd[1]: session-2.scope: Deactivated successfully. Jan 22 00:41:19.526695 systemd-logind[1939]: Removed session 2. Jan 22 00:41:19.605483 systemd[1]: Started sshd@2-172.31.26.54:22-68.220.241.50:59000.service - OpenSSH per-connection server daemon (68.220.241.50:59000). 
Jan 22 00:41:19.686141 amazon-ssm-agent[2179]: 2026-01-22 00:41:19.6859 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Jan 22 00:41:19.787476 amazon-ssm-agent[2179]: 2026-01-22 00:41:19.6881 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2251) started Jan 22 00:41:19.888465 amazon-ssm-agent[2179]: 2026-01-22 00:41:19.6882 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Jan 22 00:41:20.031300 sshd[2246]: Accepted publickey for core from 68.220.241.50 port 59000 ssh2: RSA SHA256:guZGX9gbNcoOyrr8VXliJQHZZzuPYZGvC0Dn+A+42nM Jan 22 00:41:20.032610 sshd-session[2246]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 22 00:41:20.038196 systemd-logind[1939]: New session 3 of user core. Jan 22 00:41:20.045094 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 22 00:41:20.267665 sshd[2263]: Connection closed by 68.220.241.50 port 59000 Jan 22 00:41:20.268972 sshd-session[2246]: pam_unix(sshd:session): session closed for user core Jan 22 00:41:20.273093 systemd[1]: sshd@2-172.31.26.54:22-68.220.241.50:59000.service: Deactivated successfully. Jan 22 00:41:20.275096 systemd[1]: session-3.scope: Deactivated successfully. Jan 22 00:41:20.275959 systemd-logind[1939]: Session 3 logged out. Waiting for processes to exit. Jan 22 00:41:20.277706 systemd-logind[1939]: Removed session 3. Jan 22 00:41:20.894620 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 22 00:41:20.896991 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 22 00:41:20.898970 systemd[1]: Startup finished in 4.723s (kernel) + 9.111s (initrd) + 9.132s (userspace) = 22.968s. Jan 22 00:41:20.907456 (kubelet)[2273]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 22 00:41:21.594773 kubelet[2273]: E0122 00:41:21.594681 2273 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 22 00:41:21.597179 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 22 00:41:21.597389 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 22 00:41:21.598504 systemd[1]: kubelet.service: Consumed 1.015s CPU time, 265.5M memory peak. Jan 22 00:41:27.044948 systemd-resolved[1535]: Clock change detected. Flushing caches. Jan 22 00:41:31.969241 systemd[1]: Started sshd@3-172.31.26.54:22-68.220.241.50:34906.service - OpenSSH per-connection server daemon (68.220.241.50:34906). Jan 22 00:41:32.435501 sshd[2287]: Accepted publickey for core from 68.220.241.50 port 34906 ssh2: RSA SHA256:guZGX9gbNcoOyrr8VXliJQHZZzuPYZGvC0Dn+A+42nM Jan 22 00:41:32.437201 sshd-session[2287]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 22 00:41:32.442942 systemd-logind[1939]: New session 4 of user core. Jan 22 00:41:32.453276 systemd[1]: Started session-4.scope - Session 4 of User core. 
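The kubelet failure above is only a missing file: /var/lib/kubelet/config.yaml is normally written by kubeadm during init/join, which has not yet happened on this node, so systemd keeps restarting the unit until it appears. A minimal preflight sketch, assuming the standard kubeadm-managed path taken from the error message:

    import os, sys

    CONFIG = "/var/lib/kubelet/config.yaml"  # path from the kubelet error above

    if not os.path.exists(CONFIG):
        sys.exit(f"{CONFIG} missing - run kubeadm init/join before expecting kubelet to stay up")
    print("kubelet config present")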
Jan 22 00:41:32.693519 sshd[2290]: Connection closed by 68.220.241.50 port 34906 Jan 22 00:41:32.695170 sshd-session[2287]: pam_unix(sshd:session): session closed for user core Jan 22 00:41:32.700329 systemd-logind[1939]: Session 4 logged out. Waiting for processes to exit. Jan 22 00:41:32.700690 systemd[1]: sshd@3-172.31.26.54:22-68.220.241.50:34906.service: Deactivated successfully. Jan 22 00:41:32.702849 systemd[1]: session-4.scope: Deactivated successfully. Jan 22 00:41:32.705000 systemd-logind[1939]: Removed session 4. Jan 22 00:41:32.775954 systemd[1]: Started sshd@4-172.31.26.54:22-68.220.241.50:56842.service - OpenSSH per-connection server daemon (68.220.241.50:56842). Jan 22 00:41:33.212696 sshd[2296]: Accepted publickey for core from 68.220.241.50 port 56842 ssh2: RSA SHA256:guZGX9gbNcoOyrr8VXliJQHZZzuPYZGvC0Dn+A+42nM Jan 22 00:41:33.214020 sshd-session[2296]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 22 00:41:33.215105 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 22 00:41:33.219078 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 22 00:41:33.222951 systemd-logind[1939]: New session 5 of user core. Jan 22 00:41:33.225650 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 22 00:41:33.448971 sshd[2302]: Connection closed by 68.220.241.50 port 56842 Jan 22 00:41:33.450089 sshd-session[2296]: pam_unix(sshd:session): session closed for user core Jan 22 00:41:33.457720 systemd[1]: sshd@4-172.31.26.54:22-68.220.241.50:56842.service: Deactivated successfully. Jan 22 00:41:33.461080 systemd[1]: session-5.scope: Deactivated successfully. Jan 22 00:41:33.469245 systemd-logind[1939]: Session 5 logged out. Waiting for processes to exit. Jan 22 00:41:33.471539 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 22 00:41:33.477303 systemd-logind[1939]: Removed session 5. Jan 22 00:41:33.482345 (kubelet)[2310]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 22 00:41:33.538205 systemd[1]: Started sshd@5-172.31.26.54:22-68.220.241.50:56852.service - OpenSSH per-connection server daemon (68.220.241.50:56852). Jan 22 00:41:33.541960 kubelet[2310]: E0122 00:41:33.541847 2310 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 22 00:41:33.546836 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 22 00:41:33.547281 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 22 00:41:33.552429 systemd[1]: kubelet.service: Consumed 188ms CPU time, 110.8M memory peak. Jan 22 00:41:33.976125 sshd[2319]: Accepted publickey for core from 68.220.241.50 port 56852 ssh2: RSA SHA256:guZGX9gbNcoOyrr8VXliJQHZZzuPYZGvC0Dn+A+42nM Jan 22 00:41:33.977755 sshd-session[2319]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 22 00:41:33.984197 systemd-logind[1939]: New session 6 of user core. Jan 22 00:41:33.993141 systemd[1]: Started session-6.scope - Session 6 of User core. 
Jan 22 00:41:34.211616 sshd[2323]: Connection closed by 68.220.241.50 port 56852 Jan 22 00:41:34.213148 sshd-session[2319]: pam_unix(sshd:session): session closed for user core Jan 22 00:41:34.220814 systemd[1]: sshd@5-172.31.26.54:22-68.220.241.50:56852.service: Deactivated successfully. Jan 22 00:41:34.224561 systemd[1]: session-6.scope: Deactivated successfully. Jan 22 00:41:34.227224 systemd-logind[1939]: Session 6 logged out. Waiting for processes to exit. Jan 22 00:41:34.230532 systemd-logind[1939]: Removed session 6. Jan 22 00:41:34.316270 systemd[1]: Started sshd@6-172.31.26.54:22-68.220.241.50:56860.service - OpenSSH per-connection server daemon (68.220.241.50:56860). Jan 22 00:41:34.774717 sshd[2329]: Accepted publickey for core from 68.220.241.50 port 56860 ssh2: RSA SHA256:guZGX9gbNcoOyrr8VXliJQHZZzuPYZGvC0Dn+A+42nM Jan 22 00:41:34.775222 sshd-session[2329]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 22 00:41:34.781930 systemd-logind[1939]: New session 7 of user core. Jan 22 00:41:34.788210 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 22 00:41:34.960251 sudo[2333]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 22 00:41:34.960522 sudo[2333]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 22 00:41:34.977009 sudo[2333]: pam_unix(sudo:session): session closed for user root Jan 22 00:41:35.060219 sshd[2332]: Connection closed by 68.220.241.50 port 56860 Jan 22 00:41:35.061448 sshd-session[2329]: pam_unix(sshd:session): session closed for user core Jan 22 00:41:35.066316 systemd[1]: sshd@6-172.31.26.54:22-68.220.241.50:56860.service: Deactivated successfully. Jan 22 00:41:35.068364 systemd[1]: session-7.scope: Deactivated successfully. Jan 22 00:41:35.069676 systemd-logind[1939]: Session 7 logged out. Waiting for processes to exit. Jan 22 00:41:35.071483 systemd-logind[1939]: Removed session 7. Jan 22 00:41:35.154112 systemd[1]: Started sshd@7-172.31.26.54:22-68.220.241.50:56868.service - OpenSSH per-connection server daemon (68.220.241.50:56868). Jan 22 00:41:35.594168 sshd[2339]: Accepted publickey for core from 68.220.241.50 port 56868 ssh2: RSA SHA256:guZGX9gbNcoOyrr8VXliJQHZZzuPYZGvC0Dn+A+42nM Jan 22 00:41:35.595568 sshd-session[2339]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 22 00:41:35.604286 systemd-logind[1939]: New session 8 of user core. Jan 22 00:41:35.613213 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 22 00:41:35.758461 sudo[2344]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 22 00:41:35.759016 sudo[2344]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 22 00:41:35.765368 sudo[2344]: pam_unix(sudo:session): session closed for user root Jan 22 00:41:35.773170 sudo[2343]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 22 00:41:35.773538 sudo[2343]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 22 00:41:35.786179 systemd[1]: Starting audit-rules.service - Load Audit Rules... 
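The audit records that follow carry the executed command line as a hex-encoded, NUL-separated PROCTITLE field. A small Python helper decodes it; the example value is the record logged right after the audit-rules restart above:

    def decode_proctitle(hexstr: str) -> str:
        """Audit PROCTITLE fields are the process argv, NUL-separated and hex-encoded."""
        return " ".join(bytes.fromhex(hexstr).decode().split("\x00"))

    print(decode_proctitle(
        "2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573"))
    # -> /sbin/auditctl -R /etc/audit/audit.rules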
Jan 22 00:41:35.854000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jan 22 00:41:35.862372 kernel: kauditd_printk_skb: 73 callbacks suppressed Jan 22 00:41:35.864940 kernel: audit: type=1305 audit(1769042495.854:228): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jan 22 00:41:35.869057 kernel: audit: type=1300 audit(1769042495.854:228): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffc9e12dee0 a2=420 a3=0 items=0 ppid=2347 pid=2366 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:41:35.869193 kernel: audit: type=1327 audit(1769042495.854:228): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 22 00:41:35.854000 audit[2366]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffc9e12dee0 a2=420 a3=0 items=0 ppid=2347 pid=2366 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:41:35.854000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 22 00:41:35.869400 augenrules[2366]: No rules Jan 22 00:41:35.871097 systemd[1]: audit-rules.service: Deactivated successfully. Jan 22 00:41:35.871461 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 22 00:41:35.871000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:41:35.876032 kernel: audit: type=1130 audit(1769042495.871:229): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:41:35.874000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:41:35.880543 sudo[2343]: pam_unix(sudo:session): session closed for user root Jan 22 00:41:35.884442 kernel: audit: type=1131 audit(1769042495.874:230): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:41:35.884539 kernel: audit: type=1106 audit(1769042495.879:231): pid=2343 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 22 00:41:35.879000 audit[2343]: USER_END pid=2343 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 22 00:41:35.887915 kernel: audit: type=1104 audit(1769042495.879:232): pid=2343 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jan 22 00:41:35.879000 audit[2343]: CRED_DISP pid=2343 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 22 00:41:35.967200 sshd[2342]: Connection closed by 68.220.241.50 port 56868 Jan 22 00:41:35.970154 sshd-session[2339]: pam_unix(sshd:session): session closed for user core Jan 22 00:41:35.970000 audit[2339]: USER_END pid=2339 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:41:35.977901 kernel: audit: type=1106 audit(1769042495.970:233): pid=2339 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:41:35.970000 audit[2339]: CRED_DISP pid=2339 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:41:35.978429 systemd[1]: sshd@7-172.31.26.54:22-68.220.241.50:56868.service: Deactivated successfully. Jan 22 00:41:35.989899 kernel: audit: type=1104 audit(1769042495.970:234): pid=2339 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:41:35.990011 kernel: audit: type=1131 audit(1769042495.977:235): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-172.31.26.54:22-68.220.241.50:56868 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:41:35.977000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-172.31.26.54:22-68.220.241.50:56868 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:41:35.989983 systemd[1]: session-8.scope: Deactivated successfully. Jan 22 00:41:35.992108 systemd-logind[1939]: Session 8 logged out. Waiting for processes to exit. Jan 22 00:41:35.993838 systemd-logind[1939]: Removed session 8. Jan 22 00:41:36.056008 systemd[1]: Started sshd@8-172.31.26.54:22-68.220.241.50:56882.service - OpenSSH per-connection server daemon (68.220.241.50:56882). Jan 22 00:41:36.055000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-172.31.26.54:22-68.220.241.50:56882 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 22 00:41:36.483000 audit[2375]: USER_ACCT pid=2375 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:41:36.484896 sshd[2375]: Accepted publickey for core from 68.220.241.50 port 56882 ssh2: RSA SHA256:guZGX9gbNcoOyrr8VXliJQHZZzuPYZGvC0Dn+A+42nM Jan 22 00:41:36.485000 audit[2375]: CRED_ACQ pid=2375 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:41:36.485000 audit[2375]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe27a38300 a2=3 a3=0 items=0 ppid=1 pid=2375 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:41:36.485000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 22 00:41:36.486404 sshd-session[2375]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 22 00:41:36.493378 systemd-logind[1939]: New session 9 of user core. Jan 22 00:41:36.499252 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 22 00:41:36.504000 audit[2375]: USER_START pid=2375 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:41:36.507000 audit[2378]: CRED_ACQ pid=2378 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:41:36.645000 audit[2379]: USER_ACCT pid=2379 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 22 00:41:36.646283 sudo[2379]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 22 00:41:36.645000 audit[2379]: CRED_REFR pid=2379 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 22 00:41:36.646678 sudo[2379]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 22 00:41:36.648000 audit[2379]: USER_START pid=2379 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 22 00:41:37.120205 systemd[1]: Starting docker.service - Docker Application Container Engine... 
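The NETFILTER_CFG records below are dockerd registering its iptables chains (DOCKER, DOCKER-FORWARD, the isolation stages, and the PREROUTING jump); their PROCTITLE fields decode, with the helper shown earlier, to the corresponding /usr/bin/iptables --wait invocations. A minimal sketch to confirm the nat-table DOCKER chain afterwards, assuming root privileges and the iptables CLI present as on this host:

    import subprocess

    # List the nat-table DOCKER chain that dockerd creates during startup.
    result = subprocess.run(
        ["iptables", "--wait", "-t", "nat", "-L", "DOCKER", "-n"],
        capture_output=True, text=True)
    print(result.stdout or result.stderr)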
Jan 22 00:41:37.139512 (dockerd)[2396]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 22 00:41:37.489710 dockerd[2396]: time="2026-01-22T00:41:37.489050637Z" level=info msg="Starting up" Jan 22 00:41:37.490547 dockerd[2396]: time="2026-01-22T00:41:37.490510112Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jan 22 00:41:37.503752 dockerd[2396]: time="2026-01-22T00:41:37.503521298Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Jan 22 00:41:37.582200 systemd[1]: var-lib-docker-metacopy\x2dcheck2012092116-merged.mount: Deactivated successfully. Jan 22 00:41:37.604756 dockerd[2396]: time="2026-01-22T00:41:37.604548219Z" level=info msg="Loading containers: start." Jan 22 00:41:37.619928 kernel: Initializing XFRM netlink socket Jan 22 00:41:37.694000 audit[2444]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=2444 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:41:37.694000 audit[2444]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffc4815ddf0 a2=0 a3=0 items=0 ppid=2396 pid=2444 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:41:37.694000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 22 00:41:37.696000 audit[2446]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=2446 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:41:37.696000 audit[2446]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffdf1b4ddc0 a2=0 a3=0 items=0 ppid=2396 pid=2446 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:41:37.696000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 22 00:41:37.700000 audit[2448]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=2448 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:41:37.700000 audit[2448]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff0e1c4f10 a2=0 a3=0 items=0 ppid=2396 pid=2448 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:41:37.700000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 22 00:41:37.703000 audit[2450]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=2450 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:41:37.703000 audit[2450]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc23e51490 a2=0 a3=0 items=0 ppid=2396 pid=2450 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:41:37.703000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 22 00:41:37.706000 audit[2452]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_chain pid=2452 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:41:37.706000 audit[2452]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fff86b88180 a2=0 a3=0 items=0 ppid=2396 pid=2452 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:41:37.706000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 22 00:41:37.709000 audit[2454]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_chain pid=2454 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:41:37.709000 audit[2454]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffff6c1f0a0 a2=0 a3=0 items=0 ppid=2396 pid=2454 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:41:37.709000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 22 00:41:37.712000 audit[2456]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=2456 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:41:37.712000 audit[2456]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffdabd6f3a0 a2=0 a3=0 items=0 ppid=2396 pid=2456 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:41:37.712000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 22 00:41:37.714000 audit[2458]: NETFILTER_CFG table=nat:9 family=2 entries=2 op=nft_register_chain pid=2458 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:41:37.714000 audit[2458]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7fffd208c0a0 a2=0 a3=0 items=0 ppid=2396 pid=2458 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:41:37.714000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 22 00:41:37.749000 audit[2461]: NETFILTER_CFG table=nat:10 family=2 entries=2 op=nft_register_chain pid=2461 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:41:37.749000 audit[2461]: SYSCALL arch=c000003e syscall=46 success=yes exit=472 a0=3 a1=7ffe11d979c0 a2=0 a3=0 items=0 ppid=2396 pid=2461 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:41:37.749000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Jan 22 00:41:37.753000 audit[2463]: NETFILTER_CFG table=filter:11 family=2 entries=2 op=nft_register_chain pid=2463 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:41:37.753000 audit[2463]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffc6d6e81c0 a2=0 a3=0 items=0 ppid=2396 pid=2463 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:41:37.753000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 22 00:41:37.757000 audit[2465]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=2465 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:41:37.757000 audit[2465]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7ffcf3e84d50 a2=0 a3=0 items=0 ppid=2396 pid=2465 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:41:37.757000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 22 00:41:37.760000 audit[2467]: NETFILTER_CFG table=filter:13 family=2 entries=1 op=nft_register_rule pid=2467 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:41:37.760000 audit[2467]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7ffd570bda30 a2=0 a3=0 items=0 ppid=2396 pid=2467 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:41:37.760000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 22 00:41:37.763000 audit[2469]: NETFILTER_CFG table=filter:14 family=2 entries=1 op=nft_register_rule pid=2469 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:41:37.763000 audit[2469]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7ffd08e17740 a2=0 a3=0 items=0 ppid=2396 pid=2469 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:41:37.763000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 22 00:41:37.813000 audit[2499]: NETFILTER_CFG table=nat:15 family=10 entries=2 op=nft_register_chain pid=2499 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 00:41:37.813000 audit[2499]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7fff89b224e0 a2=0 a3=0 items=0 ppid=2396 pid=2499 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:41:37.813000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 22 00:41:37.816000 audit[2501]: NETFILTER_CFG table=filter:16 family=10 entries=2 op=nft_register_chain pid=2501 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 00:41:37.816000 audit[2501]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7fff7654b760 a2=0 a3=0 items=0 ppid=2396 pid=2501 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:41:37.816000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 22 00:41:37.819000 audit[2503]: NETFILTER_CFG table=filter:17 family=10 entries=1 op=nft_register_chain pid=2503 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 00:41:37.819000 audit[2503]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe752473d0 a2=0 a3=0 items=0 ppid=2396 pid=2503 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:41:37.819000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 22 00:41:37.821000 audit[2505]: NETFILTER_CFG table=filter:18 family=10 entries=1 op=nft_register_chain pid=2505 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 00:41:37.821000 audit[2505]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffa21eb440 a2=0 a3=0 items=0 ppid=2396 pid=2505 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:41:37.821000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 22 00:41:37.824000 audit[2507]: NETFILTER_CFG table=filter:19 family=10 entries=1 op=nft_register_chain pid=2507 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 00:41:37.824000 audit[2507]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffec18c5fc0 a2=0 a3=0 items=0 ppid=2396 pid=2507 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:41:37.824000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 22 00:41:37.826000 audit[2509]: NETFILTER_CFG table=filter:20 family=10 entries=1 op=nft_register_chain pid=2509 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 00:41:37.826000 audit[2509]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7fffead01100 a2=0 a3=0 items=0 ppid=2396 pid=2509 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:41:37.826000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 22 00:41:37.829000 audit[2511]: NETFILTER_CFG table=filter:21 family=10 entries=1 op=nft_register_chain pid=2511 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 00:41:37.829000 audit[2511]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7fffacbbf910 a2=0 a3=0 items=0 ppid=2396 pid=2511 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:41:37.829000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 22 00:41:37.831000 audit[2513]: NETFILTER_CFG table=nat:22 family=10 entries=2 op=nft_register_chain pid=2513 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 00:41:37.831000 audit[2513]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7ffd030d0950 a2=0 a3=0 items=0 ppid=2396 pid=2513 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:41:37.831000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 22 00:41:37.834000 audit[2515]: NETFILTER_CFG table=nat:23 family=10 entries=2 op=nft_register_chain pid=2515 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 00:41:37.834000 audit[2515]: SYSCALL arch=c000003e syscall=46 success=yes exit=484 a0=3 a1=7fff265ca130 a2=0 a3=0 items=0 ppid=2396 pid=2515 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:41:37.834000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003A3A312F313238 Jan 22 00:41:37.837000 audit[2517]: NETFILTER_CFG table=filter:24 family=10 entries=2 op=nft_register_chain pid=2517 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 00:41:37.837000 audit[2517]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffd6299c790 a2=0 a3=0 items=0 ppid=2396 pid=2517 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:41:37.837000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 22 00:41:37.839000 audit[2519]: NETFILTER_CFG table=filter:25 family=10 entries=1 op=nft_register_rule pid=2519 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 00:41:37.839000 audit[2519]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7ffe8e1272f0 a2=0 a3=0 items=0 ppid=2396 pid=2519 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:41:37.839000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 22 00:41:37.841000 audit[2521]: NETFILTER_CFG table=filter:26 family=10 entries=1 op=nft_register_rule pid=2521 subj=system_u:system_r:kernel_t:s0 
comm="ip6tables" Jan 22 00:41:37.841000 audit[2521]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7fff1ef89920 a2=0 a3=0 items=0 ppid=2396 pid=2521 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:41:37.841000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 22 00:41:37.844000 audit[2523]: NETFILTER_CFG table=filter:27 family=10 entries=1 op=nft_register_rule pid=2523 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 00:41:37.844000 audit[2523]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7ffe6d4c6440 a2=0 a3=0 items=0 ppid=2396 pid=2523 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:41:37.844000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 22 00:41:37.850000 audit[2528]: NETFILTER_CFG table=filter:28 family=2 entries=1 op=nft_register_chain pid=2528 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:41:37.850000 audit[2528]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffe609d33b0 a2=0 a3=0 items=0 ppid=2396 pid=2528 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:41:37.850000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 22 00:41:37.853000 audit[2530]: NETFILTER_CFG table=filter:29 family=2 entries=1 op=nft_register_rule pid=2530 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:41:37.853000 audit[2530]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7fff5e9a4440 a2=0 a3=0 items=0 ppid=2396 pid=2530 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:41:37.853000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 22 00:41:37.856000 audit[2532]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=2532 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:41:37.856000 audit[2532]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffcad628040 a2=0 a3=0 items=0 ppid=2396 pid=2532 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:41:37.856000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 22 00:41:37.858000 audit[2534]: NETFILTER_CFG table=filter:31 family=10 entries=1 op=nft_register_chain pid=2534 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 00:41:37.858000 audit[2534]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffe97789610 a2=0 a3=0 items=0 ppid=2396 pid=2534 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:41:37.858000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 22 00:41:37.861000 audit[2536]: NETFILTER_CFG table=filter:32 family=10 entries=1 op=nft_register_rule pid=2536 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 00:41:37.861000 audit[2536]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffd5f0a8380 a2=0 a3=0 items=0 ppid=2396 pid=2536 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:41:37.861000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 22 00:41:37.864000 audit[2538]: NETFILTER_CFG table=filter:33 family=10 entries=1 op=nft_register_rule pid=2538 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 00:41:37.864000 audit[2538]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7fffa59af460 a2=0 a3=0 items=0 ppid=2396 pid=2538 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:41:37.864000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 22 00:41:37.886620 (udev-worker)[2417]: Network interface NamePolicy= disabled on kernel command line. Jan 22 00:41:37.901000 audit[2542]: NETFILTER_CFG table=nat:34 family=2 entries=2 op=nft_register_chain pid=2542 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:41:37.901000 audit[2542]: SYSCALL arch=c000003e syscall=46 success=yes exit=520 a0=3 a1=7ffe48764860 a2=0 a3=0 items=0 ppid=2396 pid=2542 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:41:37.901000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Jan 22 00:41:37.907000 audit[2546]: NETFILTER_CFG table=nat:35 family=2 entries=1 op=nft_register_rule pid=2546 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:41:37.907000 audit[2546]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7ffe45776520 a2=0 a3=0 items=0 ppid=2396 pid=2546 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:41:37.907000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Jan 22 00:41:37.924000 audit[2554]: NETFILTER_CFG table=filter:36 family=2 entries=1 op=nft_register_rule pid=2554 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:41:37.924000 audit[2554]: SYSCALL arch=c000003e syscall=46 success=yes exit=300 a0=3 a1=7ffeb9050d10 a2=0 a3=0 items=0 ppid=2396 pid=2554 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:41:37.924000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D464F5257415244002D6900646F636B657230002D6A00414343455054 Jan 22 00:41:37.939000 audit[2560]: NETFILTER_CFG table=filter:37 family=2 entries=1 op=nft_register_rule pid=2560 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:41:37.939000 audit[2560]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffe8dcc8eb0 a2=0 a3=0 items=0 ppid=2396 pid=2560 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:41:37.939000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45520000002D6900646F636B657230002D6F00646F636B657230002D6A0044524F50 Jan 22 00:41:37.943000 audit[2562]: NETFILTER_CFG table=filter:38 family=2 entries=1 op=nft_register_rule pid=2562 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:41:37.943000 audit[2562]: SYSCALL arch=c000003e syscall=46 success=yes exit=512 a0=3 a1=7ffc0fcf8b20 a2=0 a3=0 items=0 ppid=2396 pid=2562 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:41:37.943000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D4354002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Jan 22 00:41:37.946000 audit[2564]: NETFILTER_CFG table=filter:39 family=2 entries=1 op=nft_register_rule pid=2564 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:41:37.946000 audit[2564]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7fff4cc88860 a2=0 a3=0 items=0 ppid=2396 pid=2564 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:41:37.946000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D425249444745002D6F00646F636B657230002D6A00444F434B4552 Jan 22 00:41:37.949000 audit[2566]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_rule pid=2566 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:41:37.949000 audit[2566]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7ffeba4aa7b0 a2=0 a3=0 items=0 ppid=2396 pid=2566 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:41:37.949000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 22 00:41:37.952000 audit[2568]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_rule pid=2568 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:41:37.952000 audit[2568]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 
a1=7ffe21479c40 a2=0 a3=0 items=0 ppid=2396 pid=2568 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:41:37.952000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Jan 22 00:41:37.953761 systemd-networkd[1860]: docker0: Link UP Jan 22 00:41:37.964499 dockerd[2396]: time="2026-01-22T00:41:37.964414897Z" level=info msg="Loading containers: done." Jan 22 00:41:37.988272 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck4063996371-merged.mount: Deactivated successfully. Jan 22 00:41:38.020953 dockerd[2396]: time="2026-01-22T00:41:38.020800701Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 22 00:41:38.021150 dockerd[2396]: time="2026-01-22T00:41:38.020957911Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jan 22 00:41:38.021150 dockerd[2396]: time="2026-01-22T00:41:38.021074567Z" level=info msg="Initializing buildkit" Jan 22 00:41:38.063359 dockerd[2396]: time="2026-01-22T00:41:38.063310625Z" level=info msg="Completed buildkit initialization" Jan 22 00:41:38.071173 dockerd[2396]: time="2026-01-22T00:41:38.071082478Z" level=info msg="Daemon has completed initialization" Jan 22 00:41:38.071907 dockerd[2396]: time="2026-01-22T00:41:38.071356295Z" level=info msg="API listen on /run/docker.sock" Jan 22 00:41:38.071479 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 22 00:41:38.071000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:41:39.998757 containerd[1953]: time="2026-01-22T00:41:39.998690646Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\"" Jan 22 00:41:40.688528 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3551962912.mount: Deactivated successfully. 
Jan 22 00:41:42.085248 containerd[1953]: time="2026-01-22T00:41:42.085190694Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 22 00:41:42.087928 containerd[1953]: time="2026-01-22T00:41:42.087719669Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.11: active requests=0, bytes read=27401903" Jan 22 00:41:42.090665 containerd[1953]: time="2026-01-22T00:41:42.090612825Z" level=info msg="ImageCreate event name:\"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 22 00:41:42.093808 containerd[1953]: time="2026-01-22T00:41:42.093740547Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 22 00:41:42.094632 containerd[1953]: time="2026-01-22T00:41:42.094583574Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.11\" with image id \"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\", size \"29067246\" in 2.09582576s" Jan 22 00:41:42.094632 containerd[1953]: time="2026-01-22T00:41:42.094632258Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\" returns image reference \"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\"" Jan 22 00:41:42.095271 containerd[1953]: time="2026-01-22T00:41:42.095129171Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\"" Jan 22 00:41:43.798664 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 22 00:41:43.800667 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 22 00:41:43.979098 containerd[1953]: time="2026-01-22T00:41:43.978204684Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 22 00:41:43.981596 containerd[1953]: time="2026-01-22T00:41:43.981548568Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.11: active requests=0, bytes read=24985199" Jan 22 00:41:43.983787 containerd[1953]: time="2026-01-22T00:41:43.983740600Z" level=info msg="ImageCreate event name:\"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 22 00:41:43.990596 containerd[1953]: time="2026-01-22T00:41:43.990505019Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 22 00:41:43.991404 containerd[1953]: time="2026-01-22T00:41:43.991299020Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.11\" with image id \"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\", size \"26650388\" in 1.896141458s" Jan 22 00:41:43.991940 containerd[1953]: time="2026-01-22T00:41:43.991885045Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\" returns image reference \"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\"" Jan 22 00:41:43.993086 containerd[1953]: time="2026-01-22T00:41:43.992922318Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\"" Jan 22 00:41:44.067245 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 22 00:41:44.074741 kernel: kauditd_printk_skb: 132 callbacks suppressed Jan 22 00:41:44.074955 kernel: audit: type=1130 audit(1769042504.067:286): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:41:44.067000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:41:44.085354 (kubelet)[2679]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 22 00:41:44.138287 kubelet[2679]: E0122 00:41:44.138249 2679 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 22 00:41:44.140823 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 22 00:41:44.141032 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 22 00:41:44.140000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' Jan 22 00:41:44.141578 systemd[1]: kubelet.service: Consumed 203ms CPU time, 111M memory peak. Jan 22 00:41:44.145979 kernel: audit: type=1131 audit(1769042504.140:287): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 22 00:41:45.517841 containerd[1953]: time="2026-01-22T00:41:45.517781019Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 22 00:41:45.520374 containerd[1953]: time="2026-01-22T00:41:45.520150311Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.11: active requests=0, bytes read=19396939" Jan 22 00:41:45.522769 containerd[1953]: time="2026-01-22T00:41:45.522724626Z" level=info msg="ImageCreate event name:\"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 22 00:41:45.527984 containerd[1953]: time="2026-01-22T00:41:45.527942066Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 22 00:41:45.529422 containerd[1953]: time="2026-01-22T00:41:45.528802805Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.11\" with image id \"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\", size \"21062128\" in 1.535850478s" Jan 22 00:41:45.529422 containerd[1953]: time="2026-01-22T00:41:45.528843002Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\" returns image reference \"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\"" Jan 22 00:41:45.529761 containerd[1953]: time="2026-01-22T00:41:45.529739674Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\"" Jan 22 00:41:46.606751 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4241798483.mount: Deactivated successfully. 
Jan 22 00:41:47.225299 containerd[1953]: time="2026-01-22T00:41:47.225236497Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 22 00:41:47.227553 containerd[1953]: time="2026-01-22T00:41:47.227367501Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.11: active requests=0, bytes read=19572392" Jan 22 00:41:47.231162 containerd[1953]: time="2026-01-22T00:41:47.230076407Z" level=info msg="ImageCreate event name:\"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 22 00:41:47.233476 containerd[1953]: time="2026-01-22T00:41:47.233436019Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 22 00:41:47.234012 containerd[1953]: time="2026-01-22T00:41:47.233984680Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.11\" with image id \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\", repo tag \"registry.k8s.io/kube-proxy:v1.32.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\", size \"31160918\" in 1.704219439s" Jan 22 00:41:47.234109 containerd[1953]: time="2026-01-22T00:41:47.234095689Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\" returns image reference \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\"" Jan 22 00:41:47.234743 containerd[1953]: time="2026-01-22T00:41:47.234716045Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jan 22 00:41:48.007673 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount900429558.mount: Deactivated successfully. 
Jan 22 00:41:49.152439 containerd[1953]: time="2026-01-22T00:41:49.152374889Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 22 00:41:49.154360 containerd[1953]: time="2026-01-22T00:41:49.154308605Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=17570227" Jan 22 00:41:49.156725 containerd[1953]: time="2026-01-22T00:41:49.156662142Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 22 00:41:49.160605 containerd[1953]: time="2026-01-22T00:41:49.160545607Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 22 00:41:49.162929 containerd[1953]: time="2026-01-22T00:41:49.162063433Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.927315324s" Jan 22 00:41:49.162929 containerd[1953]: time="2026-01-22T00:41:49.162421838Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jan 22 00:41:49.163429 containerd[1953]: time="2026-01-22T00:41:49.163393607Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 22 00:41:49.562036 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Jan 22 00:41:49.561000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hostnamed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:41:49.566918 kernel: audit: type=1131 audit(1769042509.561:288): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hostnamed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:41:49.577000 audit: BPF prog-id=65 op=UNLOAD Jan 22 00:41:49.579966 kernel: audit: type=1334 audit(1769042509.577:289): prog-id=65 op=UNLOAD Jan 22 00:41:49.673729 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2120921524.mount: Deactivated successfully. 
Jan 22 00:41:49.687908 containerd[1953]: time="2026-01-22T00:41:49.687851667Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 22 00:41:49.690555 containerd[1953]: time="2026-01-22T00:41:49.690392649Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Jan 22 00:41:49.693688 containerd[1953]: time="2026-01-22T00:41:49.692635759Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 22 00:41:49.698954 containerd[1953]: time="2026-01-22T00:41:49.698911420Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 22 00:41:49.700142 containerd[1953]: time="2026-01-22T00:41:49.700107448Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 536.550851ms" Jan 22 00:41:49.700142 containerd[1953]: time="2026-01-22T00:41:49.700144857Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 22 00:41:49.700737 containerd[1953]: time="2026-01-22T00:41:49.700699061Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jan 22 00:41:50.247457 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3908663337.mount: Deactivated successfully. 
Jan 22 00:41:52.585975 containerd[1953]: time="2026-01-22T00:41:52.585917906Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 22 00:41:52.587910 containerd[1953]: time="2026-01-22T00:41:52.587865387Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=55728979" Jan 22 00:41:52.591899 containerd[1953]: time="2026-01-22T00:41:52.590491953Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 22 00:41:52.594856 containerd[1953]: time="2026-01-22T00:41:52.594824156Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 22 00:41:52.595824 containerd[1953]: time="2026-01-22T00:41:52.595798034Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 2.895053436s" Jan 22 00:41:52.595937 containerd[1953]: time="2026-01-22T00:41:52.595924431Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Jan 22 00:41:54.322170 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 22 00:41:54.327003 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 22 00:41:54.625112 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 22 00:41:54.624000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:41:54.630908 kernel: audit: type=1130 audit(1769042514.624:290): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:41:54.647345 (kubelet)[2838]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 22 00:41:54.728045 kubelet[2838]: E0122 00:41:54.728003 2838 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 22 00:41:54.731501 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 22 00:41:54.731940 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 22 00:41:54.731000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 22 00:41:54.732912 systemd[1]: kubelet.service: Consumed 230ms CPU time, 107.7M memory peak. 
Jan 22 00:41:54.737928 kernel: audit: type=1131 audit(1769042514.731:291): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 22 00:41:55.625127 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 22 00:41:55.625396 systemd[1]: kubelet.service: Consumed 230ms CPU time, 107.7M memory peak. Jan 22 00:41:55.624000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:41:55.635112 kernel: audit: type=1130 audit(1769042515.624:292): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:41:55.635231 kernel: audit: type=1131 audit(1769042515.624:293): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:41:55.624000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:41:55.635505 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 22 00:41:55.677794 systemd[1]: Reload requested from client PID 2854 ('systemctl') (unit session-9.scope)... Jan 22 00:41:55.677818 systemd[1]: Reloading... Jan 22 00:41:55.796911 zram_generator::config[2901]: No configuration found. Jan 22 00:41:56.092196 systemd[1]: Reloading finished in 413 ms. 
Jan 22 00:41:56.117000 audit: BPF prog-id=72 op=LOAD Jan 22 00:41:56.118000 audit: BPF prog-id=73 op=LOAD Jan 22 00:41:56.126093 kernel: audit: type=1334 audit(1769042516.117:294): prog-id=72 op=LOAD Jan 22 00:41:56.126201 kernel: audit: type=1334 audit(1769042516.118:295): prog-id=73 op=LOAD Jan 22 00:41:56.118000 audit: BPF prog-id=74 op=LOAD Jan 22 00:41:56.134935 kernel: audit: type=1334 audit(1769042516.118:296): prog-id=74 op=LOAD Jan 22 00:41:56.118000 audit: BPF prog-id=75 op=LOAD Jan 22 00:41:56.138899 kernel: audit: type=1334 audit(1769042516.118:297): prog-id=75 op=LOAD Jan 22 00:41:56.118000 audit: BPF prog-id=76 op=LOAD Jan 22 00:41:56.142888 kernel: audit: type=1334 audit(1769042516.118:298): prog-id=76 op=LOAD Jan 22 00:41:56.119000 audit: BPF prog-id=45 op=UNLOAD Jan 22 00:41:56.145894 kernel: audit: type=1334 audit(1769042516.119:299): prog-id=45 op=UNLOAD Jan 22 00:41:56.119000 audit: BPF prog-id=51 op=UNLOAD Jan 22 00:41:56.119000 audit: BPF prog-id=52 op=UNLOAD Jan 22 00:41:56.119000 audit: BPF prog-id=53 op=UNLOAD Jan 22 00:41:56.124000 audit: BPF prog-id=77 op=LOAD Jan 22 00:41:56.124000 audit: BPF prog-id=46 op=UNLOAD Jan 22 00:41:56.124000 audit: BPF prog-id=47 op=UNLOAD Jan 22 00:41:56.139000 audit: BPF prog-id=78 op=LOAD Jan 22 00:41:56.139000 audit: BPF prog-id=59 op=UNLOAD Jan 22 00:41:56.139000 audit: BPF prog-id=79 op=LOAD Jan 22 00:41:56.139000 audit: BPF prog-id=80 op=LOAD Jan 22 00:41:56.139000 audit: BPF prog-id=60 op=UNLOAD Jan 22 00:41:56.139000 audit: BPF prog-id=61 op=UNLOAD Jan 22 00:41:56.144000 audit: BPF prog-id=81 op=LOAD Jan 22 00:41:56.144000 audit: BPF prog-id=82 op=LOAD Jan 22 00:41:56.144000 audit: BPF prog-id=43 op=UNLOAD Jan 22 00:41:56.144000 audit: BPF prog-id=44 op=UNLOAD Jan 22 00:41:56.146000 audit: BPF prog-id=83 op=LOAD Jan 22 00:41:56.152000 audit: BPF prog-id=48 op=UNLOAD Jan 22 00:41:56.152000 audit: BPF prog-id=84 op=LOAD Jan 22 00:41:56.152000 audit: BPF prog-id=85 op=LOAD Jan 22 00:41:56.152000 audit: BPF prog-id=49 op=UNLOAD Jan 22 00:41:56.152000 audit: BPF prog-id=50 op=UNLOAD Jan 22 00:41:56.153000 audit: BPF prog-id=86 op=LOAD Jan 22 00:41:56.153000 audit: BPF prog-id=57 op=UNLOAD Jan 22 00:41:56.157000 audit: BPF prog-id=87 op=LOAD Jan 22 00:41:56.157000 audit: BPF prog-id=54 op=UNLOAD Jan 22 00:41:56.157000 audit: BPF prog-id=88 op=LOAD Jan 22 00:41:56.157000 audit: BPF prog-id=89 op=LOAD Jan 22 00:41:56.157000 audit: BPF prog-id=55 op=UNLOAD Jan 22 00:41:56.157000 audit: BPF prog-id=56 op=UNLOAD Jan 22 00:41:56.157000 audit: BPF prog-id=90 op=LOAD Jan 22 00:41:56.158000 audit: BPF prog-id=68 op=UNLOAD Jan 22 00:41:56.161000 audit: BPF prog-id=91 op=LOAD Jan 22 00:41:56.161000 audit: BPF prog-id=58 op=UNLOAD Jan 22 00:41:56.181607 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 22 00:41:56.181688 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 22 00:41:56.182222 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 22 00:41:56.181000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 22 00:41:56.182301 systemd[1]: kubelet.service: Consumed 137ms CPU time, 98.4M memory peak. Jan 22 00:41:56.184555 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 22 00:41:56.409059 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 22 00:41:56.408000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:41:56.419384 (kubelet)[2964]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 22 00:41:56.471892 kubelet[2964]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 22 00:41:56.471892 kubelet[2964]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 22 00:41:56.471892 kubelet[2964]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 22 00:41:56.474906 kubelet[2964]: I0122 00:41:56.473989 2964 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 22 00:41:56.793499 kubelet[2964]: I0122 00:41:56.792918 2964 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 22 00:41:56.793499 kubelet[2964]: I0122 00:41:56.792956 2964 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 22 00:41:56.793499 kubelet[2964]: I0122 00:41:56.793331 2964 server.go:954] "Client rotation is on, will bootstrap in background" Jan 22 00:41:56.842739 kubelet[2964]: I0122 00:41:56.842685 2964 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 22 00:41:56.849318 kubelet[2964]: E0122 00:41:56.849247 2964 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.26.54:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.26.54:6443: connect: connection refused" logger="UnhandledError" Jan 22 00:41:56.881279 kubelet[2964]: I0122 00:41:56.881229 2964 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 22 00:41:56.888587 kubelet[2964]: I0122 00:41:56.888543 2964 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 22 00:41:56.892585 kubelet[2964]: I0122 00:41:56.892508 2964 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 22 00:41:56.892829 kubelet[2964]: I0122 00:41:56.892579 2964 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-26-54","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 22 00:41:56.894841 kubelet[2964]: I0122 00:41:56.894800 2964 topology_manager.go:138] "Creating topology manager with none policy" Jan 22 00:41:56.894841 kubelet[2964]: I0122 00:41:56.894835 2964 container_manager_linux.go:304] "Creating device plugin manager" Jan 22 00:41:56.898022 kubelet[2964]: I0122 00:41:56.897977 2964 state_mem.go:36] "Initialized new in-memory state store" Jan 22 00:41:56.902674 kubelet[2964]: I0122 00:41:56.902482 2964 kubelet.go:446] "Attempting to sync node with API server" Jan 22 00:41:56.905329 kubelet[2964]: I0122 00:41:56.905223 2964 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 22 00:41:56.907034 kubelet[2964]: I0122 00:41:56.906998 2964 kubelet.go:352] "Adding apiserver pod source" Jan 22 00:41:56.907034 kubelet[2964]: I0122 00:41:56.907030 2964 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 22 00:41:56.912898 kubelet[2964]: W0122 00:41:56.911988 2964 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.26.54:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-26-54&limit=500&resourceVersion=0": dial tcp 172.31.26.54:6443: connect: connection refused Jan 22 00:41:56.912898 kubelet[2964]: E0122 00:41:56.912079 2964 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.26.54:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-26-54&limit=500&resourceVersion=0\": dial tcp 172.31.26.54:6443: connect: connection refused" logger="UnhandledError" Jan 22 00:41:56.913037 kubelet[2964]: W0122 
00:41:56.912998 2964 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.26.54:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.26.54:6443: connect: connection refused Jan 22 00:41:56.913065 kubelet[2964]: E0122 00:41:56.913031 2964 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.26.54:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.26.54:6443: connect: connection refused" logger="UnhandledError" Jan 22 00:41:56.915326 kubelet[2964]: I0122 00:41:56.914979 2964 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 22 00:41:56.918798 kubelet[2964]: I0122 00:41:56.918665 2964 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 22 00:41:56.927810 kubelet[2964]: W0122 00:41:56.927752 2964 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 22 00:41:56.928395 kubelet[2964]: I0122 00:41:56.928374 2964 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 22 00:41:56.928459 kubelet[2964]: I0122 00:41:56.928408 2964 server.go:1287] "Started kubelet" Jan 22 00:41:56.928675 kubelet[2964]: I0122 00:41:56.928639 2964 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 22 00:41:56.933101 kubelet[2964]: I0122 00:41:56.933055 2964 server.go:479] "Adding debug handlers to kubelet server" Jan 22 00:41:56.937929 kubelet[2964]: I0122 00:41:56.937564 2964 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 22 00:41:56.937929 kubelet[2964]: I0122 00:41:56.937854 2964 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 22 00:41:56.938883 kubelet[2964]: I0122 00:41:56.938799 2964 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 22 00:41:56.943056 kubelet[2964]: E0122 00:41:56.939166 2964 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.26.54:6443/api/v1/namespaces/default/events\": dial tcp 172.31.26.54:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-26-54.188ce6cbde145fd1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-26-54,UID:ip-172-31-26-54,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-26-54,},FirstTimestamp:2026-01-22 00:41:56.928389073 +0000 UTC m=+0.504687005,LastTimestamp:2026-01-22 00:41:56.928389073 +0000 UTC m=+0.504687005,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-26-54,}" Jan 22 00:41:56.943782 kubelet[2964]: I0122 00:41:56.943761 2964 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 22 00:41:56.948858 kubelet[2964]: I0122 00:41:56.948457 2964 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 22 00:41:56.950967 kubelet[2964]: E0122 00:41:56.950922 2964 kubelet_node_status.go:466] "Error getting the current node 
from lister" err="node \"ip-172-31-26-54\" not found" Jan 22 00:41:56.951226 kubelet[2964]: I0122 00:41:56.951202 2964 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 22 00:41:56.951278 kubelet[2964]: I0122 00:41:56.951253 2964 reconciler.go:26] "Reconciler: start to sync state" Jan 22 00:41:56.951810 kubelet[2964]: W0122 00:41:56.951545 2964 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.26.54:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.26.54:6443: connect: connection refused Jan 22 00:41:56.951810 kubelet[2964]: E0122 00:41:56.951600 2964 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.26.54:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.26.54:6443: connect: connection refused" logger="UnhandledError" Jan 22 00:41:56.951810 kubelet[2964]: E0122 00:41:56.951659 2964 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.26.54:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-26-54?timeout=10s\": dial tcp 172.31.26.54:6443: connect: connection refused" interval="200ms" Jan 22 00:41:56.953000 audit[2975]: NETFILTER_CFG table=mangle:42 family=2 entries=2 op=nft_register_chain pid=2975 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:41:56.953000 audit[2975]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffc8a0fe710 a2=0 a3=0 items=0 ppid=2964 pid=2975 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:41:56.953000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 22 00:41:56.954960 kubelet[2964]: I0122 00:41:56.954944 2964 factory.go:221] Registration of the systemd container factory successfully Jan 22 00:41:56.955231 kubelet[2964]: I0122 00:41:56.955214 2964 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 22 00:41:56.955000 audit[2976]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_chain pid=2976 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:41:56.955000 audit[2976]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffea50f8c10 a2=0 a3=0 items=0 ppid=2964 pid=2976 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:41:56.955000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 22 00:41:56.958000 audit[2978]: NETFILTER_CFG table=filter:44 family=2 entries=2 op=nft_register_chain pid=2978 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:41:56.958000 audit[2978]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7fff627a7360 a2=0 a3=0 items=0 ppid=2964 pid=2978 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:41:56.958000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 22 00:41:56.960612 kubelet[2964]: I0122 00:41:56.960571 2964 factory.go:221] Registration of the containerd container factory successfully Jan 22 00:41:56.964000 audit[2980]: NETFILTER_CFG table=filter:45 family=2 entries=2 op=nft_register_chain pid=2980 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:41:56.964000 audit[2980]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffe26c98130 a2=0 a3=0 items=0 ppid=2964 pid=2980 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:41:56.964000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 22 00:41:56.971304 kubelet[2964]: E0122 00:41:56.971280 2964 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 22 00:41:56.979000 audit[2986]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2986 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:41:56.979000 audit[2986]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7ffe614cd2f0 a2=0 a3=0 items=0 ppid=2964 pid=2986 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:41:56.979000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Jan 22 00:41:56.982455 kubelet[2964]: I0122 00:41:56.982421 2964 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 22 00:41:56.984000 audit[2987]: NETFILTER_CFG table=mangle:47 family=10 entries=2 op=nft_register_chain pid=2987 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 00:41:56.984000 audit[2987]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffc2eb59980 a2=0 a3=0 items=0 ppid=2964 pid=2987 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:41:56.984000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 22 00:41:56.986157 kubelet[2964]: I0122 00:41:56.986125 2964 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 22 00:41:56.986157 kubelet[2964]: I0122 00:41:56.986150 2964 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 22 00:41:56.986266 kubelet[2964]: I0122 00:41:56.986170 2964 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
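The NETFILTER_CFG audit records above carry each iptables invocation only as a hex-encoded PROCTITLE field (the process argv with NUL bytes between arguments). A small Python sketch for turning those hex strings back into readable commands; the example value is copied verbatim from the first PROCTITLE record of this run, the KUBE-IPTABLES-HINT chain creation. Arguments that contain spaces are not re-quoted by this sketch.

# Decode an audit PROCTITLE value: the argv of the process, hex-encoded,
# with NUL bytes separating the individual arguments.
def decode_proctitle(hex_value: str) -> str:
    argv = bytes.fromhex(hex_value).split(b"\x00")
    return " ".join(arg.decode("utf-8", "replace") for arg in argv if arg)

# Copied from the NETFILTER_CFG/PROCTITLE record above.
example = (
    "69707461626C6573002D770035002D5700313030303030"
    "002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65"
)
print(decode_proctitle(example))
# prints: iptables -w 5 -W 100000 -N KUBE-IPTABLES-HINT -t mangle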
Jan 22 00:41:56.986266 kubelet[2964]: I0122 00:41:56.986177 2964 kubelet.go:2382] "Starting kubelet main sync loop" Jan 22 00:41:56.986266 kubelet[2964]: E0122 00:41:56.986215 2964 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 22 00:41:56.986000 audit[2988]: NETFILTER_CFG table=mangle:48 family=2 entries=1 op=nft_register_chain pid=2988 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:41:56.987458 kubelet[2964]: I0122 00:41:56.987217 2964 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 22 00:41:56.987458 kubelet[2964]: I0122 00:41:56.987259 2964 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 22 00:41:56.987458 kubelet[2964]: I0122 00:41:56.987275 2964 state_mem.go:36] "Initialized new in-memory state store" Jan 22 00:41:56.986000 audit[2988]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc5bc64a10 a2=0 a3=0 items=0 ppid=2964 pid=2988 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:41:56.986000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jan 22 00:41:56.988000 audit[2989]: NETFILTER_CFG table=nat:49 family=2 entries=1 op=nft_register_chain pid=2989 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:41:56.988000 audit[2989]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd8fa144c0 a2=0 a3=0 items=0 ppid=2964 pid=2989 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:41:56.988000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 22 00:41:56.989000 audit[2990]: NETFILTER_CFG table=filter:50 family=2 entries=1 op=nft_register_chain pid=2990 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:41:56.989000 audit[2990]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd5ba9c480 a2=0 a3=0 items=0 ppid=2964 pid=2990 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:41:56.989000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 22 00:41:56.990000 audit[2991]: NETFILTER_CFG table=mangle:51 family=10 entries=1 op=nft_register_chain pid=2991 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 00:41:56.990000 audit[2991]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd357069d0 a2=0 a3=0 items=0 ppid=2964 pid=2991 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:41:56.990000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jan 22 00:41:56.991000 audit[2992]: NETFILTER_CFG table=nat:52 family=10 entries=1 op=nft_register_chain pid=2992 subj=system_u:system_r:kernel_t:s0 
comm="ip6tables" Jan 22 00:41:56.991000 audit[2992]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffab41cdd0 a2=0 a3=0 items=0 ppid=2964 pid=2992 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:41:56.991000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 22 00:41:56.993000 audit[2993]: NETFILTER_CFG table=filter:53 family=10 entries=1 op=nft_register_chain pid=2993 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 00:41:56.993000 audit[2993]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffdf4ae17d0 a2=0 a3=0 items=0 ppid=2964 pid=2993 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:41:56.993000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 22 00:41:57.014337 kubelet[2964]: W0122 00:41:56.994555 2964 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.26.54:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.26.54:6443: connect: connection refused Jan 22 00:41:57.014337 kubelet[2964]: E0122 00:41:56.994604 2964 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.26.54:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.26.54:6443: connect: connection refused" logger="UnhandledError" Jan 22 00:41:57.014636 kubelet[2964]: I0122 00:41:57.014598 2964 policy_none.go:49] "None policy: Start" Jan 22 00:41:57.014636 kubelet[2964]: I0122 00:41:57.014636 2964 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 22 00:41:57.014758 kubelet[2964]: I0122 00:41:57.014652 2964 state_mem.go:35] "Initializing new in-memory state store" Jan 22 00:41:57.028812 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 22 00:41:57.044780 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 22 00:41:57.052097 kubelet[2964]: E0122 00:41:57.051701 2964 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-26-54\" not found" Jan 22 00:41:57.052629 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jan 22 00:41:57.063077 kubelet[2964]: I0122 00:41:57.063048 2964 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 22 00:41:57.063372 kubelet[2964]: I0122 00:41:57.063360 2964 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 22 00:41:57.063455 kubelet[2964]: I0122 00:41:57.063428 2964 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 22 00:41:57.063977 kubelet[2964]: I0122 00:41:57.063963 2964 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 22 00:41:57.067350 kubelet[2964]: E0122 00:41:57.067322 2964 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 22 00:41:57.067466 kubelet[2964]: E0122 00:41:57.067362 2964 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-26-54\" not found" Jan 22 00:41:57.109992 systemd[1]: Created slice kubepods-burstable-pod61dc2517e50bff50f6b30c1e763f1dac.slice - libcontainer container kubepods-burstable-pod61dc2517e50bff50f6b30c1e763f1dac.slice. Jan 22 00:41:57.116905 kubelet[2964]: E0122 00:41:57.116759 2964 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-26-54\" not found" node="ip-172-31-26-54" Jan 22 00:41:57.120394 systemd[1]: Created slice kubepods-burstable-pod612d2334c24bda22653ffe4f917d3f03.slice - libcontainer container kubepods-burstable-pod612d2334c24bda22653ffe4f917d3f03.slice. Jan 22 00:41:57.122742 kubelet[2964]: E0122 00:41:57.122712 2964 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-26-54\" not found" node="ip-172-31-26-54" Jan 22 00:41:57.132683 systemd[1]: Created slice kubepods-burstable-pod7d8a3c776a862aba588c2b59d57a3017.slice - libcontainer container kubepods-burstable-pod7d8a3c776a862aba588c2b59d57a3017.slice. 
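The eviction manager that just started its control loop enforces the HardEvictionThresholds recorded in the Node Config dump above: memory.available below 100Mi, nodefs.available below 10%, nodefs.inodesFree below 5%, imagefs.available below 15%, imagefs.inodesFree below 5%. A minimal sketch of how those signals turn into absolute values; the 2 GiB of memory and 30 GiB filesystem used here are assumptions for a t3.small, not figures from the log.

# Minimal parser for the "100Mi"-style quantity seen in the log.
def parse_quantity(q: str) -> int:
    units = {"Ki": 1024, "Mi": 1024**2, "Gi": 1024**3}
    for suffix, factor in units.items():
        if q.endswith(suffix):
            return int(q[: -len(suffix)]) * factor
    return int(q)

# Assumed capacities for illustration only (not read from the log).
node_capacity = {"memory": 2 * 1024**3, "nodefs": 30 * 1024**3, "imagefs": 30 * 1024**3}

# Hard-eviction thresholds copied from the NodeConfig entry above: a fixed
# quantity for memory.available, percentages of capacity for the rest.
thresholds = {
    "memory.available":   parse_quantity("100Mi"),
    "nodefs.available":   0.10 * node_capacity["nodefs"],
    "nodefs.inodesFree":  "5% of nodefs inodes",
    "imagefs.available":  0.15 * node_capacity["imagefs"],
    "imagefs.inodesFree": "5% of imagefs inodes",
}

for signal, value in thresholds.items():
    pretty = value if isinstance(value, str) else f"{value / 1024**2:.0f} MiB"
    print(f"{signal}: evict when below {pretty}")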
Jan 22 00:41:57.135403 kubelet[2964]: E0122 00:41:57.135212 2964 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-26-54\" not found" node="ip-172-31-26-54" Jan 22 00:41:57.152268 kubelet[2964]: E0122 00:41:57.152215 2964 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.26.54:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-26-54?timeout=10s\": dial tcp 172.31.26.54:6443: connect: connection refused" interval="400ms" Jan 22 00:41:57.166127 kubelet[2964]: I0122 00:41:57.166058 2964 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-26-54" Jan 22 00:41:57.166412 kubelet[2964]: E0122 00:41:57.166385 2964 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.26.54:6443/api/v1/nodes\": dial tcp 172.31.26.54:6443: connect: connection refused" node="ip-172-31-26-54" Jan 22 00:41:57.251930 kubelet[2964]: I0122 00:41:57.251856 2964 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/61dc2517e50bff50f6b30c1e763f1dac-ca-certs\") pod \"kube-apiserver-ip-172-31-26-54\" (UID: \"61dc2517e50bff50f6b30c1e763f1dac\") " pod="kube-system/kube-apiserver-ip-172-31-26-54" Jan 22 00:41:57.251930 kubelet[2964]: I0122 00:41:57.251921 2964 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/61dc2517e50bff50f6b30c1e763f1dac-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-26-54\" (UID: \"61dc2517e50bff50f6b30c1e763f1dac\") " pod="kube-system/kube-apiserver-ip-172-31-26-54" Jan 22 00:41:57.251930 kubelet[2964]: I0122 00:41:57.251940 2964 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/612d2334c24bda22653ffe4f917d3f03-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-26-54\" (UID: \"612d2334c24bda22653ffe4f917d3f03\") " pod="kube-system/kube-controller-manager-ip-172-31-26-54" Jan 22 00:41:57.252122 kubelet[2964]: I0122 00:41:57.251961 2964 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/612d2334c24bda22653ffe4f917d3f03-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-26-54\" (UID: \"612d2334c24bda22653ffe4f917d3f03\") " pod="kube-system/kube-controller-manager-ip-172-31-26-54" Jan 22 00:41:57.252122 kubelet[2964]: I0122 00:41:57.251977 2964 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7d8a3c776a862aba588c2b59d57a3017-kubeconfig\") pod \"kube-scheduler-ip-172-31-26-54\" (UID: \"7d8a3c776a862aba588c2b59d57a3017\") " pod="kube-system/kube-scheduler-ip-172-31-26-54" Jan 22 00:41:57.252122 kubelet[2964]: I0122 00:41:57.251990 2964 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/61dc2517e50bff50f6b30c1e763f1dac-k8s-certs\") pod \"kube-apiserver-ip-172-31-26-54\" (UID: \"61dc2517e50bff50f6b30c1e763f1dac\") " pod="kube-system/kube-apiserver-ip-172-31-26-54" Jan 22 00:41:57.252122 kubelet[2964]: I0122 00:41:57.252003 2964 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/612d2334c24bda22653ffe4f917d3f03-ca-certs\") pod \"kube-controller-manager-ip-172-31-26-54\" (UID: \"612d2334c24bda22653ffe4f917d3f03\") " pod="kube-system/kube-controller-manager-ip-172-31-26-54" Jan 22 00:41:57.252122 kubelet[2964]: I0122 00:41:57.252021 2964 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/612d2334c24bda22653ffe4f917d3f03-k8s-certs\") pod \"kube-controller-manager-ip-172-31-26-54\" (UID: \"612d2334c24bda22653ffe4f917d3f03\") " pod="kube-system/kube-controller-manager-ip-172-31-26-54" Jan 22 00:41:57.252260 kubelet[2964]: I0122 00:41:57.252037 2964 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/612d2334c24bda22653ffe4f917d3f03-kubeconfig\") pod \"kube-controller-manager-ip-172-31-26-54\" (UID: \"612d2334c24bda22653ffe4f917d3f03\") " pod="kube-system/kube-controller-manager-ip-172-31-26-54" Jan 22 00:41:57.368164 kubelet[2964]: I0122 00:41:57.368127 2964 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-26-54" Jan 22 00:41:57.368450 kubelet[2964]: E0122 00:41:57.368422 2964 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.26.54:6443/api/v1/nodes\": dial tcp 172.31.26.54:6443: connect: connection refused" node="ip-172-31-26-54" Jan 22 00:41:57.418914 containerd[1953]: time="2026-01-22T00:41:57.418711655Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-26-54,Uid:61dc2517e50bff50f6b30c1e763f1dac,Namespace:kube-system,Attempt:0,}" Jan 22 00:41:57.424740 containerd[1953]: time="2026-01-22T00:41:57.424533794Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-26-54,Uid:612d2334c24bda22653ffe4f917d3f03,Namespace:kube-system,Attempt:0,}" Jan 22 00:41:57.437096 containerd[1953]: time="2026-01-22T00:41:57.437033926Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-26-54,Uid:7d8a3c776a862aba588c2b59d57a3017,Namespace:kube-system,Attempt:0,}" Jan 22 00:41:57.555012 kubelet[2964]: E0122 00:41:57.554770 2964 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.26.54:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-26-54?timeout=10s\": dial tcp 172.31.26.54:6443: connect: connection refused" interval="800ms" Jan 22 00:41:57.565310 containerd[1953]: time="2026-01-22T00:41:57.565274458Z" level=info msg="connecting to shim d71d5f2e7c209d65af6d26d61d3c83bd208d6d2a4481eb478d91502372a7ada2" address="unix:///run/containerd/s/1a22d927b9c7b635ca656bc6bbe4671fc53413069f6d4bcdb6410cf95eb1507f" namespace=k8s.io protocol=ttrpc version=3 Jan 22 00:41:57.568102 containerd[1953]: time="2026-01-22T00:41:57.568018539Z" level=info msg="connecting to shim 6d86e4498593e10f84ddcbdbb8ef7cce9b32e1e90eede8d683f1ac45da2c3455" address="unix:///run/containerd/s/300790fb961c72be63c16253019c0f8406f2fc37f111ee4a3f78d5246f0ac7d4" namespace=k8s.io protocol=ttrpc version=3 Jan 22 00:41:57.576916 containerd[1953]: time="2026-01-22T00:41:57.576547573Z" level=info msg="connecting to shim 790f2487a769b434dd9e7314fb2eb6a095757f58aece830d9773173a535e2813" address="unix:///run/containerd/s/435453e7b9038a1cb913e2e4b7b0467660d52d854bf3f38a19360b9fdfa6e72f" 
namespace=k8s.io protocol=ttrpc version=3 Jan 22 00:41:57.685305 systemd[1]: Started cri-containerd-6d86e4498593e10f84ddcbdbb8ef7cce9b32e1e90eede8d683f1ac45da2c3455.scope - libcontainer container 6d86e4498593e10f84ddcbdbb8ef7cce9b32e1e90eede8d683f1ac45da2c3455. Jan 22 00:41:57.702278 systemd[1]: Started cri-containerd-790f2487a769b434dd9e7314fb2eb6a095757f58aece830d9773173a535e2813.scope - libcontainer container 790f2487a769b434dd9e7314fb2eb6a095757f58aece830d9773173a535e2813. Jan 22 00:41:57.703462 systemd[1]: Started cri-containerd-d71d5f2e7c209d65af6d26d61d3c83bd208d6d2a4481eb478d91502372a7ada2.scope - libcontainer container d71d5f2e7c209d65af6d26d61d3c83bd208d6d2a4481eb478d91502372a7ada2. Jan 22 00:41:57.735000 audit: BPF prog-id=92 op=LOAD Jan 22 00:41:57.736000 audit: BPF prog-id=93 op=LOAD Jan 22 00:41:57.736000 audit[3054]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=3018 pid=3054 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:41:57.736000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3664383665343439383539336531306638346464636264626238656637 Jan 22 00:41:57.737000 audit: BPF prog-id=93 op=UNLOAD Jan 22 00:41:57.737000 audit[3054]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3018 pid=3054 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:41:57.737000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3664383665343439383539336531306638346464636264626238656637 Jan 22 00:41:57.738000 audit: BPF prog-id=94 op=LOAD Jan 22 00:41:57.741000 audit: BPF prog-id=95 op=LOAD Jan 22 00:41:57.741000 audit[3056]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=3034 pid=3056 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:41:57.741000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3739306632343837613736396234333464643965373331346662326562 Jan 22 00:41:57.741000 audit: BPF prog-id=95 op=UNLOAD Jan 22 00:41:57.741000 audit[3056]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3034 pid=3056 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:41:57.741000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3739306632343837613736396234333464643965373331346662326562 Jan 22 00:41:57.742000 audit: BPF 
prog-id=96 op=LOAD Jan 22 00:41:57.743000 audit: BPF prog-id=97 op=LOAD Jan 22 00:41:57.743000 audit[3056]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=3034 pid=3056 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:41:57.743000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3739306632343837613736396234333464643965373331346662326562 Jan 22 00:41:57.743000 audit: BPF prog-id=98 op=LOAD Jan 22 00:41:57.743000 audit[3056]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=3034 pid=3056 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:41:57.743000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3739306632343837613736396234333464643965373331346662326562 Jan 22 00:41:57.743000 audit: BPF prog-id=98 op=UNLOAD Jan 22 00:41:57.743000 audit[3056]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3034 pid=3056 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:41:57.743000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3739306632343837613736396234333464643965373331346662326562 Jan 22 00:41:57.743000 audit: BPF prog-id=97 op=UNLOAD Jan 22 00:41:57.743000 audit[3056]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3034 pid=3056 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:41:57.743000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3739306632343837613736396234333464643965373331346662326562 Jan 22 00:41:57.743000 audit: BPF prog-id=99 op=LOAD Jan 22 00:41:57.743000 audit[3056]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=3034 pid=3056 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:41:57.743000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3739306632343837613736396234333464643965373331346662326562 Jan 22 00:41:57.744000 audit: BPF prog-id=100 op=LOAD Jan 22 00:41:57.744000 audit[3050]: SYSCALL arch=c000003e syscall=321 success=yes 
exit=20 a0=5 a1=c000178238 a2=98 a3=0 items=0 ppid=3020 pid=3050 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:41:57.744000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6437316435663265376332303964363561663664323664363164336338 Jan 22 00:41:57.745000 audit: BPF prog-id=100 op=UNLOAD Jan 22 00:41:57.745000 audit[3050]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3020 pid=3050 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:41:57.745000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6437316435663265376332303964363561663664323664363164336338 Jan 22 00:41:57.745000 audit: BPF prog-id=101 op=LOAD Jan 22 00:41:57.745000 audit[3050]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000178488 a2=98 a3=0 items=0 ppid=3020 pid=3050 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:41:57.745000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6437316435663265376332303964363561663664323664363164336338 Jan 22 00:41:57.745000 audit: BPF prog-id=102 op=LOAD Jan 22 00:41:57.745000 audit[3050]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000178218 a2=98 a3=0 items=0 ppid=3020 pid=3050 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:41:57.745000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6437316435663265376332303964363561663664323664363164336338 Jan 22 00:41:57.746000 audit: BPF prog-id=102 op=UNLOAD Jan 22 00:41:57.746000 audit[3050]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3020 pid=3050 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:41:57.746000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6437316435663265376332303964363561663664323664363164336338 Jan 22 00:41:57.746000 audit: BPF prog-id=101 op=UNLOAD Jan 22 00:41:57.746000 audit[3050]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3020 pid=3050 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:41:57.746000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6437316435663265376332303964363561663664323664363164336338 Jan 22 00:41:57.746000 audit: BPF prog-id=103 op=LOAD Jan 22 00:41:57.746000 audit[3050]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001786e8 a2=98 a3=0 items=0 ppid=3020 pid=3050 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:41:57.746000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6437316435663265376332303964363561663664323664363164336338 Jan 22 00:41:57.747000 audit: BPF prog-id=104 op=LOAD Jan 22 00:41:57.747000 audit[3054]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3018 pid=3054 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:41:57.747000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3664383665343439383539336531306638346464636264626238656637 Jan 22 00:41:57.747000 audit: BPF prog-id=105 op=LOAD Jan 22 00:41:57.747000 audit[3054]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=3018 pid=3054 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:41:57.747000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3664383665343439383539336531306638346464636264626238656637 Jan 22 00:41:57.747000 audit: BPF prog-id=105 op=UNLOAD Jan 22 00:41:57.747000 audit[3054]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3018 pid=3054 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:41:57.747000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3664383665343439383539336531306638346464636264626238656637 Jan 22 00:41:57.748000 audit: BPF prog-id=104 op=UNLOAD Jan 22 00:41:57.748000 audit[3054]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3018 pid=3054 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:41:57.748000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3664383665343439383539336531306638346464636264626238656637 Jan 22 00:41:57.748000 audit: BPF prog-id=106 op=LOAD Jan 22 00:41:57.748000 audit[3054]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=3018 pid=3054 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:41:57.748000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3664383665343439383539336531306638346464636264626238656637 Jan 22 00:41:57.753548 kubelet[2964]: W0122 00:41:57.753393 2964 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.26.54:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.26.54:6443: connect: connection refused Jan 22 00:41:57.753716 kubelet[2964]: E0122 00:41:57.753690 2964 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.26.54:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.26.54:6443: connect: connection refused" logger="UnhandledError" Jan 22 00:41:57.772040 kubelet[2964]: I0122 00:41:57.772012 2964 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-26-54" Jan 22 00:41:57.774379 kubelet[2964]: E0122 00:41:57.774342 2964 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.26.54:6443/api/v1/nodes\": dial tcp 172.31.26.54:6443: connect: connection refused" node="ip-172-31-26-54" Jan 22 00:41:57.812450 containerd[1953]: time="2026-01-22T00:41:57.812388267Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-26-54,Uid:61dc2517e50bff50f6b30c1e763f1dac,Namespace:kube-system,Attempt:0,} returns sandbox id \"d71d5f2e7c209d65af6d26d61d3c83bd208d6d2a4481eb478d91502372a7ada2\"" Jan 22 00:41:57.819896 containerd[1953]: time="2026-01-22T00:41:57.819823141Z" level=info msg="CreateContainer within sandbox \"d71d5f2e7c209d65af6d26d61d3c83bd208d6d2a4481eb478d91502372a7ada2\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 22 00:41:57.838643 containerd[1953]: time="2026-01-22T00:41:57.838536434Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-26-54,Uid:612d2334c24bda22653ffe4f917d3f03,Namespace:kube-system,Attempt:0,} returns sandbox id \"6d86e4498593e10f84ddcbdbb8ef7cce9b32e1e90eede8d683f1ac45da2c3455\"" Jan 22 00:41:57.845780 containerd[1953]: time="2026-01-22T00:41:57.845414734Z" level=info msg="CreateContainer within sandbox \"6d86e4498593e10f84ddcbdbb8ef7cce9b32e1e90eede8d683f1ac45da2c3455\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 22 00:41:57.854482 containerd[1953]: time="2026-01-22T00:41:57.854438206Z" level=info msg="Container ffe0bc6470611a01f840db0a66a5867ec4ea8d366bbbdf1a1758fc4f14c9cc1f: CDI devices from CRI Config.CDIDevices: []" Jan 22 00:41:57.855905 containerd[1953]: 
time="2026-01-22T00:41:57.855816164Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-26-54,Uid:7d8a3c776a862aba588c2b59d57a3017,Namespace:kube-system,Attempt:0,} returns sandbox id \"790f2487a769b434dd9e7314fb2eb6a095757f58aece830d9773173a535e2813\"" Jan 22 00:41:57.861006 containerd[1953]: time="2026-01-22T00:41:57.860964708Z" level=info msg="CreateContainer within sandbox \"790f2487a769b434dd9e7314fb2eb6a095757f58aece830d9773173a535e2813\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 22 00:41:57.869484 containerd[1953]: time="2026-01-22T00:41:57.869443796Z" level=info msg="CreateContainer within sandbox \"d71d5f2e7c209d65af6d26d61d3c83bd208d6d2a4481eb478d91502372a7ada2\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"ffe0bc6470611a01f840db0a66a5867ec4ea8d366bbbdf1a1758fc4f14c9cc1f\"" Jan 22 00:41:57.870144 containerd[1953]: time="2026-01-22T00:41:57.870120366Z" level=info msg="StartContainer for \"ffe0bc6470611a01f840db0a66a5867ec4ea8d366bbbdf1a1758fc4f14c9cc1f\"" Jan 22 00:41:57.871726 containerd[1953]: time="2026-01-22T00:41:57.871692280Z" level=info msg="connecting to shim ffe0bc6470611a01f840db0a66a5867ec4ea8d366bbbdf1a1758fc4f14c9cc1f" address="unix:///run/containerd/s/1a22d927b9c7b635ca656bc6bbe4671fc53413069f6d4bcdb6410cf95eb1507f" protocol=ttrpc version=3 Jan 22 00:41:57.881918 containerd[1953]: time="2026-01-22T00:41:57.881840985Z" level=info msg="Container 2d0ab1dfa6df226894fbb261c87c740be6f5600bb057130f3bc8bbc6a5a5ca8c: CDI devices from CRI Config.CDIDevices: []" Jan 22 00:41:57.886184 containerd[1953]: time="2026-01-22T00:41:57.885790182Z" level=info msg="Container 3128adf35cf4fd85b64326afed60464878fa2cc26a27ddcd0b5d8c830045ff13: CDI devices from CRI Config.CDIDevices: []" Jan 22 00:41:57.898022 containerd[1953]: time="2026-01-22T00:41:57.897963493Z" level=info msg="CreateContainer within sandbox \"6d86e4498593e10f84ddcbdbb8ef7cce9b32e1e90eede8d683f1ac45da2c3455\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"2d0ab1dfa6df226894fbb261c87c740be6f5600bb057130f3bc8bbc6a5a5ca8c\"" Jan 22 00:41:57.898733 containerd[1953]: time="2026-01-22T00:41:57.898691055Z" level=info msg="StartContainer for \"2d0ab1dfa6df226894fbb261c87c740be6f5600bb057130f3bc8bbc6a5a5ca8c\"" Jan 22 00:41:57.901238 containerd[1953]: time="2026-01-22T00:41:57.901195575Z" level=info msg="connecting to shim 2d0ab1dfa6df226894fbb261c87c740be6f5600bb057130f3bc8bbc6a5a5ca8c" address="unix:///run/containerd/s/300790fb961c72be63c16253019c0f8406f2fc37f111ee4a3f78d5246f0ac7d4" protocol=ttrpc version=3 Jan 22 00:41:57.902476 systemd[1]: Started cri-containerd-ffe0bc6470611a01f840db0a66a5867ec4ea8d366bbbdf1a1758fc4f14c9cc1f.scope - libcontainer container ffe0bc6470611a01f840db0a66a5867ec4ea8d366bbbdf1a1758fc4f14c9cc1f. 
Jan 22 00:41:57.905679 containerd[1953]: time="2026-01-22T00:41:57.905239454Z" level=info msg="CreateContainer within sandbox \"790f2487a769b434dd9e7314fb2eb6a095757f58aece830d9773173a535e2813\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"3128adf35cf4fd85b64326afed60464878fa2cc26a27ddcd0b5d8c830045ff13\"" Jan 22 00:41:57.906865 containerd[1953]: time="2026-01-22T00:41:57.906658996Z" level=info msg="StartContainer for \"3128adf35cf4fd85b64326afed60464878fa2cc26a27ddcd0b5d8c830045ff13\"" Jan 22 00:41:57.910233 containerd[1953]: time="2026-01-22T00:41:57.910128526Z" level=info msg="connecting to shim 3128adf35cf4fd85b64326afed60464878fa2cc26a27ddcd0b5d8c830045ff13" address="unix:///run/containerd/s/435453e7b9038a1cb913e2e4b7b0467660d52d854bf3f38a19360b9fdfa6e72f" protocol=ttrpc version=3 Jan 22 00:41:57.936663 kubelet[2964]: W0122 00:41:57.936103 2964 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.26.54:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.26.54:6443: connect: connection refused Jan 22 00:41:57.937057 kubelet[2964]: E0122 00:41:57.936948 2964 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.26.54:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.26.54:6443: connect: connection refused" logger="UnhandledError" Jan 22 00:41:57.939000 audit: BPF prog-id=107 op=LOAD Jan 22 00:41:57.942000 audit: BPF prog-id=108 op=LOAD Jan 22 00:41:57.942000 audit[3134]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178238 a2=98 a3=0 items=0 ppid=3020 pid=3134 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:41:57.942000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6666653062633634373036313161303166383430646230613636613538 Jan 22 00:41:57.942000 audit: BPF prog-id=108 op=UNLOAD Jan 22 00:41:57.942000 audit[3134]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3020 pid=3134 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:41:57.942000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6666653062633634373036313161303166383430646230613636613538 Jan 22 00:41:57.944000 audit: BPF prog-id=109 op=LOAD Jan 22 00:41:57.944000 audit[3134]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178488 a2=98 a3=0 items=0 ppid=3020 pid=3134 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:41:57.944000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6666653062633634373036313161303166383430646230613636613538 Jan 22 00:41:57.944000 audit: BPF prog-id=110 op=LOAD Jan 22 00:41:57.944000 audit[3134]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000178218 a2=98 a3=0 items=0 ppid=3020 pid=3134 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:41:57.944000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6666653062633634373036313161303166383430646230613636613538 Jan 22 00:41:57.944000 audit: BPF prog-id=110 op=UNLOAD Jan 22 00:41:57.944000 audit[3134]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3020 pid=3134 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:41:57.944000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6666653062633634373036313161303166383430646230613636613538 Jan 22 00:41:57.944000 audit: BPF prog-id=109 op=UNLOAD Jan 22 00:41:57.944000 audit[3134]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3020 pid=3134 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:41:57.944000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6666653062633634373036313161303166383430646230613636613538 Jan 22 00:41:57.944000 audit: BPF prog-id=111 op=LOAD Jan 22 00:41:57.944000 audit[3134]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001786e8 a2=98 a3=0 items=0 ppid=3020 pid=3134 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:41:57.944000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6666653062633634373036313161303166383430646230613636613538 Jan 22 00:41:57.949194 systemd[1]: Started cri-containerd-3128adf35cf4fd85b64326afed60464878fa2cc26a27ddcd0b5d8c830045ff13.scope - libcontainer container 3128adf35cf4fd85b64326afed60464878fa2cc26a27ddcd0b5d8c830045ff13. Jan 22 00:41:57.959373 systemd[1]: Started cri-containerd-2d0ab1dfa6df226894fbb261c87c740be6f5600bb057130f3bc8bbc6a5a5ca8c.scope - libcontainer container 2d0ab1dfa6df226894fbb261c87c740be6f5600bb057130f3bc8bbc6a5a5ca8c. 
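By this point each control-plane static pod has gone through the same containerd sequence: RunPodSandbox returns a sandbox id, CreateContainer inside that sandbox returns a container id, and systemd starts a matching cri-containerd-<container-id>.scope unit. The ids below are collected from the entries above; the printout is only a convenience for cross-referencing them against the scope names.

# Sandbox and container ids gathered from the containerd/systemd entries above.
static_pods = {
    "kube-apiserver-ip-172-31-26-54": {
        "sandbox":   "d71d5f2e7c209d65af6d26d61d3c83bd208d6d2a4481eb478d91502372a7ada2",
        "container": "ffe0bc6470611a01f840db0a66a5867ec4ea8d366bbbdf1a1758fc4f14c9cc1f",
    },
    "kube-controller-manager-ip-172-31-26-54": {
        "sandbox":   "6d86e4498593e10f84ddcbdbb8ef7cce9b32e1e90eede8d683f1ac45da2c3455",
        "container": "2d0ab1dfa6df226894fbb261c87c740be6f5600bb057130f3bc8bbc6a5a5ca8c",
    },
    "kube-scheduler-ip-172-31-26-54": {
        "sandbox":   "790f2487a769b434dd9e7314fb2eb6a095757f58aece830d9773173a535e2813",
        "container": "3128adf35cf4fd85b64326afed60464878fa2cc26a27ddcd0b5d8c830045ff13",
    },
}

for pod, ids in static_pods.items():
    print(pod)
    print(f"  sandbox   {ids['sandbox'][:12]}...")
    print(f"  container {ids['container'][:12]}...  (cri-containerd-{ids['container']}.scope)")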
Jan 22 00:41:57.988000 audit: BPF prog-id=112 op=LOAD Jan 22 00:41:57.989000 audit: BPF prog-id=113 op=LOAD Jan 22 00:41:57.989000 audit[3156]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=3034 pid=3156 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:41:57.989000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3331323861646633356366346664383562363433323661666564363034 Jan 22 00:41:57.989000 audit: BPF prog-id=113 op=UNLOAD Jan 22 00:41:57.989000 audit[3156]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3034 pid=3156 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:41:57.989000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3331323861646633356366346664383562363433323661666564363034 Jan 22 00:41:57.989000 audit: BPF prog-id=114 op=LOAD Jan 22 00:41:57.989000 audit[3156]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3034 pid=3156 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:41:57.989000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3331323861646633356366346664383562363433323661666564363034 Jan 22 00:41:57.990000 audit: BPF prog-id=115 op=LOAD Jan 22 00:41:57.990000 audit[3156]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=3034 pid=3156 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:41:57.990000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3331323861646633356366346664383562363433323661666564363034 Jan 22 00:41:57.990000 audit: BPF prog-id=115 op=UNLOAD Jan 22 00:41:57.990000 audit[3156]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3034 pid=3156 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:41:57.990000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3331323861646633356366346664383562363433323661666564363034 Jan 22 00:41:57.990000 audit: BPF prog-id=114 op=UNLOAD Jan 22 00:41:57.990000 audit[3156]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3034 pid=3156 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:41:57.990000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3331323861646633356366346664383562363433323661666564363034 Jan 22 00:41:57.991000 audit: BPF prog-id=116 op=LOAD Jan 22 00:41:57.991000 audit[3156]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=3034 pid=3156 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:41:57.991000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3331323861646633356366346664383562363433323661666564363034 Jan 22 00:41:58.007000 audit: BPF prog-id=117 op=LOAD Jan 22 00:41:58.009000 audit: BPF prog-id=118 op=LOAD Jan 22 00:41:58.009000 audit[3147]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=3018 pid=3147 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:41:58.009000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3264306162316466613664663232363839346662623236316338376337 Jan 22 00:41:58.010000 audit: BPF prog-id=118 op=UNLOAD Jan 22 00:41:58.010000 audit[3147]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3018 pid=3147 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:41:58.010000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3264306162316466613664663232363839346662623236316338376337 Jan 22 00:41:58.010000 audit: BPF prog-id=119 op=LOAD Jan 22 00:41:58.010000 audit[3147]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=3018 pid=3147 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:41:58.010000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3264306162316466613664663232363839346662623236316338376337 Jan 22 00:41:58.010000 audit: BPF prog-id=120 op=LOAD Jan 22 00:41:58.010000 audit[3147]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=3018 pid=3147 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:41:58.010000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3264306162316466613664663232363839346662623236316338376337 Jan 22 00:41:58.010000 audit: BPF prog-id=120 op=UNLOAD Jan 22 00:41:58.010000 audit[3147]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3018 pid=3147 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:41:58.010000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3264306162316466613664663232363839346662623236316338376337 Jan 22 00:41:58.010000 audit: BPF prog-id=119 op=UNLOAD Jan 22 00:41:58.010000 audit[3147]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3018 pid=3147 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:41:58.010000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3264306162316466613664663232363839346662623236316338376337 Jan 22 00:41:58.010000 audit: BPF prog-id=121 op=LOAD Jan 22 00:41:58.010000 audit[3147]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=3018 pid=3147 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:41:58.010000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3264306162316466613664663232363839346662623236316338376337 Jan 22 00:41:58.053604 containerd[1953]: time="2026-01-22T00:41:58.053545165Z" level=info msg="StartContainer for \"ffe0bc6470611a01f840db0a66a5867ec4ea8d366bbbdf1a1758fc4f14c9cc1f\" returns successfully" Jan 22 00:41:58.077152 containerd[1953]: time="2026-01-22T00:41:58.077111323Z" level=info msg="StartContainer for \"3128adf35cf4fd85b64326afed60464878fa2cc26a27ddcd0b5d8c830045ff13\" returns successfully" Jan 22 00:41:58.096587 containerd[1953]: time="2026-01-22T00:41:58.096520578Z" level=info msg="StartContainer for \"2d0ab1dfa6df226894fbb261c87c740be6f5600bb057130f3bc8bbc6a5a5ca8c\" returns successfully" Jan 22 00:41:58.155633 kubelet[2964]: W0122 00:41:58.155459 2964 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.26.54:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-26-54&limit=500&resourceVersion=0": dial tcp 172.31.26.54:6443: connect: connection refused Jan 22 00:41:58.155767 kubelet[2964]: E0122 00:41:58.155647 2964 reflector.go:166] "Unhandled 
Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.26.54:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-26-54&limit=500&resourceVersion=0\": dial tcp 172.31.26.54:6443: connect: connection refused" logger="UnhandledError" Jan 22 00:41:58.202326 kubelet[2964]: W0122 00:41:58.202120 2964 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.26.54:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.26.54:6443: connect: connection refused Jan 22 00:41:58.202326 kubelet[2964]: E0122 00:41:58.202271 2964 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.26.54:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.26.54:6443: connect: connection refused" logger="UnhandledError" Jan 22 00:41:58.356895 kubelet[2964]: E0122 00:41:58.356583 2964 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.26.54:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-26-54?timeout=10s\": dial tcp 172.31.26.54:6443: connect: connection refused" interval="1.6s" Jan 22 00:41:58.576773 kubelet[2964]: I0122 00:41:58.576360 2964 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-26-54" Jan 22 00:41:58.576773 kubelet[2964]: E0122 00:41:58.576739 2964 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.26.54:6443/api/v1/nodes\": dial tcp 172.31.26.54:6443: connect: connection refused" node="ip-172-31-26-54" Jan 22 00:41:59.052906 kubelet[2964]: E0122 00:41:59.051309 2964 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-26-54\" not found" node="ip-172-31-26-54" Jan 22 00:41:59.068531 kubelet[2964]: E0122 00:41:59.068504 2964 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-26-54\" not found" node="ip-172-31-26-54" Jan 22 00:41:59.073291 kubelet[2964]: E0122 00:41:59.073266 2964 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-26-54\" not found" node="ip-172-31-26-54" Jan 22 00:42:00.078100 kubelet[2964]: E0122 00:42:00.077671 2964 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-26-54\" not found" node="ip-172-31-26-54" Jan 22 00:42:00.078100 kubelet[2964]: E0122 00:42:00.077982 2964 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-26-54\" not found" node="ip-172-31-26-54" Jan 22 00:42:00.079024 kubelet[2964]: E0122 00:42:00.079005 2964 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-26-54\" not found" node="ip-172-31-26-54" Jan 22 00:42:00.179630 kubelet[2964]: I0122 00:42:00.178856 2964 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-26-54" Jan 22 00:42:01.080564 kubelet[2964]: E0122 00:42:01.080529 2964 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-26-54\" not found" node="ip-172-31-26-54" Jan 22 00:42:01.081017 kubelet[2964]: E0122 00:42:01.080960 2964 
kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-26-54\" not found" node="ip-172-31-26-54" Jan 22 00:42:02.523125 kubelet[2964]: E0122 00:42:02.523077 2964 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-26-54\" not found" node="ip-172-31-26-54" Jan 22 00:42:02.581016 kubelet[2964]: I0122 00:42:02.580954 2964 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-26-54" Jan 22 00:42:02.586560 kubelet[2964]: I0122 00:42:02.586274 2964 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-26-54" Jan 22 00:42:02.600515 kubelet[2964]: E0122 00:42:02.600345 2964 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-26-54\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-26-54" Jan 22 00:42:02.652442 kubelet[2964]: I0122 00:42:02.652400 2964 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-26-54" Jan 22 00:42:02.660568 kubelet[2964]: E0122 00:42:02.660444 2964 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-26-54.188ce6cbde145fd1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-26-54,UID:ip-172-31-26-54,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-26-54,},FirstTimestamp:2026-01-22 00:41:56.928389073 +0000 UTC m=+0.504687005,LastTimestamp:2026-01-22 00:41:56.928389073 +0000 UTC m=+0.504687005,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-26-54,}" Jan 22 00:42:02.661671 kubelet[2964]: E0122 00:42:02.660796 2964 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-26-54\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-26-54" Jan 22 00:42:02.661671 kubelet[2964]: I0122 00:42:02.660823 2964 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-26-54" Jan 22 00:42:02.667518 kubelet[2964]: E0122 00:42:02.667464 2964 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-26-54\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-26-54" Jan 22 00:42:02.667518 kubelet[2964]: I0122 00:42:02.667514 2964 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-26-54" Jan 22 00:42:02.669569 kubelet[2964]: E0122 00:42:02.669521 2964 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-26-54\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-26-54" Jan 22 00:42:02.916994 kubelet[2964]: I0122 00:42:02.916570 2964 apiserver.go:52] "Watching apiserver" Jan 22 00:42:02.953367 kubelet[2964]: I0122 00:42:02.952553 2964 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 22 00:42:03.531922 update_engine[1942]: I20260122 00:42:03.531340 1942 update_attempter.cc:509] Updating boot flags... 
Jan 22 00:42:04.268602 kubelet[2964]: I0122 00:42:04.268573 2964 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-26-54" Jan 22 00:42:04.886269 systemd[1]: Reload requested from client PID 3330 ('systemctl') (unit session-9.scope)... Jan 22 00:42:04.886294 systemd[1]: Reloading... Jan 22 00:42:05.031934 zram_generator::config[3374]: No configuration found. Jan 22 00:42:05.336154 systemd[1]: Reloading finished in 449 ms. Jan 22 00:42:05.369458 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 22 00:42:05.381583 systemd[1]: kubelet.service: Deactivated successfully. Jan 22 00:42:05.381825 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 22 00:42:05.386860 kernel: kauditd_printk_skb: 204 callbacks suppressed Jan 22 00:42:05.386939 kernel: audit: type=1131 audit(1769042525.380:396): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:42:05.380000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:42:05.381916 systemd[1]: kubelet.service: Consumed 1.002s CPU time, 128.2M memory peak. Jan 22 00:42:05.386189 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 22 00:42:05.385000 audit: BPF prog-id=122 op=LOAD Jan 22 00:42:05.388879 kernel: audit: type=1334 audit(1769042525.385:397): prog-id=122 op=LOAD Jan 22 00:42:05.385000 audit: BPF prog-id=123 op=LOAD Jan 22 00:42:05.391400 kernel: audit: type=1334 audit(1769042525.385:398): prog-id=123 op=LOAD Jan 22 00:42:05.391454 kernel: audit: type=1334 audit(1769042525.385:399): prog-id=81 op=UNLOAD Jan 22 00:42:05.385000 audit: BPF prog-id=81 op=UNLOAD Jan 22 00:42:05.392725 kernel: audit: type=1334 audit(1769042525.385:400): prog-id=82 op=UNLOAD Jan 22 00:42:05.385000 audit: BPF prog-id=82 op=UNLOAD Jan 22 00:42:05.393918 kernel: audit: type=1334 audit(1769042525.386:401): prog-id=124 op=LOAD Jan 22 00:42:05.386000 audit: BPF prog-id=124 op=LOAD Jan 22 00:42:05.395118 kernel: audit: type=1334 audit(1769042525.386:402): prog-id=91 op=UNLOAD Jan 22 00:42:05.386000 audit: BPF prog-id=91 op=UNLOAD Jan 22 00:42:05.396272 kernel: audit: type=1334 audit(1769042525.393:403): prog-id=125 op=LOAD Jan 22 00:42:05.393000 audit: BPF prog-id=125 op=LOAD Jan 22 00:42:05.397760 kernel: audit: type=1334 audit(1769042525.393:404): prog-id=78 op=UNLOAD Jan 22 00:42:05.398294 kernel: audit: type=1334 audit(1769042525.393:405): prog-id=126 op=LOAD Jan 22 00:42:05.393000 audit: BPF prog-id=78 op=UNLOAD Jan 22 00:42:05.393000 audit: BPF prog-id=126 op=LOAD Jan 22 00:42:05.393000 audit: BPF prog-id=127 op=LOAD Jan 22 00:42:05.393000 audit: BPF prog-id=79 op=UNLOAD Jan 22 00:42:05.393000 audit: BPF prog-id=80 op=UNLOAD Jan 22 00:42:05.395000 audit: BPF prog-id=128 op=LOAD Jan 22 00:42:05.395000 audit: BPF prog-id=83 op=UNLOAD Jan 22 00:42:05.395000 audit: BPF prog-id=129 op=LOAD Jan 22 00:42:05.395000 audit: BPF prog-id=130 op=LOAD Jan 22 00:42:05.395000 audit: BPF prog-id=84 op=UNLOAD Jan 22 00:42:05.395000 audit: BPF prog-id=85 op=UNLOAD Jan 22 00:42:05.395000 audit: BPF prog-id=131 op=LOAD Jan 22 00:42:05.395000 audit: BPF prog-id=87 op=UNLOAD Jan 22 00:42:05.395000 audit: BPF prog-id=132 op=LOAD Jan 22 00:42:05.395000 audit: BPF 
prog-id=133 op=LOAD Jan 22 00:42:05.395000 audit: BPF prog-id=88 op=UNLOAD Jan 22 00:42:05.395000 audit: BPF prog-id=89 op=UNLOAD Jan 22 00:42:05.400000 audit: BPF prog-id=134 op=LOAD Jan 22 00:42:05.400000 audit: BPF prog-id=86 op=UNLOAD Jan 22 00:42:05.401000 audit: BPF prog-id=135 op=LOAD Jan 22 00:42:05.401000 audit: BPF prog-id=75 op=UNLOAD Jan 22 00:42:05.401000 audit: BPF prog-id=136 op=LOAD Jan 22 00:42:05.402000 audit: BPF prog-id=137 op=LOAD Jan 22 00:42:05.402000 audit: BPF prog-id=76 op=UNLOAD Jan 22 00:42:05.402000 audit: BPF prog-id=77 op=UNLOAD Jan 22 00:42:05.402000 audit: BPF prog-id=138 op=LOAD Jan 22 00:42:05.402000 audit: BPF prog-id=72 op=UNLOAD Jan 22 00:42:05.402000 audit: BPF prog-id=139 op=LOAD Jan 22 00:42:05.402000 audit: BPF prog-id=140 op=LOAD Jan 22 00:42:05.402000 audit: BPF prog-id=73 op=UNLOAD Jan 22 00:42:05.402000 audit: BPF prog-id=74 op=UNLOAD Jan 22 00:42:05.403000 audit: BPF prog-id=141 op=LOAD Jan 22 00:42:05.403000 audit: BPF prog-id=90 op=UNLOAD Jan 22 00:42:05.749000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:42:05.749581 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 22 00:42:05.761505 (kubelet)[3437]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 22 00:42:05.832258 kubelet[3437]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 22 00:42:05.832758 kubelet[3437]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 22 00:42:05.832758 kubelet[3437]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 22 00:42:05.838931 kubelet[3437]: I0122 00:42:05.838284 3437 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 22 00:42:05.853859 kubelet[3437]: I0122 00:42:05.853793 3437 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 22 00:42:05.853859 kubelet[3437]: I0122 00:42:05.853838 3437 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 22 00:42:05.854257 kubelet[3437]: I0122 00:42:05.854215 3437 server.go:954] "Client rotation is on, will bootstrap in background" Jan 22 00:42:05.859959 kubelet[3437]: I0122 00:42:05.859915 3437 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 22 00:42:05.884659 kubelet[3437]: I0122 00:42:05.884540 3437 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 22 00:42:05.905597 kubelet[3437]: I0122 00:42:05.905562 3437 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 22 00:42:05.908905 kubelet[3437]: I0122 00:42:05.908808 3437 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 22 00:42:05.912676 kubelet[3437]: I0122 00:42:05.912618 3437 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 22 00:42:05.914450 kubelet[3437]: I0122 00:42:05.912681 3437 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-26-54","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 22 00:42:05.914450 kubelet[3437]: I0122 00:42:05.914396 3437 topology_manager.go:138] "Creating topology manager with none policy" Jan 22 00:42:05.914450 kubelet[3437]: I0122 00:42:05.914412 3437 container_manager_linux.go:304] "Creating device plugin manager" Jan 22 00:42:05.915010 kubelet[3437]: I0122 00:42:05.914470 3437 state_mem.go:36] "Initialized new in-memory state store" Jan 22 00:42:05.916944 kubelet[3437]: I0122 00:42:05.916704 3437 kubelet.go:446] "Attempting to sync node with API server" Jan 22 00:42:05.916944 kubelet[3437]: I0122 00:42:05.916744 3437 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 22 00:42:05.916944 kubelet[3437]: I0122 00:42:05.916774 3437 kubelet.go:352] "Adding apiserver pod source" Jan 22 00:42:05.916944 kubelet[3437]: I0122 00:42:05.916787 3437 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 22 00:42:05.920379 kubelet[3437]: I0122 00:42:05.920276 3437 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 22 00:42:05.921579 kubelet[3437]: I0122 00:42:05.921087 3437 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 22 00:42:05.926770 kubelet[3437]: I0122 00:42:05.926291 3437 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 22 00:42:05.927122 kubelet[3437]: I0122 00:42:05.927075 3437 server.go:1287] "Started kubelet" Jan 22 00:42:05.952246 kubelet[3437]: I0122 00:42:05.952208 3437 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 22 00:42:05.956235 kubelet[3437]: I0122 00:42:05.955887 3437 server.go:169] "Starting to 
listen" address="0.0.0.0" port=10250 Jan 22 00:42:05.958717 kubelet[3437]: I0122 00:42:05.958688 3437 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 22 00:42:05.959813 kubelet[3437]: E0122 00:42:05.959772 3437 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-26-54\" not found" Jan 22 00:42:05.965163 kubelet[3437]: I0122 00:42:05.963999 3437 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 22 00:42:05.965163 kubelet[3437]: I0122 00:42:05.964146 3437 reconciler.go:26] "Reconciler: start to sync state" Jan 22 00:42:05.969273 kubelet[3437]: I0122 00:42:05.969247 3437 server.go:479] "Adding debug handlers to kubelet server" Jan 22 00:42:05.974957 kubelet[3437]: I0122 00:42:05.969961 3437 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 22 00:42:05.975551 kubelet[3437]: I0122 00:42:05.975442 3437 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 22 00:42:05.975962 kubelet[3437]: I0122 00:42:05.972184 3437 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 22 00:42:05.978031 kubelet[3437]: E0122 00:42:05.978009 3437 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 22 00:42:05.985162 kubelet[3437]: I0122 00:42:05.985035 3437 factory.go:221] Registration of the containerd container factory successfully Jan 22 00:42:05.985628 kubelet[3437]: I0122 00:42:05.985519 3437 factory.go:221] Registration of the systemd container factory successfully Jan 22 00:42:05.986053 kubelet[3437]: I0122 00:42:05.986013 3437 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 22 00:42:06.020065 kubelet[3437]: I0122 00:42:06.019604 3437 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 22 00:42:06.027268 kubelet[3437]: I0122 00:42:06.026678 3437 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 22 00:42:06.027268 kubelet[3437]: I0122 00:42:06.026717 3437 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 22 00:42:06.027268 kubelet[3437]: I0122 00:42:06.026744 3437 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jan 22 00:42:06.027268 kubelet[3437]: I0122 00:42:06.026753 3437 kubelet.go:2382] "Starting kubelet main sync loop" Jan 22 00:42:06.027268 kubelet[3437]: E0122 00:42:06.026814 3437 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 22 00:42:06.086449 kubelet[3437]: I0122 00:42:06.086429 3437 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 22 00:42:06.086656 kubelet[3437]: I0122 00:42:06.086586 3437 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 22 00:42:06.086844 kubelet[3437]: I0122 00:42:06.086785 3437 state_mem.go:36] "Initialized new in-memory state store" Jan 22 00:42:06.087171 kubelet[3437]: I0122 00:42:06.087096 3437 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 22 00:42:06.087171 kubelet[3437]: I0122 00:42:06.087130 3437 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 22 00:42:06.087377 kubelet[3437]: I0122 00:42:06.087155 3437 policy_none.go:49] "None policy: Start" Jan 22 00:42:06.087377 kubelet[3437]: I0122 00:42:06.087319 3437 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 22 00:42:06.087377 kubelet[3437]: I0122 00:42:06.087333 3437 state_mem.go:35] "Initializing new in-memory state store" Jan 22 00:42:06.087887 kubelet[3437]: I0122 00:42:06.087807 3437 state_mem.go:75] "Updated machine memory state" Jan 22 00:42:06.096275 kubelet[3437]: I0122 00:42:06.096238 3437 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 22 00:42:06.096609 kubelet[3437]: I0122 00:42:06.096584 3437 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 22 00:42:06.096755 kubelet[3437]: I0122 00:42:06.096685 3437 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 22 00:42:06.098114 kubelet[3437]: I0122 00:42:06.097899 3437 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 22 00:42:06.104817 kubelet[3437]: E0122 00:42:06.104654 3437 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 22 00:42:06.132858 kubelet[3437]: I0122 00:42:06.132022 3437 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-26-54" Jan 22 00:42:06.138783 kubelet[3437]: I0122 00:42:06.138516 3437 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-26-54" Jan 22 00:42:06.139390 kubelet[3437]: I0122 00:42:06.139315 3437 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-26-54" Jan 22 00:42:06.160131 kubelet[3437]: E0122 00:42:06.160053 3437 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-26-54\" already exists" pod="kube-system/kube-scheduler-ip-172-31-26-54" Jan 22 00:42:06.166113 kubelet[3437]: I0122 00:42:06.166069 3437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/612d2334c24bda22653ffe4f917d3f03-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-26-54\" (UID: \"612d2334c24bda22653ffe4f917d3f03\") " pod="kube-system/kube-controller-manager-ip-172-31-26-54" Jan 22 00:42:06.166691 kubelet[3437]: I0122 00:42:06.166333 3437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7d8a3c776a862aba588c2b59d57a3017-kubeconfig\") pod \"kube-scheduler-ip-172-31-26-54\" (UID: \"7d8a3c776a862aba588c2b59d57a3017\") " pod="kube-system/kube-scheduler-ip-172-31-26-54" Jan 22 00:42:06.166691 kubelet[3437]: I0122 00:42:06.166404 3437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/61dc2517e50bff50f6b30c1e763f1dac-ca-certs\") pod \"kube-apiserver-ip-172-31-26-54\" (UID: \"61dc2517e50bff50f6b30c1e763f1dac\") " pod="kube-system/kube-apiserver-ip-172-31-26-54" Jan 22 00:42:06.166691 kubelet[3437]: I0122 00:42:06.166449 3437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/612d2334c24bda22653ffe4f917d3f03-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-26-54\" (UID: \"612d2334c24bda22653ffe4f917d3f03\") " pod="kube-system/kube-controller-manager-ip-172-31-26-54" Jan 22 00:42:06.166691 kubelet[3437]: I0122 00:42:06.166479 3437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/612d2334c24bda22653ffe4f917d3f03-kubeconfig\") pod \"kube-controller-manager-ip-172-31-26-54\" (UID: \"612d2334c24bda22653ffe4f917d3f03\") " pod="kube-system/kube-controller-manager-ip-172-31-26-54" Jan 22 00:42:06.166691 kubelet[3437]: I0122 00:42:06.166503 3437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/61dc2517e50bff50f6b30c1e763f1dac-k8s-certs\") pod \"kube-apiserver-ip-172-31-26-54\" (UID: \"61dc2517e50bff50f6b30c1e763f1dac\") " pod="kube-system/kube-apiserver-ip-172-31-26-54" Jan 22 00:42:06.166955 kubelet[3437]: I0122 00:42:06.166534 3437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/61dc2517e50bff50f6b30c1e763f1dac-usr-share-ca-certificates\") pod 
\"kube-apiserver-ip-172-31-26-54\" (UID: \"61dc2517e50bff50f6b30c1e763f1dac\") " pod="kube-system/kube-apiserver-ip-172-31-26-54" Jan 22 00:42:06.166955 kubelet[3437]: I0122 00:42:06.166557 3437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/612d2334c24bda22653ffe4f917d3f03-ca-certs\") pod \"kube-controller-manager-ip-172-31-26-54\" (UID: \"612d2334c24bda22653ffe4f917d3f03\") " pod="kube-system/kube-controller-manager-ip-172-31-26-54" Jan 22 00:42:06.166955 kubelet[3437]: I0122 00:42:06.166583 3437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/612d2334c24bda22653ffe4f917d3f03-k8s-certs\") pod \"kube-controller-manager-ip-172-31-26-54\" (UID: \"612d2334c24bda22653ffe4f917d3f03\") " pod="kube-system/kube-controller-manager-ip-172-31-26-54" Jan 22 00:42:06.220160 kubelet[3437]: I0122 00:42:06.219606 3437 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-26-54" Jan 22 00:42:06.232069 kubelet[3437]: I0122 00:42:06.232031 3437 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-26-54" Jan 22 00:42:06.232196 kubelet[3437]: I0122 00:42:06.232110 3437 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-26-54" Jan 22 00:42:06.918443 kubelet[3437]: I0122 00:42:06.918161 3437 apiserver.go:52] "Watching apiserver" Jan 22 00:42:06.965203 kubelet[3437]: I0122 00:42:06.965108 3437 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 22 00:42:07.068981 kubelet[3437]: I0122 00:42:07.068936 3437 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-26-54" Jan 22 00:42:07.077308 kubelet[3437]: E0122 00:42:07.077268 3437 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-26-54\" already exists" pod="kube-system/kube-apiserver-ip-172-31-26-54" Jan 22 00:42:07.101195 kubelet[3437]: I0122 00:42:07.101121 3437 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-26-54" podStartSLOduration=3.101076582 podStartE2EDuration="3.101076582s" podCreationTimestamp="2026-01-22 00:42:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 00:42:07.100849219 +0000 UTC m=+1.332815377" watchObservedRunningTime="2026-01-22 00:42:07.101076582 +0000 UTC m=+1.333042720" Jan 22 00:42:07.119552 kubelet[3437]: I0122 00:42:07.119252 3437 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-26-54" podStartSLOduration=1.119228688 podStartE2EDuration="1.119228688s" podCreationTimestamp="2026-01-22 00:42:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 00:42:07.11066344 +0000 UTC m=+1.342629578" watchObservedRunningTime="2026-01-22 00:42:07.119228688 +0000 UTC m=+1.351194836" Jan 22 00:42:07.129654 kubelet[3437]: I0122 00:42:07.129600 3437 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-26-54" podStartSLOduration=1.129486715 podStartE2EDuration="1.129486715s" podCreationTimestamp="2026-01-22 00:42:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 00:42:07.120038422 +0000 UTC m=+1.352004567" watchObservedRunningTime="2026-01-22 00:42:07.129486715 +0000 UTC m=+1.361452845" Jan 22 00:42:10.202145 kubelet[3437]: I0122 00:42:10.202098 3437 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 22 00:42:10.203287 containerd[1953]: time="2026-01-22T00:42:10.203202851Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 22 00:42:10.204514 kubelet[3437]: I0122 00:42:10.204466 3437 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 22 00:42:10.965542 systemd[1]: Created slice kubepods-besteffort-podaa3f9b42_d57f_4d45_8785_3d619b032e17.slice - libcontainer container kubepods-besteffort-podaa3f9b42_d57f_4d45_8785_3d619b032e17.slice. Jan 22 00:42:11.002280 kubelet[3437]: I0122 00:42:11.002129 3437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/aa3f9b42-d57f-4d45-8785-3d619b032e17-kube-proxy\") pod \"kube-proxy-84dk9\" (UID: \"aa3f9b42-d57f-4d45-8785-3d619b032e17\") " pod="kube-system/kube-proxy-84dk9" Jan 22 00:42:11.002280 kubelet[3437]: I0122 00:42:11.002168 3437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/aa3f9b42-d57f-4d45-8785-3d619b032e17-xtables-lock\") pod \"kube-proxy-84dk9\" (UID: \"aa3f9b42-d57f-4d45-8785-3d619b032e17\") " pod="kube-system/kube-proxy-84dk9" Jan 22 00:42:11.002280 kubelet[3437]: I0122 00:42:11.002195 3437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/aa3f9b42-d57f-4d45-8785-3d619b032e17-lib-modules\") pod \"kube-proxy-84dk9\" (UID: \"aa3f9b42-d57f-4d45-8785-3d619b032e17\") " pod="kube-system/kube-proxy-84dk9" Jan 22 00:42:11.002280 kubelet[3437]: I0122 00:42:11.002224 3437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5ps6\" (UniqueName: \"kubernetes.io/projected/aa3f9b42-d57f-4d45-8785-3d619b032e17-kube-api-access-c5ps6\") pod \"kube-proxy-84dk9\" (UID: \"aa3f9b42-d57f-4d45-8785-3d619b032e17\") " pod="kube-system/kube-proxy-84dk9" Jan 22 00:42:11.274140 containerd[1953]: time="2026-01-22T00:42:11.273930974Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-84dk9,Uid:aa3f9b42-d57f-4d45-8785-3d619b032e17,Namespace:kube-system,Attempt:0,}" Jan 22 00:42:11.332177 containerd[1953]: time="2026-01-22T00:42:11.332122754Z" level=info msg="connecting to shim 875240bf49ef63fdd591d88e8d76ea3e866f5eb13f74a58f0e412939e3c57a0d" address="unix:///run/containerd/s/a00ebad3d3a09d34453ffce5b9e213ba34ae3368d78ca05b35b6175ee01934e7" namespace=k8s.io protocol=ttrpc version=3 Jan 22 00:42:11.385169 systemd[1]: Started cri-containerd-875240bf49ef63fdd591d88e8d76ea3e866f5eb13f74a58f0e412939e3c57a0d.scope - libcontainer container 875240bf49ef63fdd591d88e8d76ea3e866f5eb13f74a58f0e412939e3c57a0d. Jan 22 00:42:11.396810 systemd[1]: Created slice kubepods-besteffort-podf80908c2_7ff5_4ad9_99fe_716478a0dee4.slice - libcontainer container kubepods-besteffort-podf80908c2_7ff5_4ad9_99fe_716478a0dee4.slice. 
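
The containerd entries above ("RunPodSandbox for &PodSandboxMetadata{...}", "connecting to shim ... address=unix:///run/containerd/s/...") carry their payload as logfmt-style key="value" fields. A rough way to split such a payload into fields (Python sketch; parse_logfmt is an illustrative helper, it assumes the key=value part has already been separated from the journal timestamp prefix, and the sample is shortened from the entry above):

    import shlex

    def parse_logfmt(payload: str) -> dict:
        """Split a containerd key="value" payload into a dict; shlex handles the quoting."""
        fields = {}
        for token in shlex.split(payload):
            key, sep, value = token.partition("=")
            if sep:
                fields[key] = value
        return fields

    sample = 'time="2026-01-22T00:42:11.332122754Z" level=info msg="connecting to shim 875240bf49ef63fdd591d88e8d76ea3e866f5eb13f74a58f0e412939e3c57a0d"'
    print(parse_logfmt(sample)["level"], parse_logfmt(sample)["msg"])
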
Jan 22 00:42:11.404611 kubelet[3437]: I0122 00:42:11.404565 3437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/f80908c2-7ff5-4ad9-99fe-716478a0dee4-var-lib-calico\") pod \"tigera-operator-7dcd859c48-hqbxz\" (UID: \"f80908c2-7ff5-4ad9-99fe-716478a0dee4\") " pod="tigera-operator/tigera-operator-7dcd859c48-hqbxz" Jan 22 00:42:11.405119 kubelet[3437]: I0122 00:42:11.405032 3437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w44w4\" (UniqueName: \"kubernetes.io/projected/f80908c2-7ff5-4ad9-99fe-716478a0dee4-kube-api-access-w44w4\") pod \"tigera-operator-7dcd859c48-hqbxz\" (UID: \"f80908c2-7ff5-4ad9-99fe-716478a0dee4\") " pod="tigera-operator/tigera-operator-7dcd859c48-hqbxz" Jan 22 00:42:11.417291 kernel: kauditd_printk_skb: 32 callbacks suppressed Jan 22 00:42:11.417431 kernel: audit: type=1334 audit(1769042531.413:438): prog-id=142 op=LOAD Jan 22 00:42:11.413000 audit: BPF prog-id=142 op=LOAD Jan 22 00:42:11.419965 kernel: audit: type=1334 audit(1769042531.416:439): prog-id=143 op=LOAD Jan 22 00:42:11.416000 audit: BPF prog-id=143 op=LOAD Jan 22 00:42:11.416000 audit[3501]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=3490 pid=3501 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:11.432321 kernel: audit: type=1300 audit(1769042531.416:439): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=3490 pid=3501 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:11.432443 kernel: audit: type=1327 audit(1769042531.416:439): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3837353234306266343965663633666464353931643838653864373665 Jan 22 00:42:11.416000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3837353234306266343965663633666464353931643838653864373665 Jan 22 00:42:11.416000 audit: BPF prog-id=143 op=UNLOAD Jan 22 00:42:11.440596 kernel: audit: type=1334 audit(1769042531.416:440): prog-id=143 op=UNLOAD Jan 22 00:42:11.440727 kernel: audit: type=1300 audit(1769042531.416:440): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3490 pid=3501 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:11.416000 audit[3501]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3490 pid=3501 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:11.416000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3837353234306266343965663633666464353931643838653864373665 Jan 22 00:42:11.450115 kernel: audit: type=1327 audit(1769042531.416:440): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3837353234306266343965663633666464353931643838653864373665 Jan 22 00:42:11.450214 kernel: audit: type=1334 audit(1769042531.416:441): prog-id=144 op=LOAD Jan 22 00:42:11.416000 audit: BPF prog-id=144 op=LOAD Jan 22 00:42:11.455536 kernel: audit: type=1300 audit(1769042531.416:441): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=3490 pid=3501 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:11.416000 audit[3501]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=3490 pid=3501 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:11.416000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3837353234306266343965663633666464353931643838653864373665 Jan 22 00:42:11.464913 kernel: audit: type=1327 audit(1769042531.416:441): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3837353234306266343965663633666464353931643838653864373665 Jan 22 00:42:11.416000 audit: BPF prog-id=145 op=LOAD Jan 22 00:42:11.416000 audit[3501]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=3490 pid=3501 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:11.416000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3837353234306266343965663633666464353931643838653864373665 Jan 22 00:42:11.417000 audit: BPF prog-id=145 op=UNLOAD Jan 22 00:42:11.417000 audit[3501]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3490 pid=3501 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:11.417000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3837353234306266343965663633666464353931643838653864373665 Jan 22 00:42:11.417000 audit: BPF prog-id=144 op=UNLOAD Jan 22 00:42:11.417000 audit[3501]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3490 pid=3501 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:11.417000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3837353234306266343965663633666464353931643838653864373665 Jan 22 00:42:11.417000 audit: BPF prog-id=146 op=LOAD Jan 22 00:42:11.417000 audit[3501]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=3490 pid=3501 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:11.417000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3837353234306266343965663633666464353931643838653864373665 Jan 22 00:42:11.484744 containerd[1953]: time="2026-01-22T00:42:11.484651243Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-84dk9,Uid:aa3f9b42-d57f-4d45-8785-3d619b032e17,Namespace:kube-system,Attempt:0,} returns sandbox id \"875240bf49ef63fdd591d88e8d76ea3e866f5eb13f74a58f0e412939e3c57a0d\"" Jan 22 00:42:11.495588 containerd[1953]: time="2026-01-22T00:42:11.495410692Z" level=info msg="CreateContainer within sandbox \"875240bf49ef63fdd591d88e8d76ea3e866f5eb13f74a58f0e412939e3c57a0d\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 22 00:42:11.571957 containerd[1953]: time="2026-01-22T00:42:11.571112013Z" level=info msg="Container 42d3dd3fbc1f7f9cbdff943e7d2512916b96d2b2521e5042f612f332aed1d159: CDI devices from CRI Config.CDIDevices: []" Jan 22 00:42:11.601359 containerd[1953]: time="2026-01-22T00:42:11.601299883Z" level=info msg="CreateContainer within sandbox \"875240bf49ef63fdd591d88e8d76ea3e866f5eb13f74a58f0e412939e3c57a0d\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"42d3dd3fbc1f7f9cbdff943e7d2512916b96d2b2521e5042f612f332aed1d159\"" Jan 22 00:42:11.603270 containerd[1953]: time="2026-01-22T00:42:11.603157016Z" level=info msg="StartContainer for \"42d3dd3fbc1f7f9cbdff943e7d2512916b96d2b2521e5042f612f332aed1d159\"" Jan 22 00:42:11.605995 containerd[1953]: time="2026-01-22T00:42:11.605953214Z" level=info msg="connecting to shim 42d3dd3fbc1f7f9cbdff943e7d2512916b96d2b2521e5042f612f332aed1d159" address="unix:///run/containerd/s/a00ebad3d3a09d34453ffce5b9e213ba34ae3368d78ca05b35b6175ee01934e7" protocol=ttrpc version=3 Jan 22 00:42:11.633151 systemd[1]: Started cri-containerd-42d3dd3fbc1f7f9cbdff943e7d2512916b96d2b2521e5042f612f332aed1d159.scope - libcontainer container 42d3dd3fbc1f7f9cbdff943e7d2512916b96d2b2521e5042f612f332aed1d159. 
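
Every SYSCALL record in this stretch reports arch=c000003e (x86_64), and only two syscall numbers appear: 321, which is bpf(2), and 3, which is close(2) — consistent with runc loading BPF programs for each new container and then closing the returned descriptors, matching the BPF prog-id LOAD/UNLOAD events around them. A quick tally sketch (Python; tally_syscalls is an illustrative helper, and journal_lines is assumed to be an iterable of journal text lines):

    import re
    from collections import Counter

    # x86_64 syscall numbers seen in the audit records above.
    SYSCALL_NAMES = {3: "close", 321: "bpf"}

    def tally_syscalls(journal_lines) -> Counter:
        """Count audit SYSCALL records by name for the x86_64 numbers mapped above."""
        counts = Counter()
        for line in journal_lines:
            for num in re.findall(r"SYSCALL arch=c000003e syscall=(\d+)", line):
                counts[SYSCALL_NAMES.get(int(num), f"syscall {num}")] += 1
        return counts
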
Jan 22 00:42:11.687000 audit: BPF prog-id=147 op=LOAD Jan 22 00:42:11.687000 audit[3528]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3490 pid=3528 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:11.687000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3432643364643366626331663766396362646666393433653764323531 Jan 22 00:42:11.687000 audit: BPF prog-id=148 op=LOAD Jan 22 00:42:11.687000 audit[3528]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=3490 pid=3528 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:11.687000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3432643364643366626331663766396362646666393433653764323531 Jan 22 00:42:11.687000 audit: BPF prog-id=148 op=UNLOAD Jan 22 00:42:11.687000 audit[3528]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3490 pid=3528 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:11.687000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3432643364643366626331663766396362646666393433653764323531 Jan 22 00:42:11.687000 audit: BPF prog-id=147 op=UNLOAD Jan 22 00:42:11.687000 audit[3528]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3490 pid=3528 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:11.687000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3432643364643366626331663766396362646666393433653764323531 Jan 22 00:42:11.687000 audit: BPF prog-id=149 op=LOAD Jan 22 00:42:11.687000 audit[3528]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=3490 pid=3528 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:11.687000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3432643364643366626331663766396362646666393433653764323531 Jan 22 00:42:11.701523 containerd[1953]: time="2026-01-22T00:42:11.701483234Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:tigera-operator-7dcd859c48-hqbxz,Uid:f80908c2-7ff5-4ad9-99fe-716478a0dee4,Namespace:tigera-operator,Attempt:0,}" Jan 22 00:42:11.717513 containerd[1953]: time="2026-01-22T00:42:11.717432177Z" level=info msg="StartContainer for \"42d3dd3fbc1f7f9cbdff943e7d2512916b96d2b2521e5042f612f332aed1d159\" returns successfully" Jan 22 00:42:11.732342 containerd[1953]: time="2026-01-22T00:42:11.732292821Z" level=info msg="connecting to shim a32d2d682a86a48e0db1be1542c3fc6e8de6bd80b7aed3196f563597a3015698" address="unix:///run/containerd/s/8809c807fe746b4205c03dbb7af082b0006adf3a13be052c5fec38691461e7ab" namespace=k8s.io protocol=ttrpc version=3 Jan 22 00:42:11.762168 systemd[1]: Started cri-containerd-a32d2d682a86a48e0db1be1542c3fc6e8de6bd80b7aed3196f563597a3015698.scope - libcontainer container a32d2d682a86a48e0db1be1542c3fc6e8de6bd80b7aed3196f563597a3015698. Jan 22 00:42:11.774000 audit: BPF prog-id=150 op=LOAD Jan 22 00:42:11.775000 audit: BPF prog-id=151 op=LOAD Jan 22 00:42:11.775000 audit[3575]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178238 a2=98 a3=0 items=0 ppid=3563 pid=3575 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:11.775000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6133326432643638326138366134386530646231626531353432633366 Jan 22 00:42:11.775000 audit: BPF prog-id=151 op=UNLOAD Jan 22 00:42:11.775000 audit[3575]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3563 pid=3575 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:11.775000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6133326432643638326138366134386530646231626531353432633366 Jan 22 00:42:11.775000 audit: BPF prog-id=152 op=LOAD Jan 22 00:42:11.775000 audit[3575]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178488 a2=98 a3=0 items=0 ppid=3563 pid=3575 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:11.775000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6133326432643638326138366134386530646231626531353432633366 Jan 22 00:42:11.775000 audit: BPF prog-id=153 op=LOAD Jan 22 00:42:11.775000 audit[3575]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000178218 a2=98 a3=0 items=0 ppid=3563 pid=3575 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:11.775000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6133326432643638326138366134386530646231626531353432633366 Jan 22 00:42:11.775000 audit: BPF prog-id=153 op=UNLOAD Jan 22 00:42:11.775000 audit[3575]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3563 pid=3575 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:11.775000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6133326432643638326138366134386530646231626531353432633366 Jan 22 00:42:11.775000 audit: BPF prog-id=152 op=UNLOAD Jan 22 00:42:11.775000 audit[3575]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3563 pid=3575 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:11.775000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6133326432643638326138366134386530646231626531353432633366 Jan 22 00:42:11.775000 audit: BPF prog-id=154 op=LOAD Jan 22 00:42:11.775000 audit[3575]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001786e8 a2=98 a3=0 items=0 ppid=3563 pid=3575 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:11.775000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6133326432643638326138366134386530646231626531353432633366 Jan 22 00:42:11.822071 containerd[1953]: time="2026-01-22T00:42:11.821915001Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-hqbxz,Uid:f80908c2-7ff5-4ad9-99fe-716478a0dee4,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"a32d2d682a86a48e0db1be1542c3fc6e8de6bd80b7aed3196f563597a3015698\"" Jan 22 00:42:11.824823 containerd[1953]: time="2026-01-22T00:42:11.824425539Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Jan 22 00:42:12.122396 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1724138722.mount: Deactivated successfully. 
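
Taken together, the containerd messages in this stretch show the expected lifecycle for both pods: RunPodSandbox returns a 64-hex sandbox id, CreateContainer within that sandbox returns a container id, and StartContainer later reports success. A sketch that collects those ids from the journal (Python; trace_lifecycle is an illustrative helper, again assuming one journal entry per line, with containerd's inner quotes escaped exactly as they appear above):

    import re

    HEX_ID = r"[0-9a-f]{64}"
    EVENTS = {
        "sandbox-created": re.compile(rf"returns sandbox id \W*({HEX_ID})"),
        "container-created": re.compile(rf"returns container id \W*({HEX_ID})"),
        "container-started": re.compile(rf"StartContainer for \W*({HEX_ID})\W*returns successfully"),
    }

    def trace_lifecycle(journal_lines):
        """Yield (event, id) pairs for the containerd sandbox/container messages above."""
        for line in journal_lines:
            for event, pattern in EVENTS.items():
                for cid in pattern.findall(line):
                    yield event, cid
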
Jan 22 00:42:12.205663 kubelet[3437]: I0122 00:42:12.205590 3437 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-84dk9" podStartSLOduration=2.205567919 podStartE2EDuration="2.205567919s" podCreationTimestamp="2026-01-22 00:42:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 00:42:12.092733055 +0000 UTC m=+6.324699209" watchObservedRunningTime="2026-01-22 00:42:12.205567919 +0000 UTC m=+6.437534068" Jan 22 00:42:12.271000 audit[3636]: NETFILTER_CFG table=mangle:54 family=2 entries=1 op=nft_register_chain pid=3636 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:42:12.271000 audit[3636]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffcd22a54a0 a2=0 a3=7ffcd22a548c items=0 ppid=3540 pid=3636 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:12.271000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 22 00:42:12.273000 audit[3638]: NETFILTER_CFG table=nat:55 family=2 entries=1 op=nft_register_chain pid=3638 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:42:12.273000 audit[3638]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff00558ab0 a2=0 a3=7fff00558a9c items=0 ppid=3540 pid=3638 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:12.273000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 22 00:42:12.274000 audit[3639]: NETFILTER_CFG table=mangle:56 family=10 entries=1 op=nft_register_chain pid=3639 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 00:42:12.274000 audit[3639]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc3d5e49d0 a2=0 a3=7ffc3d5e49bc items=0 ppid=3540 pid=3639 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:12.274000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 22 00:42:12.275000 audit[3640]: NETFILTER_CFG table=filter:57 family=2 entries=1 op=nft_register_chain pid=3640 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:42:12.275000 audit[3640]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc0dd568a0 a2=0 a3=7ffc0dd5688c items=0 ppid=3540 pid=3640 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:12.275000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 22 00:42:12.276000 audit[3641]: NETFILTER_CFG table=nat:58 family=10 entries=1 op=nft_register_chain pid=3641 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 00:42:12.276000 audit[3641]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffe2c91460 a2=0 
a3=7fffe2c9144c items=0 ppid=3540 pid=3641 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:12.276000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 22 00:42:12.279000 audit[3642]: NETFILTER_CFG table=filter:59 family=10 entries=1 op=nft_register_chain pid=3642 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 00:42:12.279000 audit[3642]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff8093e140 a2=0 a3=7fff8093e12c items=0 ppid=3540 pid=3642 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:12.279000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 22 00:42:12.388000 audit[3643]: NETFILTER_CFG table=filter:60 family=2 entries=1 op=nft_register_chain pid=3643 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:42:12.388000 audit[3643]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffdb96c6420 a2=0 a3=7ffdb96c640c items=0 ppid=3540 pid=3643 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:12.388000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 22 00:42:12.393000 audit[3645]: NETFILTER_CFG table=filter:61 family=2 entries=1 op=nft_register_rule pid=3645 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:42:12.393000 audit[3645]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffedb508cf0 a2=0 a3=7ffedb508cdc items=0 ppid=3540 pid=3645 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:12.393000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Jan 22 00:42:12.398000 audit[3648]: NETFILTER_CFG table=filter:62 family=2 entries=1 op=nft_register_rule pid=3648 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:42:12.398000 audit[3648]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffd3c0959c0 a2=0 a3=7ffd3c0959ac items=0 ppid=3540 pid=3648 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:12.398000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Jan 22 00:42:12.399000 audit[3649]: NETFILTER_CFG table=filter:63 family=2 entries=1 op=nft_register_chain pid=3649 
subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:42:12.399000 audit[3649]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcf584fd60 a2=0 a3=7ffcf584fd4c items=0 ppid=3540 pid=3649 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:12.399000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 22 00:42:12.402000 audit[3651]: NETFILTER_CFG table=filter:64 family=2 entries=1 op=nft_register_rule pid=3651 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:42:12.402000 audit[3651]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffdc2481e50 a2=0 a3=7ffdc2481e3c items=0 ppid=3540 pid=3651 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:12.402000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 22 00:42:12.404000 audit[3652]: NETFILTER_CFG table=filter:65 family=2 entries=1 op=nft_register_chain pid=3652 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:42:12.404000 audit[3652]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcdacf9010 a2=0 a3=7ffcdacf8ffc items=0 ppid=3540 pid=3652 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:12.404000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jan 22 00:42:12.407000 audit[3654]: NETFILTER_CFG table=filter:66 family=2 entries=1 op=nft_register_rule pid=3654 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:42:12.407000 audit[3654]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffcb07c2310 a2=0 a3=7ffcb07c22fc items=0 ppid=3540 pid=3654 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:12.407000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jan 22 00:42:12.411000 audit[3657]: NETFILTER_CFG table=filter:67 family=2 entries=1 op=nft_register_rule pid=3657 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:42:12.411000 audit[3657]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7fffe251bd30 a2=0 a3=7fffe251bd1c items=0 ppid=3540 pid=3657 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:12.411000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Jan 22 00:42:12.413000 audit[3658]: NETFILTER_CFG table=filter:68 family=2 entries=1 op=nft_register_chain pid=3658 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:42:12.413000 audit[3658]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe4b9780c0 a2=0 a3=7ffe4b9780ac items=0 ppid=3540 pid=3658 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:12.413000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jan 22 00:42:12.417000 audit[3660]: NETFILTER_CFG table=filter:69 family=2 entries=1 op=nft_register_rule pid=3660 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:42:12.417000 audit[3660]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffe9b9ae9c0 a2=0 a3=7ffe9b9ae9ac items=0 ppid=3540 pid=3660 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:12.417000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 22 00:42:12.420000 audit[3661]: NETFILTER_CFG table=filter:70 family=2 entries=1 op=nft_register_chain pid=3661 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:42:12.420000 audit[3661]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fffa91bb9c0 a2=0 a3=7fffa91bb9ac items=0 ppid=3540 pid=3661 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:12.420000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 22 00:42:12.424000 audit[3663]: NETFILTER_CFG table=filter:71 family=2 entries=1 op=nft_register_rule pid=3663 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:42:12.424000 audit[3663]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe59fa7c10 a2=0 a3=7ffe59fa7bfc items=0 ppid=3540 pid=3663 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:12.424000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 22 00:42:12.430000 audit[3666]: NETFILTER_CFG table=filter:72 family=2 entries=1 op=nft_register_rule pid=3666 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:42:12.430000 audit[3666]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffdd67bd2f0 a2=0 a3=7ffdd67bd2dc items=0 ppid=3540 
pid=3666 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:12.430000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 22 00:42:12.435000 audit[3669]: NETFILTER_CFG table=filter:73 family=2 entries=1 op=nft_register_rule pid=3669 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:42:12.435000 audit[3669]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fffa09aa450 a2=0 a3=7fffa09aa43c items=0 ppid=3540 pid=3669 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:12.435000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jan 22 00:42:12.437000 audit[3670]: NETFILTER_CFG table=nat:74 family=2 entries=1 op=nft_register_chain pid=3670 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:42:12.437000 audit[3670]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffcdaa94110 a2=0 a3=7ffcdaa940fc items=0 ppid=3540 pid=3670 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:12.437000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jan 22 00:42:12.441000 audit[3672]: NETFILTER_CFG table=nat:75 family=2 entries=1 op=nft_register_rule pid=3672 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:42:12.441000 audit[3672]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffd803d17b0 a2=0 a3=7ffd803d179c items=0 ppid=3540 pid=3672 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:12.441000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 22 00:42:12.446000 audit[3675]: NETFILTER_CFG table=nat:76 family=2 entries=1 op=nft_register_rule pid=3675 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:42:12.446000 audit[3675]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffcc1804a70 a2=0 a3=7ffcc1804a5c items=0 ppid=3540 pid=3675 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:12.446000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 22 00:42:12.448000 audit[3676]: NETFILTER_CFG table=nat:77 family=2 entries=1 op=nft_register_chain pid=3676 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:42:12.448000 audit[3676]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe868efcf0 a2=0 a3=7ffe868efcdc items=0 ppid=3540 pid=3676 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:12.448000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 22 00:42:12.451000 audit[3678]: NETFILTER_CFG table=nat:78 family=2 entries=1 op=nft_register_rule pid=3678 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:42:12.451000 audit[3678]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7ffdda83eab0 a2=0 a3=7ffdda83ea9c items=0 ppid=3540 pid=3678 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:12.451000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 22 00:42:12.480000 audit[3684]: NETFILTER_CFG table=filter:79 family=2 entries=8 op=nft_register_rule pid=3684 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 00:42:12.480000 audit[3684]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffee4c69dc0 a2=0 a3=7ffee4c69dac items=0 ppid=3540 pid=3684 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:12.480000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 00:42:12.488000 audit[3684]: NETFILTER_CFG table=nat:80 family=2 entries=14 op=nft_register_chain pid=3684 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 00:42:12.488000 audit[3684]: SYSCALL arch=c000003e syscall=46 success=yes exit=5508 a0=3 a1=7ffee4c69dc0 a2=0 a3=7ffee4c69dac items=0 ppid=3540 pid=3684 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:12.488000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 00:42:12.490000 audit[3689]: NETFILTER_CFG table=filter:81 family=10 entries=1 op=nft_register_chain pid=3689 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 00:42:12.490000 audit[3689]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffca87f7730 a2=0 a3=7ffca87f771c items=0 ppid=3540 pid=3689 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:12.490000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 22 00:42:12.493000 audit[3691]: NETFILTER_CFG table=filter:82 family=10 entries=2 op=nft_register_chain pid=3691 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 00:42:12.493000 audit[3691]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7fffb51de9c0 a2=0 a3=7fffb51de9ac items=0 ppid=3540 pid=3691 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:12.493000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Jan 22 00:42:12.499000 audit[3694]: NETFILTER_CFG table=filter:83 family=10 entries=1 op=nft_register_rule pid=3694 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 00:42:12.499000 audit[3694]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffe2dc41530 a2=0 a3=7ffe2dc4151c items=0 ppid=3540 pid=3694 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:12.499000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Jan 22 00:42:12.501000 audit[3695]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=3695 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 00:42:12.501000 audit[3695]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdc6035370 a2=0 a3=7ffdc603535c items=0 ppid=3540 pid=3695 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:12.501000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 22 00:42:12.504000 audit[3697]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=3697 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 00:42:12.504000 audit[3697]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffeac58ae50 a2=0 a3=7ffeac58ae3c items=0 ppid=3540 pid=3697 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:12.504000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 22 00:42:12.505000 audit[3698]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_chain pid=3698 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 
00:42:12.505000 audit[3698]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe61865d80 a2=0 a3=7ffe61865d6c items=0 ppid=3540 pid=3698 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:12.505000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jan 22 00:42:12.508000 audit[3700]: NETFILTER_CFG table=filter:87 family=10 entries=1 op=nft_register_rule pid=3700 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 00:42:12.508000 audit[3700]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffff6a787a0 a2=0 a3=7ffff6a7878c items=0 ppid=3540 pid=3700 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:12.508000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Jan 22 00:42:12.513000 audit[3703]: NETFILTER_CFG table=filter:88 family=10 entries=2 op=nft_register_chain pid=3703 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 00:42:12.513000 audit[3703]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7ffd93d40f90 a2=0 a3=7ffd93d40f7c items=0 ppid=3540 pid=3703 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:12.513000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jan 22 00:42:12.514000 audit[3704]: NETFILTER_CFG table=filter:89 family=10 entries=1 op=nft_register_chain pid=3704 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 00:42:12.514000 audit[3704]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc8a204910 a2=0 a3=7ffc8a2048fc items=0 ppid=3540 pid=3704 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:12.514000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jan 22 00:42:12.517000 audit[3706]: NETFILTER_CFG table=filter:90 family=10 entries=1 op=nft_register_rule pid=3706 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 00:42:12.517000 audit[3706]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffc413f7b40 a2=0 a3=7ffc413f7b2c items=0 ppid=3540 pid=3706 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:12.517000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 22 00:42:12.518000 audit[3707]: NETFILTER_CFG table=filter:91 family=10 entries=1 op=nft_register_chain pid=3707 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 00:42:12.518000 audit[3707]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffeeb6bce40 a2=0 a3=7ffeeb6bce2c items=0 ppid=3540 pid=3707 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:12.518000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 22 00:42:12.521000 audit[3709]: NETFILTER_CFG table=filter:92 family=10 entries=1 op=nft_register_rule pid=3709 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 00:42:12.521000 audit[3709]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff5142c640 a2=0 a3=7fff5142c62c items=0 ppid=3540 pid=3709 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:12.521000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 22 00:42:12.525000 audit[3712]: NETFILTER_CFG table=filter:93 family=10 entries=1 op=nft_register_rule pid=3712 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 00:42:12.525000 audit[3712]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe6ab53060 a2=0 a3=7ffe6ab5304c items=0 ppid=3540 pid=3712 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:12.525000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jan 22 00:42:12.529000 audit[3715]: NETFILTER_CFG table=filter:94 family=10 entries=1 op=nft_register_rule pid=3715 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 00:42:12.529000 audit[3715]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff7f5ed400 a2=0 a3=7fff7f5ed3ec items=0 ppid=3540 pid=3715 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:12.529000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Jan 22 00:42:12.531000 audit[3716]: NETFILTER_CFG table=nat:95 family=10 entries=1 op=nft_register_chain pid=3716 subj=system_u:system_r:kernel_t:s0 
comm="ip6tables" Jan 22 00:42:12.531000 audit[3716]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffd2c2f9c50 a2=0 a3=7ffd2c2f9c3c items=0 ppid=3540 pid=3716 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:12.531000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jan 22 00:42:12.533000 audit[3718]: NETFILTER_CFG table=nat:96 family=10 entries=1 op=nft_register_rule pid=3718 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 00:42:12.533000 audit[3718]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffffd1ea840 a2=0 a3=7ffffd1ea82c items=0 ppid=3540 pid=3718 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:12.533000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 22 00:42:12.538000 audit[3721]: NETFILTER_CFG table=nat:97 family=10 entries=1 op=nft_register_rule pid=3721 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 00:42:12.538000 audit[3721]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffe179aaec0 a2=0 a3=7ffe179aaeac items=0 ppid=3540 pid=3721 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:12.538000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 22 00:42:12.539000 audit[3722]: NETFILTER_CFG table=nat:98 family=10 entries=1 op=nft_register_chain pid=3722 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 00:42:12.539000 audit[3722]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc304224d0 a2=0 a3=7ffc304224bc items=0 ppid=3540 pid=3722 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:12.539000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 22 00:42:12.542000 audit[3724]: NETFILTER_CFG table=nat:99 family=10 entries=2 op=nft_register_chain pid=3724 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 00:42:12.542000 audit[3724]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffeeee1df80 a2=0 a3=7ffeeee1df6c items=0 ppid=3540 pid=3724 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:12.542000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 22 00:42:12.543000 audit[3725]: NETFILTER_CFG table=filter:100 family=10 entries=1 op=nft_register_chain pid=3725 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 00:42:12.543000 audit[3725]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe081e63b0 a2=0 a3=7ffe081e639c items=0 ppid=3540 pid=3725 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:12.543000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 22 00:42:12.546000 audit[3727]: NETFILTER_CFG table=filter:101 family=10 entries=1 op=nft_register_rule pid=3727 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 00:42:12.546000 audit[3727]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffe619010e0 a2=0 a3=7ffe619010cc items=0 ppid=3540 pid=3727 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:12.546000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 22 00:42:12.550000 audit[3730]: NETFILTER_CFG table=filter:102 family=10 entries=1 op=nft_register_rule pid=3730 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 00:42:12.550000 audit[3730]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffe8c1bb270 a2=0 a3=7ffe8c1bb25c items=0 ppid=3540 pid=3730 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:12.550000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 22 00:42:12.554000 audit[3732]: NETFILTER_CFG table=filter:103 family=10 entries=3 op=nft_register_rule pid=3732 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 22 00:42:12.554000 audit[3732]: SYSCALL arch=c000003e syscall=46 success=yes exit=2088 a0=3 a1=7ffeb114e940 a2=0 a3=7ffeb114e92c items=0 ppid=3540 pid=3732 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:12.554000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 00:42:12.555000 audit[3732]: NETFILTER_CFG table=nat:104 family=10 entries=7 op=nft_register_chain pid=3732 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 22 00:42:12.555000 audit[3732]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7ffeb114e940 a2=0 a3=7ffeb114e92c items=0 ppid=3540 pid=3732 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 
00:42:12.555000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 00:42:13.022863 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4158438972.mount: Deactivated successfully. Jan 22 00:42:13.835896 containerd[1953]: time="2026-01-22T00:42:13.835837377Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 22 00:42:13.837765 containerd[1953]: time="2026-01-22T00:42:13.837719387Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=23558205" Jan 22 00:42:13.840008 containerd[1953]: time="2026-01-22T00:42:13.839959509Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 22 00:42:13.843316 containerd[1953]: time="2026-01-22T00:42:13.843225413Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 22 00:42:13.844337 containerd[1953]: time="2026-01-22T00:42:13.843806833Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 2.018763155s" Jan 22 00:42:13.844337 containerd[1953]: time="2026-01-22T00:42:13.843833585Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Jan 22 00:42:13.847054 containerd[1953]: time="2026-01-22T00:42:13.847020764Z" level=info msg="CreateContainer within sandbox \"a32d2d682a86a48e0db1be1542c3fc6e8de6bd80b7aed3196f563597a3015698\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 22 00:42:13.864061 containerd[1953]: time="2026-01-22T00:42:13.864016272Z" level=info msg="Container 35863310c3d1b3280aa7d0f21f7ae2d6c2332def2f6ab31ee42d65f0c5a4d1d9: CDI devices from CRI Config.CDIDevices: []" Jan 22 00:42:13.888569 containerd[1953]: time="2026-01-22T00:42:13.888523587Z" level=info msg="CreateContainer within sandbox \"a32d2d682a86a48e0db1be1542c3fc6e8de6bd80b7aed3196f563597a3015698\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"35863310c3d1b3280aa7d0f21f7ae2d6c2332def2f6ab31ee42d65f0c5a4d1d9\"" Jan 22 00:42:13.889900 containerd[1953]: time="2026-01-22T00:42:13.889265101Z" level=info msg="StartContainer for \"35863310c3d1b3280aa7d0f21f7ae2d6c2332def2f6ab31ee42d65f0c5a4d1d9\"" Jan 22 00:42:13.890818 containerd[1953]: time="2026-01-22T00:42:13.890785081Z" level=info msg="connecting to shim 35863310c3d1b3280aa7d0f21f7ae2d6c2332def2f6ab31ee42d65f0c5a4d1d9" address="unix:///run/containerd/s/8809c807fe746b4205c03dbb7af082b0006adf3a13be052c5fec38691461e7ab" protocol=ttrpc version=3 Jan 22 00:42:13.922246 systemd[1]: Started cri-containerd-35863310c3d1b3280aa7d0f21f7ae2d6c2332def2f6ab31ee42d65f0c5a4d1d9.scope - libcontainer container 35863310c3d1b3280aa7d0f21f7ae2d6c2332def2f6ab31ee42d65f0c5a4d1d9. 
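The "Pulled image ... in 2.018763155s" record above can be cross-checked against the timestamps of the PullImage record earlier in the log and the Pulled record here. A small sketch using only Python's standard library (the helper name parse_containerd_ts is illustrative); the difference between the two log-line timestamps comes out at roughly 2.019 s, about a millisecond above containerd's internally measured 2.018763155 s, which is expected since the log lines are emitted slightly after the measured instants.

    from datetime import datetime, timezone

    def parse_containerd_ts(ts: str):
        # Split "2026-01-22T00:42:11.824425539Z" into a whole-second datetime and a nanosecond remainder.
        base, frac = ts.rstrip("Z").split(".")
        dt = datetime.strptime(base, "%Y-%m-%dT%H:%M:%S").replace(tzinfo=timezone.utc)
        return dt, int(frac.ljust(9, "0"))

    start = parse_containerd_ts("2026-01-22T00:42:11.824425539Z")  # PullImage record
    done  = parse_containerd_ts("2026-01-22T00:42:13.843806833Z")  # Pulled record
    elapsed = (done[0] - start[0]).total_seconds() + (done[1] - start[1]) / 1e9
    print(elapsed)  # ~2.019381294 seconds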
Jan 22 00:42:13.933000 audit: BPF prog-id=155 op=LOAD Jan 22 00:42:13.934000 audit: BPF prog-id=156 op=LOAD Jan 22 00:42:13.934000 audit[3741]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=3563 pid=3741 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:13.934000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3335383633333130633364316233323830616137643066323166376165 Jan 22 00:42:13.934000 audit: BPF prog-id=156 op=UNLOAD Jan 22 00:42:13.934000 audit[3741]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3563 pid=3741 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:13.934000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3335383633333130633364316233323830616137643066323166376165 Jan 22 00:42:13.934000 audit: BPF prog-id=157 op=LOAD Jan 22 00:42:13.934000 audit[3741]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3563 pid=3741 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:13.934000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3335383633333130633364316233323830616137643066323166376165 Jan 22 00:42:13.934000 audit: BPF prog-id=158 op=LOAD Jan 22 00:42:13.934000 audit[3741]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=3563 pid=3741 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:13.934000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3335383633333130633364316233323830616137643066323166376165 Jan 22 00:42:13.934000 audit: BPF prog-id=158 op=UNLOAD Jan 22 00:42:13.934000 audit[3741]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3563 pid=3741 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:13.934000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3335383633333130633364316233323830616137643066323166376165 Jan 22 00:42:13.934000 audit: BPF prog-id=157 op=UNLOAD Jan 22 00:42:13.934000 audit[3741]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3563 pid=3741 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:13.934000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3335383633333130633364316233323830616137643066323166376165 Jan 22 00:42:13.934000 audit: BPF prog-id=159 op=LOAD Jan 22 00:42:13.934000 audit[3741]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=3563 pid=3741 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:13.934000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3335383633333130633364316233323830616137643066323166376165 Jan 22 00:42:13.955957 containerd[1953]: time="2026-01-22T00:42:13.955906193Z" level=info msg="StartContainer for \"35863310c3d1b3280aa7d0f21f7ae2d6c2332def2f6ab31ee42d65f0c5a4d1d9\" returns successfully" Jan 22 00:42:14.098222 kubelet[3437]: I0122 00:42:14.097092 3437 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-hqbxz" podStartSLOduration=1.076008192 podStartE2EDuration="3.097075756s" podCreationTimestamp="2026-01-22 00:42:11 +0000 UTC" firstStartedPulling="2026-01-22 00:42:11.824066419 +0000 UTC m=+6.056032546" lastFinishedPulling="2026-01-22 00:42:13.845133972 +0000 UTC m=+8.077100110" observedRunningTime="2026-01-22 00:42:14.096711896 +0000 UTC m=+8.328678034" watchObservedRunningTime="2026-01-22 00:42:14.097075756 +0000 UTC m=+8.329041903" Jan 22 00:42:20.787739 sudo[2379]: pam_unix(sudo:session): session closed for user root Jan 22 00:42:20.794546 kernel: kauditd_printk_skb: 224 callbacks suppressed Jan 22 00:42:20.794671 kernel: audit: type=1106 audit(1769042540.787:518): pid=2379 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 22 00:42:20.787000 audit[2379]: USER_END pid=2379 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 22 00:42:20.787000 audit[2379]: CRED_DISP pid=2379 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 22 00:42:20.805979 kernel: audit: type=1104 audit(1769042540.787:519): pid=2379 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jan 22 00:42:20.870104 sshd[2378]: Connection closed by 68.220.241.50 port 56882 Jan 22 00:42:20.872344 sshd-session[2375]: pam_unix(sshd:session): session closed for user core Jan 22 00:42:20.873000 audit[2375]: USER_END pid=2375 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:42:20.882906 kernel: audit: type=1106 audit(1769042540.873:520): pid=2375 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:42:20.885538 systemd[1]: sshd@8-172.31.26.54:22-68.220.241.50:56882.service: Deactivated successfully. Jan 22 00:42:20.891529 systemd[1]: session-9.scope: Deactivated successfully. Jan 22 00:42:20.873000 audit[2375]: CRED_DISP pid=2375 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:42:20.898901 kernel: audit: type=1104 audit(1769042540.873:521): pid=2375 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:42:20.900084 systemd[1]: session-9.scope: Consumed 5.167s CPU time, 151.1M memory peak. Jan 22 00:42:20.884000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-172.31.26.54:22-68.220.241.50:56882 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:42:20.912461 kernel: audit: type=1131 audit(1769042540.884:522): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-172.31.26.54:22-68.220.241.50:56882 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:42:20.915541 systemd-logind[1939]: Session 9 logged out. Waiting for processes to exit. Jan 22 00:42:20.918367 systemd-logind[1939]: Removed session 9. 
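The pod_startup_latency_tracker record above for tigera-operator-7dcd859c48-hqbxz reports podStartSLOduration=1.076008192 next to podStartE2EDuration="3.097075756s" and the two pulling timestamps. The arithmetic below, with values copied from that record (using the monotonic m=+ offsets for the pull window), is consistent with the SLO duration being the end-to-end duration minus the image-pull time; this is a reading of the numbers in this record, not a claim about the kubelet implementation.

    # Values copied from the "Observed pod startup duration" record above.
    e2e  = 3.097075756                    # podStartE2EDuration: watchObservedRunningTime - podCreationTimestamp
    pull = 8.077100110 - 6.056032546      # lastFinishedPulling - firstStartedPulling (m=+ monotonic offsets)
    print(round(e2e - pull, 9))           # -> 1.076008192, the reported podStartSLOduration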
Jan 22 00:42:22.082000 audit[3823]: NETFILTER_CFG table=filter:105 family=2 entries=15 op=nft_register_rule pid=3823 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 00:42:22.094402 kernel: audit: type=1325 audit(1769042542.082:523): table=filter:105 family=2 entries=15 op=nft_register_rule pid=3823 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 00:42:22.094537 kernel: audit: type=1300 audit(1769042542.082:523): arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffc1fcdd2f0 a2=0 a3=7ffc1fcdd2dc items=0 ppid=3540 pid=3823 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:22.082000 audit[3823]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffc1fcdd2f0 a2=0 a3=7ffc1fcdd2dc items=0 ppid=3540 pid=3823 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:22.082000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 00:42:22.100928 kernel: audit: type=1327 audit(1769042542.082:523): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 00:42:22.095000 audit[3823]: NETFILTER_CFG table=nat:106 family=2 entries=12 op=nft_register_rule pid=3823 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 00:42:22.106904 kernel: audit: type=1325 audit(1769042542.095:524): table=nat:106 family=2 entries=12 op=nft_register_rule pid=3823 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 00:42:22.095000 audit[3823]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc1fcdd2f0 a2=0 a3=0 items=0 ppid=3540 pid=3823 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:22.119930 kernel: audit: type=1300 audit(1769042542.095:524): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc1fcdd2f0 a2=0 a3=0 items=0 ppid=3540 pid=3823 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:22.095000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 00:42:22.193000 audit[3825]: NETFILTER_CFG table=filter:107 family=2 entries=16 op=nft_register_rule pid=3825 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 00:42:22.193000 audit[3825]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffdbaa266e0 a2=0 a3=7ffdbaa266cc items=0 ppid=3540 pid=3825 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:22.193000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 00:42:22.200000 audit[3825]: NETFILTER_CFG table=nat:108 family=2 entries=12 op=nft_register_rule pid=3825 
subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 00:42:22.200000 audit[3825]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffdbaa266e0 a2=0 a3=0 items=0 ppid=3540 pid=3825 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:22.200000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 00:42:24.713000 audit[3828]: NETFILTER_CFG table=filter:109 family=2 entries=17 op=nft_register_rule pid=3828 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 00:42:24.713000 audit[3828]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7fff800ffe50 a2=0 a3=7fff800ffe3c items=0 ppid=3540 pid=3828 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:24.713000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 00:42:24.721000 audit[3828]: NETFILTER_CFG table=nat:110 family=2 entries=12 op=nft_register_rule pid=3828 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 00:42:24.721000 audit[3828]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff800ffe50 a2=0 a3=0 items=0 ppid=3540 pid=3828 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:24.721000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 00:42:24.770000 audit[3830]: NETFILTER_CFG table=filter:111 family=2 entries=18 op=nft_register_rule pid=3830 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 00:42:24.770000 audit[3830]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffcfa0f1fe0 a2=0 a3=7ffcfa0f1fcc items=0 ppid=3540 pid=3830 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:24.770000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 00:42:24.779000 audit[3830]: NETFILTER_CFG table=nat:112 family=2 entries=12 op=nft_register_rule pid=3830 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 00:42:24.779000 audit[3830]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffcfa0f1fe0 a2=0 a3=0 items=0 ppid=3540 pid=3830 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:24.779000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 00:42:25.805935 kernel: kauditd_printk_skb: 19 callbacks suppressed Jan 22 00:42:25.806146 kernel: audit: type=1325 audit(1769042545.799:531): table=filter:113 family=2 entries=19 op=nft_register_rule pid=3832 subj=system_u:system_r:kernel_t:s0 
comm="iptables-restor" Jan 22 00:42:25.799000 audit[3832]: NETFILTER_CFG table=filter:113 family=2 entries=19 op=nft_register_rule pid=3832 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 00:42:25.799000 audit[3832]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffc166c9c70 a2=0 a3=7ffc166c9c5c items=0 ppid=3540 pid=3832 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:25.814045 kernel: audit: type=1300 audit(1769042545.799:531): arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffc166c9c70 a2=0 a3=7ffc166c9c5c items=0 ppid=3540 pid=3832 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:25.799000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 00:42:25.821911 kernel: audit: type=1327 audit(1769042545.799:531): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 00:42:25.821994 kernel: audit: type=1325 audit(1769042545.813:532): table=nat:114 family=2 entries=12 op=nft_register_rule pid=3832 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 00:42:25.813000 audit[3832]: NETFILTER_CFG table=nat:114 family=2 entries=12 op=nft_register_rule pid=3832 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 00:42:25.813000 audit[3832]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc166c9c70 a2=0 a3=0 items=0 ppid=3540 pid=3832 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:25.833310 kernel: audit: type=1300 audit(1769042545.813:532): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc166c9c70 a2=0 a3=0 items=0 ppid=3540 pid=3832 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:25.833423 kernel: audit: type=1327 audit(1769042545.813:532): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 00:42:25.813000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 00:42:26.835000 audit[3834]: NETFILTER_CFG table=filter:115 family=2 entries=21 op=nft_register_rule pid=3834 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 00:42:26.847140 kernel: audit: type=1325 audit(1769042546.835:533): table=filter:115 family=2 entries=21 op=nft_register_rule pid=3834 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 00:42:26.847276 kernel: audit: type=1300 audit(1769042546.835:533): arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffee779b2e0 a2=0 a3=7ffee779b2cc items=0 ppid=3540 pid=3834 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:26.835000 audit[3834]: SYSCALL 
arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffee779b2e0 a2=0 a3=7ffee779b2cc items=0 ppid=3540 pid=3834 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:26.850858 kernel: audit: type=1327 audit(1769042546.835:533): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 00:42:26.835000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 00:42:26.851000 audit[3834]: NETFILTER_CFG table=nat:116 family=2 entries=12 op=nft_register_rule pid=3834 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 00:42:26.855891 kernel: audit: type=1325 audit(1769042546.851:534): table=nat:116 family=2 entries=12 op=nft_register_rule pid=3834 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 00:42:26.851000 audit[3834]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffee779b2e0 a2=0 a3=0 items=0 ppid=3540 pid=3834 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:26.851000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 00:42:26.911980 systemd[1]: Created slice kubepods-besteffort-podedefe39d_6c4c_4a48_94b4_86ccbb267596.slice - libcontainer container kubepods-besteffort-podedefe39d_6c4c_4a48_94b4_86ccbb267596.slice. Jan 22 00:42:27.014615 kubelet[3437]: I0122 00:42:27.014528 3437 status_manager.go:890] "Failed to get status for pod" podUID="744db5de-f986-4b7b-9829-a150f663565e" pod="calico-system/calico-node-7ff9q" err="pods \"calico-node-7ff9q\" is forbidden: User \"system:node:ip-172-31-26-54\" cannot get resource \"pods\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ip-172-31-26-54' and this object" Jan 22 00:42:27.018818 kubelet[3437]: I0122 00:42:27.018281 3437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/edefe39d-6c4c-4a48-94b4-86ccbb267596-tigera-ca-bundle\") pod \"calico-typha-7b9f4b474d-qvm7k\" (UID: \"edefe39d-6c4c-4a48-94b4-86ccbb267596\") " pod="calico-system/calico-typha-7b9f4b474d-qvm7k" Jan 22 00:42:27.018818 kubelet[3437]: I0122 00:42:27.018313 3437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/edefe39d-6c4c-4a48-94b4-86ccbb267596-typha-certs\") pod \"calico-typha-7b9f4b474d-qvm7k\" (UID: \"edefe39d-6c4c-4a48-94b4-86ccbb267596\") " pod="calico-system/calico-typha-7b9f4b474d-qvm7k" Jan 22 00:42:27.018818 kubelet[3437]: I0122 00:42:27.018330 3437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4rsx\" (UniqueName: \"kubernetes.io/projected/edefe39d-6c4c-4a48-94b4-86ccbb267596-kube-api-access-c4rsx\") pod \"calico-typha-7b9f4b474d-qvm7k\" (UID: \"edefe39d-6c4c-4a48-94b4-86ccbb267596\") " pod="calico-system/calico-typha-7b9f4b474d-qvm7k" Jan 22 00:42:27.024763 systemd[1]: Created slice kubepods-besteffort-pod744db5de_f986_4b7b_9829_a150f663565e.slice - libcontainer 
container kubepods-besteffort-pod744db5de_f986_4b7b_9829_a150f663565e.slice. Jan 22 00:42:27.119658 kubelet[3437]: I0122 00:42:27.119321 3437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/744db5de-f986-4b7b-9829-a150f663565e-cni-net-dir\") pod \"calico-node-7ff9q\" (UID: \"744db5de-f986-4b7b-9829-a150f663565e\") " pod="calico-system/calico-node-7ff9q" Jan 22 00:42:27.119658 kubelet[3437]: I0122 00:42:27.119604 3437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/744db5de-f986-4b7b-9829-a150f663565e-policysync\") pod \"calico-node-7ff9q\" (UID: \"744db5de-f986-4b7b-9829-a150f663565e\") " pod="calico-system/calico-node-7ff9q" Jan 22 00:42:27.119858 kubelet[3437]: I0122 00:42:27.119663 3437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wz5jg\" (UniqueName: \"kubernetes.io/projected/744db5de-f986-4b7b-9829-a150f663565e-kube-api-access-wz5jg\") pod \"calico-node-7ff9q\" (UID: \"744db5de-f986-4b7b-9829-a150f663565e\") " pod="calico-system/calico-node-7ff9q" Jan 22 00:42:27.119858 kubelet[3437]: I0122 00:42:27.119714 3437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/744db5de-f986-4b7b-9829-a150f663565e-var-run-calico\") pod \"calico-node-7ff9q\" (UID: \"744db5de-f986-4b7b-9829-a150f663565e\") " pod="calico-system/calico-node-7ff9q" Jan 22 00:42:27.119858 kubelet[3437]: I0122 00:42:27.119748 3437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/744db5de-f986-4b7b-9829-a150f663565e-tigera-ca-bundle\") pod \"calico-node-7ff9q\" (UID: \"744db5de-f986-4b7b-9829-a150f663565e\") " pod="calico-system/calico-node-7ff9q" Jan 22 00:42:27.119858 kubelet[3437]: I0122 00:42:27.119770 3437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/744db5de-f986-4b7b-9829-a150f663565e-var-lib-calico\") pod \"calico-node-7ff9q\" (UID: \"744db5de-f986-4b7b-9829-a150f663565e\") " pod="calico-system/calico-node-7ff9q" Jan 22 00:42:27.119858 kubelet[3437]: I0122 00:42:27.119836 3437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/744db5de-f986-4b7b-9829-a150f663565e-node-certs\") pod \"calico-node-7ff9q\" (UID: \"744db5de-f986-4b7b-9829-a150f663565e\") " pod="calico-system/calico-node-7ff9q" Jan 22 00:42:27.120113 kubelet[3437]: I0122 00:42:27.119910 3437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/744db5de-f986-4b7b-9829-a150f663565e-xtables-lock\") pod \"calico-node-7ff9q\" (UID: \"744db5de-f986-4b7b-9829-a150f663565e\") " pod="calico-system/calico-node-7ff9q" Jan 22 00:42:27.120113 kubelet[3437]: I0122 00:42:27.119945 3437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/744db5de-f986-4b7b-9829-a150f663565e-lib-modules\") pod \"calico-node-7ff9q\" (UID: \"744db5de-f986-4b7b-9829-a150f663565e\") " pod="calico-system/calico-node-7ff9q" Jan 22 
00:42:27.120113 kubelet[3437]: I0122 00:42:27.119997 3437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/744db5de-f986-4b7b-9829-a150f663565e-flexvol-driver-host\") pod \"calico-node-7ff9q\" (UID: \"744db5de-f986-4b7b-9829-a150f663565e\") " pod="calico-system/calico-node-7ff9q" Jan 22 00:42:27.120113 kubelet[3437]: I0122 00:42:27.120069 3437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/744db5de-f986-4b7b-9829-a150f663565e-cni-bin-dir\") pod \"calico-node-7ff9q\" (UID: \"744db5de-f986-4b7b-9829-a150f663565e\") " pod="calico-system/calico-node-7ff9q" Jan 22 00:42:27.120113 kubelet[3437]: I0122 00:42:27.120094 3437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/744db5de-f986-4b7b-9829-a150f663565e-cni-log-dir\") pod \"calico-node-7ff9q\" (UID: \"744db5de-f986-4b7b-9829-a150f663565e\") " pod="calico-system/calico-node-7ff9q" Jan 22 00:42:27.141509 kubelet[3437]: E0122 00:42:27.141262 3437 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qvwlf" podUID="9a122a32-e7c8-4162-bccb-4b71d5c37d97" Jan 22 00:42:27.222926 kubelet[3437]: I0122 00:42:27.222883 3437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9a122a32-e7c8-4162-bccb-4b71d5c37d97-kubelet-dir\") pod \"csi-node-driver-qvwlf\" (UID: \"9a122a32-e7c8-4162-bccb-4b71d5c37d97\") " pod="calico-system/csi-node-driver-qvwlf" Jan 22 00:42:27.223075 kubelet[3437]: I0122 00:42:27.222960 3437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/9a122a32-e7c8-4162-bccb-4b71d5c37d97-registration-dir\") pod \"csi-node-driver-qvwlf\" (UID: \"9a122a32-e7c8-4162-bccb-4b71d5c37d97\") " pod="calico-system/csi-node-driver-qvwlf" Jan 22 00:42:27.223075 kubelet[3437]: I0122 00:42:27.222976 3437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/9a122a32-e7c8-4162-bccb-4b71d5c37d97-varrun\") pod \"csi-node-driver-qvwlf\" (UID: \"9a122a32-e7c8-4162-bccb-4b71d5c37d97\") " pod="calico-system/csi-node-driver-qvwlf" Jan 22 00:42:27.223075 kubelet[3437]: I0122 00:42:27.222991 3437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ng7wz\" (UniqueName: \"kubernetes.io/projected/9a122a32-e7c8-4162-bccb-4b71d5c37d97-kube-api-access-ng7wz\") pod \"csi-node-driver-qvwlf\" (UID: \"9a122a32-e7c8-4162-bccb-4b71d5c37d97\") " pod="calico-system/csi-node-driver-qvwlf" Jan 22 00:42:27.223075 kubelet[3437]: I0122 00:42:27.223055 3437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/9a122a32-e7c8-4162-bccb-4b71d5c37d97-socket-dir\") pod \"csi-node-driver-qvwlf\" (UID: \"9a122a32-e7c8-4162-bccb-4b71d5c37d97\") " pod="calico-system/csi-node-driver-qvwlf" Jan 22 00:42:27.253589 containerd[1953]: 
time="2026-01-22T00:42:27.253545178Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7b9f4b474d-qvm7k,Uid:edefe39d-6c4c-4a48-94b4-86ccbb267596,Namespace:calico-system,Attempt:0,}" Jan 22 00:42:27.295542 containerd[1953]: time="2026-01-22T00:42:27.295459886Z" level=info msg="connecting to shim 02e6467f3cb2a2772e1cf354da13e7e68d4dd3c11249f6c1348f6663a92a1551" address="unix:///run/containerd/s/30d9adfb0082eb86018c0af682b66adf709bcb57710606cc5d727fcfd55f573d" namespace=k8s.io protocol=ttrpc version=3 Jan 22 00:42:27.325143 kubelet[3437]: E0122 00:42:27.325093 3437 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:42:27.325799 kubelet[3437]: W0122 00:42:27.325650 3437 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:42:27.330629 systemd[1]: Started cri-containerd-02e6467f3cb2a2772e1cf354da13e7e68d4dd3c11249f6c1348f6663a92a1551.scope - libcontainer container 02e6467f3cb2a2772e1cf354da13e7e68d4dd3c11249f6c1348f6663a92a1551. Jan 22 00:42:27.331773 kubelet[3437]: E0122 00:42:27.331736 3437 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:42:27.333194 kubelet[3437]: E0122 00:42:27.333167 3437 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:42:27.333194 kubelet[3437]: W0122 00:42:27.333192 3437 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:42:27.333611 kubelet[3437]: E0122 00:42:27.333219 3437 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:42:27.333714 kubelet[3437]: E0122 00:42:27.333609 3437 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:42:27.333714 kubelet[3437]: W0122 00:42:27.333622 3437 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:42:27.333714 kubelet[3437]: E0122 00:42:27.333655 3437 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:42:27.334099 kubelet[3437]: E0122 00:42:27.334081 3437 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:42:27.334233 kubelet[3437]: W0122 00:42:27.334099 3437 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:42:27.334233 kubelet[3437]: E0122 00:42:27.334129 3437 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 22 00:42:27.334496 containerd[1953]: time="2026-01-22T00:42:27.334298992Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-7ff9q,Uid:744db5de-f986-4b7b-9829-a150f663565e,Namespace:calico-system,Attempt:0,}" Jan 22 00:42:27.334613 kubelet[3437]: E0122 00:42:27.334578 3437 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:42:27.334613 kubelet[3437]: W0122 00:42:27.334609 3437 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:42:27.334776 kubelet[3437]: E0122 00:42:27.334642 3437 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:42:27.335276 kubelet[3437]: E0122 00:42:27.335255 3437 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:42:27.335276 kubelet[3437]: W0122 00:42:27.335275 3437 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:42:27.335610 kubelet[3437]: E0122 00:42:27.335392 3437 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:42:27.335610 kubelet[3437]: E0122 00:42:27.335561 3437 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:42:27.335610 kubelet[3437]: W0122 00:42:27.335572 3437 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:42:27.336025 kubelet[3437]: E0122 00:42:27.335993 3437 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 22 00:42:27.336607 kubelet[3437]: E0122 00:42:27.336585 3437 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:42:27.336607 kubelet[3437]: W0122 00:42:27.336604 3437 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:42:27.336956 kubelet[3437]: E0122 00:42:27.336934 3437 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:42:27.336956 kubelet[3437]: W0122 00:42:27.336952 3437 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:42:27.338105 kubelet[3437]: E0122 00:42:27.337943 3437 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:42:27.338105 kubelet[3437]: W0122 00:42:27.337956 3437 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:42:27.338598 kubelet[3437]: E0122 00:42:27.338328 3437 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:42:27.338598 kubelet[3437]: W0122 00:42:27.338339 3437 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:42:27.338598 kubelet[3437]: E0122 00:42:27.337865 3437 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:42:27.338598 kubelet[3437]: E0122 00:42:27.338529 3437 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:42:27.338598 kubelet[3437]: E0122 00:42:27.338549 3437 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:42:27.338757 kubelet[3437]: E0122 00:42:27.338613 3437 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:42:27.339126 kubelet[3437]: E0122 00:42:27.339112 3437 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:42:27.339311 kubelet[3437]: W0122 00:42:27.339128 3437 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:42:27.339311 kubelet[3437]: E0122 00:42:27.339165 3437 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 22 00:42:27.339619 kubelet[3437]: E0122 00:42:27.339578 3437 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:42:27.339619 kubelet[3437]: W0122 00:42:27.339590 3437 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:42:27.339727 kubelet[3437]: E0122 00:42:27.339689 3437 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:42:27.346105 kubelet[3437]: E0122 00:42:27.345942 3437 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:42:27.346105 kubelet[3437]: W0122 00:42:27.345966 3437 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:42:27.346105 kubelet[3437]: E0122 00:42:27.346015 3437 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:42:27.346436 kubelet[3437]: E0122 00:42:27.346208 3437 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:42:27.346436 kubelet[3437]: W0122 00:42:27.346219 3437 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:42:27.346436 kubelet[3437]: E0122 00:42:27.346389 3437 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:42:27.346773 kubelet[3437]: E0122 00:42:27.346509 3437 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:42:27.346773 kubelet[3437]: W0122 00:42:27.346519 3437 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:42:27.346773 kubelet[3437]: E0122 00:42:27.346601 3437 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:42:27.347730 kubelet[3437]: E0122 00:42:27.346925 3437 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:42:27.347730 kubelet[3437]: W0122 00:42:27.346941 3437 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:42:27.347730 kubelet[3437]: E0122 00:42:27.346980 3437 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 22 00:42:27.347730 kubelet[3437]: E0122 00:42:27.347214 3437 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:42:27.347730 kubelet[3437]: W0122 00:42:27.347223 3437 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:42:27.347730 kubelet[3437]: E0122 00:42:27.347249 3437 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:42:27.347730 kubelet[3437]: E0122 00:42:27.347554 3437 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:42:27.347730 kubelet[3437]: W0122 00:42:27.347563 3437 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:42:27.347730 kubelet[3437]: E0122 00:42:27.347588 3437 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:42:27.348306 kubelet[3437]: E0122 00:42:27.347824 3437 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:42:27.348306 kubelet[3437]: W0122 00:42:27.347834 3437 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:42:27.348306 kubelet[3437]: E0122 00:42:27.347847 3437 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:42:27.348544 kubelet[3437]: E0122 00:42:27.348519 3437 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:42:27.348544 kubelet[3437]: W0122 00:42:27.348541 3437 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:42:27.348631 kubelet[3437]: E0122 00:42:27.348561 3437 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:42:27.348852 kubelet[3437]: E0122 00:42:27.348832 3437 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:42:27.348852 kubelet[3437]: W0122 00:42:27.348849 3437 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:42:27.349073 kubelet[3437]: E0122 00:42:27.348989 3437 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 22 00:42:27.349129 kubelet[3437]: E0122 00:42:27.349093 3437 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:42:27.349129 kubelet[3437]: W0122 00:42:27.349101 3437 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:42:27.349129 kubelet[3437]: E0122 00:42:27.349113 3437 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:42:27.349592 kubelet[3437]: E0122 00:42:27.349512 3437 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:42:27.349592 kubelet[3437]: W0122 00:42:27.349564 3437 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:42:27.349592 kubelet[3437]: E0122 00:42:27.349577 3437 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:42:27.350589 kubelet[3437]: E0122 00:42:27.350569 3437 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:42:27.350589 kubelet[3437]: W0122 00:42:27.350588 3437 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:42:27.350712 kubelet[3437]: E0122 00:42:27.350603 3437 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:42:27.364255 kubelet[3437]: E0122 00:42:27.364088 3437 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:42:27.364255 kubelet[3437]: W0122 00:42:27.364112 3437 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:42:27.364255 kubelet[3437]: E0122 00:42:27.364136 3437 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 22 00:42:27.364000 audit: BPF prog-id=160 op=LOAD Jan 22 00:42:27.365000 audit: BPF prog-id=161 op=LOAD Jan 22 00:42:27.365000 audit[3858]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=3848 pid=3858 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:27.365000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3032653634363766336362326132373732653163663335346461313365 Jan 22 00:42:27.366000 audit: BPF prog-id=161 op=UNLOAD Jan 22 00:42:27.366000 audit[3858]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3848 pid=3858 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:27.366000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3032653634363766336362326132373732653163663335346461313365 Jan 22 00:42:27.366000 audit: BPF prog-id=162 op=LOAD Jan 22 00:42:27.366000 audit[3858]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3848 pid=3858 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:27.366000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3032653634363766336362326132373732653163663335346461313365 Jan 22 00:42:27.367000 audit: BPF prog-id=163 op=LOAD Jan 22 00:42:27.367000 audit[3858]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=3848 pid=3858 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:27.367000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3032653634363766336362326132373732653163663335346461313365 Jan 22 00:42:27.367000 audit: BPF prog-id=163 op=UNLOAD Jan 22 00:42:27.367000 audit[3858]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3848 pid=3858 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:27.367000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3032653634363766336362326132373732653163663335346461313365 Jan 22 00:42:27.367000 audit: BPF prog-id=162 op=UNLOAD Jan 22 
00:42:27.367000 audit[3858]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3848 pid=3858 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:27.367000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3032653634363766336362326132373732653163663335346461313365 Jan 22 00:42:27.367000 audit: BPF prog-id=164 op=LOAD Jan 22 00:42:27.367000 audit[3858]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=3848 pid=3858 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:27.367000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3032653634363766336362326132373732653163663335346461313365 Jan 22 00:42:27.389826 containerd[1953]: time="2026-01-22T00:42:27.389766975Z" level=info msg="connecting to shim c63053462aa82ed6e7baab6f09e754d8c97a7b66bb3d773af8c9791216251530" address="unix:///run/containerd/s/7554fe30d6ba6c430b9553a0ac83eda6938ca13e6e12347653f73d846ed412ac" namespace=k8s.io protocol=ttrpc version=3 Jan 22 00:42:27.429191 systemd[1]: Started cri-containerd-c63053462aa82ed6e7baab6f09e754d8c97a7b66bb3d773af8c9791216251530.scope - libcontainer container c63053462aa82ed6e7baab6f09e754d8c97a7b66bb3d773af8c9791216251530. 
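The PROCTITLE field in the audit records above and below carries the audited process's command line, hex-encoded with NUL bytes separating the argv elements. A minimal decoding sketch in plain Python (the decode_proctitle helper is illustrative and not part of the captured journal):

    #!/usr/bin/env python3
    # Decode an audit PROCTITLE value: hex-encoded argv joined by NUL bytes.
    def decode_proctitle(hex_string: str) -> str:
        raw = bytes.fromhex(hex_string)
        return " ".join(part.decode("utf-8", errors="replace")
                        for part in raw.split(b"\x00") if part)

    # The iptables-restore proctitle repeated throughout this section:
    print(decode_proctitle(
        "69707461626C65732D726573746F7265002D770035002D5700313030303030"
        "002D2D6E6F666C757368002D2D636F756E74657273"
    ))
    # -> iptables-restore -w 5 -W 100000 --noflush --counters

Applied to the runc audit records above, the same decoding yields a runc invocation with --root /run/containerd/runc/k8s.io and a --log path under /run/containerd/io.containerd.runtime.v2.task/k8s.io/, with the container ID cut short at 128 bytes, apparently by the audit subsystem's proctitle length limit.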
Jan 22 00:42:27.451591 containerd[1953]: time="2026-01-22T00:42:27.451275636Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7b9f4b474d-qvm7k,Uid:edefe39d-6c4c-4a48-94b4-86ccbb267596,Namespace:calico-system,Attempt:0,} returns sandbox id \"02e6467f3cb2a2772e1cf354da13e7e68d4dd3c11249f6c1348f6663a92a1551\"" Jan 22 00:42:27.454000 audit: BPF prog-id=165 op=LOAD Jan 22 00:42:27.455000 audit: BPF prog-id=166 op=LOAD Jan 22 00:42:27.455000 audit[3924]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0238 a2=98 a3=0 items=0 ppid=3913 pid=3924 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:27.455000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6336333035333436326161383265643665376261616236663039653735 Jan 22 00:42:27.455000 audit: BPF prog-id=166 op=UNLOAD Jan 22 00:42:27.455000 audit[3924]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3913 pid=3924 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:27.455000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6336333035333436326161383265643665376261616236663039653735 Jan 22 00:42:27.455000 audit: BPF prog-id=167 op=LOAD Jan 22 00:42:27.455000 audit[3924]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0488 a2=98 a3=0 items=0 ppid=3913 pid=3924 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:27.455000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6336333035333436326161383265643665376261616236663039653735 Jan 22 00:42:27.455000 audit: BPF prog-id=168 op=LOAD Jan 22 00:42:27.455000 audit[3924]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001b0218 a2=98 a3=0 items=0 ppid=3913 pid=3924 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:27.455000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6336333035333436326161383265643665376261616236663039653735 Jan 22 00:42:27.455000 audit: BPF prog-id=168 op=UNLOAD Jan 22 00:42:27.455000 audit[3924]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3913 pid=3924 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:27.455000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6336333035333436326161383265643665376261616236663039653735 Jan 22 00:42:27.455000 audit: BPF prog-id=167 op=UNLOAD Jan 22 00:42:27.455000 audit[3924]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3913 pid=3924 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:27.455000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6336333035333436326161383265643665376261616236663039653735 Jan 22 00:42:27.455000 audit: BPF prog-id=169 op=LOAD Jan 22 00:42:27.455000 audit[3924]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b06e8 a2=98 a3=0 items=0 ppid=3913 pid=3924 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:27.455000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6336333035333436326161383265643665376261616236663039653735 Jan 22 00:42:27.458580 containerd[1953]: time="2026-01-22T00:42:27.456795175Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Jan 22 00:42:27.485204 containerd[1953]: time="2026-01-22T00:42:27.485130401Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-7ff9q,Uid:744db5de-f986-4b7b-9829-a150f663565e,Namespace:calico-system,Attempt:0,} returns sandbox id \"c63053462aa82ed6e7baab6f09e754d8c97a7b66bb3d773af8c9791216251530\"" Jan 22 00:42:27.863000 audit[3959]: NETFILTER_CFG table=filter:117 family=2 entries=22 op=nft_register_rule pid=3959 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 00:42:27.863000 audit[3959]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffc37f9d960 a2=0 a3=7ffc37f9d94c items=0 ppid=3540 pid=3959 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:27.863000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 00:42:27.873000 audit[3959]: NETFILTER_CFG table=nat:118 family=2 entries=12 op=nft_register_rule pid=3959 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 00:42:27.873000 audit[3959]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc37f9d960 a2=0 a3=0 items=0 ppid=3540 pid=3959 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:27.873000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 00:42:28.906903 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount3237797095.mount: Deactivated successfully. Jan 22 00:42:29.028524 kubelet[3437]: E0122 00:42:29.028433 3437 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qvwlf" podUID="9a122a32-e7c8-4162-bccb-4b71d5c37d97" Jan 22 00:42:30.081781 containerd[1953]: time="2026-01-22T00:42:30.081715084Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 22 00:42:30.083545 containerd[1953]: time="2026-01-22T00:42:30.083466362Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33735893" Jan 22 00:42:30.086070 containerd[1953]: time="2026-01-22T00:42:30.086030641Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 22 00:42:30.090099 containerd[1953]: time="2026-01-22T00:42:30.089950859Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 22 00:42:30.091199 containerd[1953]: time="2026-01-22T00:42:30.091139510Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 2.634306171s" Jan 22 00:42:30.091199 containerd[1953]: time="2026-01-22T00:42:30.091169537Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Jan 22 00:42:30.092705 containerd[1953]: time="2026-01-22T00:42:30.092686476Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Jan 22 00:42:30.121129 containerd[1953]: time="2026-01-22T00:42:30.120996670Z" level=info msg="CreateContainer within sandbox \"02e6467f3cb2a2772e1cf354da13e7e68d4dd3c11249f6c1348f6663a92a1551\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 22 00:42:30.140900 containerd[1953]: time="2026-01-22T00:42:30.139718689Z" level=info msg="Container a297b42a35fc0bc20688f29602cc44f16dc03010c2553ad9bdda4cc3b76e42ea: CDI devices from CRI Config.CDIDevices: []" Jan 22 00:42:30.154221 containerd[1953]: time="2026-01-22T00:42:30.154184049Z" level=info msg="CreateContainer within sandbox \"02e6467f3cb2a2772e1cf354da13e7e68d4dd3c11249f6c1348f6663a92a1551\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"a297b42a35fc0bc20688f29602cc44f16dc03010c2553ad9bdda4cc3b76e42ea\"" Jan 22 00:42:30.156028 containerd[1953]: time="2026-01-22T00:42:30.155980549Z" level=info msg="StartContainer for \"a297b42a35fc0bc20688f29602cc44f16dc03010c2553ad9bdda4cc3b76e42ea\"" Jan 22 00:42:30.157070 containerd[1953]: time="2026-01-22T00:42:30.157035906Z" level=info msg="connecting to shim a297b42a35fc0bc20688f29602cc44f16dc03010c2553ad9bdda4cc3b76e42ea" address="unix:///run/containerd/s/30d9adfb0082eb86018c0af682b66adf709bcb57710606cc5d727fcfd55f573d" 
protocol=ttrpc version=3 Jan 22 00:42:30.200112 systemd[1]: Started cri-containerd-a297b42a35fc0bc20688f29602cc44f16dc03010c2553ad9bdda4cc3b76e42ea.scope - libcontainer container a297b42a35fc0bc20688f29602cc44f16dc03010c2553ad9bdda4cc3b76e42ea. Jan 22 00:42:30.217000 audit: BPF prog-id=170 op=LOAD Jan 22 00:42:30.217000 audit: BPF prog-id=171 op=LOAD Jan 22 00:42:30.217000 audit[3970]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8238 a2=98 a3=0 items=0 ppid=3848 pid=3970 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:30.217000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132393762343261333566633062633230363838663239363032636334 Jan 22 00:42:30.219000 audit: BPF prog-id=171 op=UNLOAD Jan 22 00:42:30.219000 audit[3970]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3848 pid=3970 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:30.219000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132393762343261333566633062633230363838663239363032636334 Jan 22 00:42:30.219000 audit: BPF prog-id=172 op=LOAD Jan 22 00:42:30.219000 audit[3970]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=3848 pid=3970 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:30.219000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132393762343261333566633062633230363838663239363032636334 Jan 22 00:42:30.219000 audit: BPF prog-id=173 op=LOAD Jan 22 00:42:30.219000 audit[3970]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=3848 pid=3970 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:30.219000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132393762343261333566633062633230363838663239363032636334 Jan 22 00:42:30.219000 audit: BPF prog-id=173 op=UNLOAD Jan 22 00:42:30.219000 audit[3970]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3848 pid=3970 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:30.219000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132393762343261333566633062633230363838663239363032636334 Jan 22 00:42:30.219000 audit: BPF prog-id=172 op=UNLOAD Jan 22 00:42:30.219000 audit[3970]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3848 pid=3970 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:30.219000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132393762343261333566633062633230363838663239363032636334 Jan 22 00:42:30.219000 audit: BPF prog-id=174 op=LOAD Jan 22 00:42:30.219000 audit[3970]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 ppid=3848 pid=3970 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:30.219000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132393762343261333566633062633230363838663239363032636334 Jan 22 00:42:30.283939 containerd[1953]: time="2026-01-22T00:42:30.283895168Z" level=info msg="StartContainer for \"a297b42a35fc0bc20688f29602cc44f16dc03010c2553ad9bdda4cc3b76e42ea\" returns successfully" Jan 22 00:42:31.027861 kubelet[3437]: E0122 00:42:31.027772 3437 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qvwlf" podUID="9a122a32-e7c8-4162-bccb-4b71d5c37d97" Jan 22 00:42:31.201329 kubelet[3437]: I0122 00:42:31.201276 3437 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7b9f4b474d-qvm7k" podStartSLOduration=2.562536179 podStartE2EDuration="5.201254047s" podCreationTimestamp="2026-01-22 00:42:26 +0000 UTC" firstStartedPulling="2026-01-22 00:42:27.453837645 +0000 UTC m=+21.685803784" lastFinishedPulling="2026-01-22 00:42:30.092555515 +0000 UTC m=+24.324521652" observedRunningTime="2026-01-22 00:42:31.201183423 +0000 UTC m=+25.433149569" watchObservedRunningTime="2026-01-22 00:42:31.201254047 +0000 UTC m=+25.433220194" Jan 22 00:42:31.229727 kubelet[3437]: E0122 00:42:31.229689 3437 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:42:31.229727 kubelet[3437]: W0122 00:42:31.229723 3437 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:42:31.231977 kubelet[3437]: E0122 00:42:31.231922 3437 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 22 00:42:31.232206 kubelet[3437]: E0122 00:42:31.232182 3437 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:42:31.232206 kubelet[3437]: W0122 00:42:31.232199 3437 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:42:31.232277 kubelet[3437]: E0122 00:42:31.232216 3437 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:42:31.232379 kubelet[3437]: E0122 00:42:31.232364 3437 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:42:31.232379 kubelet[3437]: W0122 00:42:31.232374 3437 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:42:31.232473 kubelet[3437]: E0122 00:42:31.232381 3437 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:42:31.238320 kubelet[3437]: E0122 00:42:31.238250 3437 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:42:31.238320 kubelet[3437]: W0122 00:42:31.238278 3437 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:42:31.238320 kubelet[3437]: E0122 00:42:31.238302 3437 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:42:31.238580 kubelet[3437]: E0122 00:42:31.238554 3437 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:42:31.238632 kubelet[3437]: W0122 00:42:31.238583 3437 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:42:31.238632 kubelet[3437]: E0122 00:42:31.238598 3437 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:42:31.238824 kubelet[3437]: E0122 00:42:31.238807 3437 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:42:31.238824 kubelet[3437]: W0122 00:42:31.238823 3437 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:42:31.239084 kubelet[3437]: E0122 00:42:31.238837 3437 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 22 00:42:31.239084 kubelet[3437]: E0122 00:42:31.239035 3437 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:42:31.239084 kubelet[3437]: W0122 00:42:31.239046 3437 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:42:31.239084 kubelet[3437]: E0122 00:42:31.239058 3437 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:42:31.239454 kubelet[3437]: E0122 00:42:31.239235 3437 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:42:31.239454 kubelet[3437]: W0122 00:42:31.239245 3437 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:42:31.239454 kubelet[3437]: E0122 00:42:31.239256 3437 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:42:31.239619 kubelet[3437]: E0122 00:42:31.239527 3437 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:42:31.239619 kubelet[3437]: W0122 00:42:31.239538 3437 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:42:31.239619 kubelet[3437]: E0122 00:42:31.239550 3437 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:42:31.239774 kubelet[3437]: E0122 00:42:31.239736 3437 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:42:31.239774 kubelet[3437]: W0122 00:42:31.239746 3437 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:42:31.239774 kubelet[3437]: E0122 00:42:31.239757 3437 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:42:31.240004 kubelet[3437]: E0122 00:42:31.239954 3437 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:42:31.240004 kubelet[3437]: W0122 00:42:31.239967 3437 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:42:31.240004 kubelet[3437]: E0122 00:42:31.239978 3437 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 22 00:42:31.240182 kubelet[3437]: E0122 00:42:31.240160 3437 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:42:31.240182 kubelet[3437]: W0122 00:42:31.240169 3437 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:42:31.240298 kubelet[3437]: E0122 00:42:31.240181 3437 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:42:31.240414 kubelet[3437]: E0122 00:42:31.240363 3437 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:42:31.240414 kubelet[3437]: W0122 00:42:31.240390 3437 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:42:31.240414 kubelet[3437]: E0122 00:42:31.240402 3437 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:42:31.240604 kubelet[3437]: E0122 00:42:31.240590 3437 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:42:31.240604 kubelet[3437]: W0122 00:42:31.240600 3437 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:42:31.240709 kubelet[3437]: E0122 00:42:31.240611 3437 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:42:31.240822 kubelet[3437]: E0122 00:42:31.240801 3437 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:42:31.240936 kubelet[3437]: W0122 00:42:31.240821 3437 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:42:31.240936 kubelet[3437]: E0122 00:42:31.240833 3437 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:42:31.272342 kubelet[3437]: E0122 00:42:31.272306 3437 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:42:31.272342 kubelet[3437]: W0122 00:42:31.272330 3437 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:42:31.272342 kubelet[3437]: E0122 00:42:31.272351 3437 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 22 00:42:31.272759 kubelet[3437]: E0122 00:42:31.272616 3437 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:42:31.272759 kubelet[3437]: W0122 00:42:31.272632 3437 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:42:31.272759 kubelet[3437]: E0122 00:42:31.272656 3437 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:42:31.273129 kubelet[3437]: E0122 00:42:31.273003 3437 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:42:31.273129 kubelet[3437]: W0122 00:42:31.273013 3437 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:42:31.273129 kubelet[3437]: E0122 00:42:31.273032 3437 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:42:31.273327 kubelet[3437]: E0122 00:42:31.273319 3437 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:42:31.273459 kubelet[3437]: W0122 00:42:31.273382 3437 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:42:31.273459 kubelet[3437]: E0122 00:42:31.273396 3437 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:42:31.273680 kubelet[3437]: E0122 00:42:31.273606 3437 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:42:31.273680 kubelet[3437]: W0122 00:42:31.273614 3437 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:42:31.273680 kubelet[3437]: E0122 00:42:31.273622 3437 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:42:31.273882 kubelet[3437]: E0122 00:42:31.273858 3437 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:42:31.274041 kubelet[3437]: W0122 00:42:31.273934 3437 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:42:31.274041 kubelet[3437]: E0122 00:42:31.273948 3437 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 22 00:42:31.274196 kubelet[3437]: E0122 00:42:31.274188 3437 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:42:31.274237 kubelet[3437]: W0122 00:42:31.274230 3437 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:42:31.274461 kubelet[3437]: E0122 00:42:31.274275 3437 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:42:31.275023 kubelet[3437]: E0122 00:42:31.274991 3437 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:42:31.275023 kubelet[3437]: W0122 00:42:31.275022 3437 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:42:31.275206 kubelet[3437]: E0122 00:42:31.275047 3437 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:42:31.275490 kubelet[3437]: E0122 00:42:31.275471 3437 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:42:31.275490 kubelet[3437]: W0122 00:42:31.275488 3437 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:42:31.275617 kubelet[3437]: E0122 00:42:31.275573 3437 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:42:31.275809 kubelet[3437]: E0122 00:42:31.275787 3437 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:42:31.275809 kubelet[3437]: W0122 00:42:31.275805 3437 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:42:31.276016 kubelet[3437]: E0122 00:42:31.275894 3437 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:42:31.276151 kubelet[3437]: E0122 00:42:31.276025 3437 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:42:31.276151 kubelet[3437]: W0122 00:42:31.276035 3437 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:42:31.276151 kubelet[3437]: E0122 00:42:31.276064 3437 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 22 00:42:31.276354 kubelet[3437]: E0122 00:42:31.276225 3437 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:42:31.276354 kubelet[3437]: W0122 00:42:31.276234 3437 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:42:31.276354 kubelet[3437]: E0122 00:42:31.276255 3437 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:42:31.276532 kubelet[3437]: E0122 00:42:31.276491 3437 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:42:31.276532 kubelet[3437]: W0122 00:42:31.276502 3437 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:42:31.276532 kubelet[3437]: E0122 00:42:31.276520 3437 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:42:31.276926 kubelet[3437]: E0122 00:42:31.276905 3437 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:42:31.276926 kubelet[3437]: W0122 00:42:31.276923 3437 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:42:31.277082 kubelet[3437]: E0122 00:42:31.276943 3437 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:42:31.277164 kubelet[3437]: E0122 00:42:31.277136 3437 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:42:31.277214 kubelet[3437]: W0122 00:42:31.277179 3437 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:42:31.277214 kubelet[3437]: E0122 00:42:31.277204 3437 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:42:31.277491 kubelet[3437]: E0122 00:42:31.277472 3437 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:42:31.277491 kubelet[3437]: W0122 00:42:31.277488 3437 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:42:31.277600 kubelet[3437]: E0122 00:42:31.277515 3437 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 22 00:42:31.279121 kubelet[3437]: E0122 00:42:31.278009 3437 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:42:31.279121 kubelet[3437]: W0122 00:42:31.278025 3437 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:42:31.279121 kubelet[3437]: E0122 00:42:31.278040 3437 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:42:31.279121 kubelet[3437]: E0122 00:42:31.278305 3437 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:42:31.279121 kubelet[3437]: W0122 00:42:31.278315 3437 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:42:31.279121 kubelet[3437]: E0122 00:42:31.278327 3437 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:42:31.506367 containerd[1953]: time="2026-01-22T00:42:31.506314423Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 22 00:42:31.508517 containerd[1953]: time="2026-01-22T00:42:31.508386877Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=0" Jan 22 00:42:31.511648 containerd[1953]: time="2026-01-22T00:42:31.510614115Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 22 00:42:31.513857 containerd[1953]: time="2026-01-22T00:42:31.513821785Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 22 00:42:31.514465 containerd[1953]: time="2026-01-22T00:42:31.514439458Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.421523001s" Jan 22 00:42:31.514558 containerd[1953]: time="2026-01-22T00:42:31.514545491Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Jan 22 00:42:31.517248 containerd[1953]: time="2026-01-22T00:42:31.517213364Z" level=info msg="CreateContainer within sandbox \"c63053462aa82ed6e7baab6f09e754d8c97a7b66bb3d773af8c9791216251530\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 22 00:42:31.533950 containerd[1953]: time="2026-01-22T00:42:31.532943959Z" level=info msg="Container 86f717928257233a9854c70d7e91c2934f47122fd04a286ca92d51d8f33497cf: CDI 
devices from CRI Config.CDIDevices: []" Jan 22 00:42:31.539213 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount889973982.mount: Deactivated successfully. Jan 22 00:42:31.559704 containerd[1953]: time="2026-01-22T00:42:31.559639665Z" level=info msg="CreateContainer within sandbox \"c63053462aa82ed6e7baab6f09e754d8c97a7b66bb3d773af8c9791216251530\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"86f717928257233a9854c70d7e91c2934f47122fd04a286ca92d51d8f33497cf\"" Jan 22 00:42:31.560441 containerd[1953]: time="2026-01-22T00:42:31.560393908Z" level=info msg="StartContainer for \"86f717928257233a9854c70d7e91c2934f47122fd04a286ca92d51d8f33497cf\"" Jan 22 00:42:31.562625 containerd[1953]: time="2026-01-22T00:42:31.562587133Z" level=info msg="connecting to shim 86f717928257233a9854c70d7e91c2934f47122fd04a286ca92d51d8f33497cf" address="unix:///run/containerd/s/7554fe30d6ba6c430b9553a0ac83eda6938ca13e6e12347653f73d846ed412ac" protocol=ttrpc version=3 Jan 22 00:42:31.596192 systemd[1]: Started cri-containerd-86f717928257233a9854c70d7e91c2934f47122fd04a286ca92d51d8f33497cf.scope - libcontainer container 86f717928257233a9854c70d7e91c2934f47122fd04a286ca92d51d8f33497cf. Jan 22 00:42:31.673000 audit: BPF prog-id=175 op=LOAD Jan 22 00:42:31.675207 kernel: kauditd_printk_skb: 74 callbacks suppressed Jan 22 00:42:31.675250 kernel: audit: type=1334 audit(1769042551.673:561): prog-id=175 op=LOAD Jan 22 00:42:31.673000 audit[4048]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000178488 a2=98 a3=0 items=0 ppid=3913 pid=4048 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:31.678422 kernel: audit: type=1300 audit(1769042551.673:561): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000178488 a2=98 a3=0 items=0 ppid=3913 pid=4048 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:31.673000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3836663731373932383235373233336139383534633730643765393163 Jan 22 00:42:31.683635 kernel: audit: type=1327 audit(1769042551.673:561): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3836663731373932383235373233336139383534633730643765393163 Jan 22 00:42:31.688897 kernel: audit: type=1334 audit(1769042551.673:562): prog-id=176 op=LOAD Jan 22 00:42:31.673000 audit: BPF prog-id=176 op=LOAD Jan 22 00:42:31.673000 audit[4048]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000178218 a2=98 a3=0 items=0 ppid=3913 pid=4048 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:31.690630 kernel: audit: type=1300 audit(1769042551.673:562): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000178218 a2=98 a3=0 items=0 ppid=3913 pid=4048 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:31.695179 kernel: audit: type=1327 audit(1769042551.673:562): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3836663731373932383235373233336139383534633730643765393163 Jan 22 00:42:31.673000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3836663731373932383235373233336139383534633730643765393163 Jan 22 00:42:31.701450 kernel: audit: type=1334 audit(1769042551.673:563): prog-id=176 op=UNLOAD Jan 22 00:42:31.673000 audit: BPF prog-id=176 op=UNLOAD Jan 22 00:42:31.703917 kernel: audit: type=1300 audit(1769042551.673:563): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3913 pid=4048 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:31.673000 audit[4048]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3913 pid=4048 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:31.712074 kernel: audit: type=1327 audit(1769042551.673:563): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3836663731373932383235373233336139383534633730643765393163 Jan 22 00:42:31.673000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3836663731373932383235373233336139383534633730643765393163 Jan 22 00:42:31.673000 audit: BPF prog-id=175 op=UNLOAD Jan 22 00:42:31.713918 kernel: audit: type=1334 audit(1769042551.673:564): prog-id=175 op=UNLOAD Jan 22 00:42:31.673000 audit[4048]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3913 pid=4048 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:31.673000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3836663731373932383235373233336139383534633730643765393163 Jan 22 00:42:31.673000 audit: BPF prog-id=177 op=LOAD Jan 22 00:42:31.673000 audit[4048]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001786e8 a2=98 a3=0 items=0 ppid=3913 pid=4048 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:31.673000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3836663731373932383235373233336139383534633730643765393163 Jan 22 00:42:31.727362 containerd[1953]: time="2026-01-22T00:42:31.727291125Z" level=info msg="StartContainer for \"86f717928257233a9854c70d7e91c2934f47122fd04a286ca92d51d8f33497cf\" returns successfully" Jan 22 00:42:31.733611 systemd[1]: cri-containerd-86f717928257233a9854c70d7e91c2934f47122fd04a286ca92d51d8f33497cf.scope: Deactivated successfully. Jan 22 00:42:31.736000 audit: BPF prog-id=177 op=UNLOAD Jan 22 00:42:31.765067 containerd[1953]: time="2026-01-22T00:42:31.765001258Z" level=info msg="received container exit event container_id:\"86f717928257233a9854c70d7e91c2934f47122fd04a286ca92d51d8f33497cf\" id:\"86f717928257233a9854c70d7e91c2934f47122fd04a286ca92d51d8f33497cf\" pid:4060 exited_at:{seconds:1769042551 nanos:736759153}" Jan 22 00:42:31.796466 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-86f717928257233a9854c70d7e91c2934f47122fd04a286ca92d51d8f33497cf-rootfs.mount: Deactivated successfully. Jan 22 00:42:32.191550 kubelet[3437]: I0122 00:42:32.191137 3437 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 22 00:42:32.192576 containerd[1953]: time="2026-01-22T00:42:32.192358288Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Jan 22 00:42:33.027139 kubelet[3437]: E0122 00:42:33.027088 3437 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qvwlf" podUID="9a122a32-e7c8-4162-bccb-4b71d5c37d97" Jan 22 00:42:35.029743 kubelet[3437]: E0122 00:42:35.029678 3437 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qvwlf" podUID="9a122a32-e7c8-4162-bccb-4b71d5c37d97" Jan 22 00:42:37.027231 kubelet[3437]: E0122 00:42:37.027178 3437 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qvwlf" podUID="9a122a32-e7c8-4162-bccb-4b71d5c37d97" Jan 22 00:42:38.336147 containerd[1953]: time="2026-01-22T00:42:38.336000758Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 22 00:42:38.340429 containerd[1953]: time="2026-01-22T00:42:38.340391032Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70442291" Jan 22 00:42:38.344907 containerd[1953]: time="2026-01-22T00:42:38.344429315Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 22 00:42:38.351812 containerd[1953]: time="2026-01-22T00:42:38.351771495Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 22 00:42:38.352275 containerd[1953]: time="2026-01-22T00:42:38.352243705Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 6.159846547s" Jan 22 00:42:38.352346 containerd[1953]: time="2026-01-22T00:42:38.352278688Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Jan 22 00:42:38.396346 containerd[1953]: time="2026-01-22T00:42:38.396312976Z" level=info msg="CreateContainer within sandbox \"c63053462aa82ed6e7baab6f09e754d8c97a7b66bb3d773af8c9791216251530\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 22 00:42:38.423161 containerd[1953]: time="2026-01-22T00:42:38.423111215Z" level=info msg="Container d1bf2add574910784e650f9ecd1e24af01df1fa8b3c50830eb6ad6d024df3229: CDI devices from CRI Config.CDIDevices: []" Jan 22 00:42:38.453098 containerd[1953]: time="2026-01-22T00:42:38.452923375Z" level=info msg="CreateContainer within sandbox \"c63053462aa82ed6e7baab6f09e754d8c97a7b66bb3d773af8c9791216251530\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"d1bf2add574910784e650f9ecd1e24af01df1fa8b3c50830eb6ad6d024df3229\"" Jan 22 00:42:38.456498 containerd[1953]: time="2026-01-22T00:42:38.456421091Z" level=info msg="StartContainer for \"d1bf2add574910784e650f9ecd1e24af01df1fa8b3c50830eb6ad6d024df3229\"" Jan 22 00:42:38.461634 containerd[1953]: time="2026-01-22T00:42:38.461584040Z" level=info msg="connecting to shim d1bf2add574910784e650f9ecd1e24af01df1fa8b3c50830eb6ad6d024df3229" address="unix:///run/containerd/s/7554fe30d6ba6c430b9553a0ac83eda6938ca13e6e12347653f73d846ed412ac" protocol=ttrpc version=3 Jan 22 00:42:38.488461 systemd[1]: Started cri-containerd-d1bf2add574910784e650f9ecd1e24af01df1fa8b3c50830eb6ad6d024df3229.scope - libcontainer container d1bf2add574910784e650f9ecd1e24af01df1fa8b3c50830eb6ad6d024df3229. 
Jan 22 00:42:38.559463 kernel: kauditd_printk_skb: 6 callbacks suppressed Jan 22 00:42:38.560953 kernel: audit: type=1334 audit(1769042558.556:567): prog-id=178 op=LOAD Jan 22 00:42:38.561012 kernel: audit: type=1300 audit(1769042558.556:567): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3913 pid=4106 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:38.556000 audit: BPF prog-id=178 op=LOAD Jan 22 00:42:38.556000 audit[4106]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3913 pid=4106 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:38.556000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6431626632616464353734393130373834653635306639656364316532 Jan 22 00:42:38.577182 kernel: audit: type=1327 audit(1769042558.556:567): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6431626632616464353734393130373834653635306639656364316532 Jan 22 00:42:38.577444 kernel: audit: type=1334 audit(1769042558.556:568): prog-id=179 op=LOAD Jan 22 00:42:38.556000 audit: BPF prog-id=179 op=LOAD Jan 22 00:42:38.584101 kernel: audit: type=1300 audit(1769042558.556:568): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=3913 pid=4106 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:38.556000 audit[4106]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=3913 pid=4106 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:38.590327 kernel: audit: type=1327 audit(1769042558.556:568): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6431626632616464353734393130373834653635306639656364316532 Jan 22 00:42:38.556000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6431626632616464353734393130373834653635306639656364316532 Jan 22 00:42:38.556000 audit: BPF prog-id=179 op=UNLOAD Jan 22 00:42:38.595055 kernel: audit: type=1334 audit(1769042558.556:569): prog-id=179 op=UNLOAD Jan 22 00:42:38.556000 audit[4106]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3913 pid=4106 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:38.601911 kernel: audit: type=1300 
audit(1769042558.556:569): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3913 pid=4106 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:38.556000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6431626632616464353734393130373834653635306639656364316532 Jan 22 00:42:38.608886 kernel: audit: type=1327 audit(1769042558.556:569): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6431626632616464353734393130373834653635306639656364316532 Jan 22 00:42:38.556000 audit: BPF prog-id=178 op=UNLOAD Jan 22 00:42:38.611900 kernel: audit: type=1334 audit(1769042558.556:570): prog-id=178 op=UNLOAD Jan 22 00:42:38.556000 audit[4106]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3913 pid=4106 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:38.556000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6431626632616464353734393130373834653635306639656364316532 Jan 22 00:42:38.556000 audit: BPF prog-id=180 op=LOAD Jan 22 00:42:38.556000 audit[4106]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=3913 pid=4106 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:38.556000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6431626632616464353734393130373834653635306639656364316532 Jan 22 00:42:38.643302 containerd[1953]: time="2026-01-22T00:42:38.643156049Z" level=info msg="StartContainer for \"d1bf2add574910784e650f9ecd1e24af01df1fa8b3c50830eb6ad6d024df3229\" returns successfully" Jan 22 00:42:39.028029 kubelet[3437]: E0122 00:42:39.027635 3437 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qvwlf" podUID="9a122a32-e7c8-4162-bccb-4b71d5c37d97" Jan 22 00:42:39.397436 systemd[1]: cri-containerd-d1bf2add574910784e650f9ecd1e24af01df1fa8b3c50830eb6ad6d024df3229.scope: Deactivated successfully. Jan 22 00:42:39.397938 systemd[1]: cri-containerd-d1bf2add574910784e650f9ecd1e24af01df1fa8b3c50830eb6ad6d024df3229.scope: Consumed 597ms CPU time, 162M memory peak, 3.5M read from disk, 171.3M written to disk. 
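The audit records interleaved with the container starts encode the runc invocation as a hex PROCTITLE value, which is simply the process argv with NUL bytes between arguments. A small decoding sketch follows; the constant is a shortened prefix of the proctitle strings shown above, kept only as sample input:

```go
// proctitle_decode.go
//
// Decode an audit PROCTITLE value (hex-encoded argv with NUL separators)
// like the ones logged for runc above. Sketch for illustration only.
package main

import (
	"encoding/hex"
	"fmt"
	"log"
	"strings"
)

func decodeProctitle(h string) ([]string, error) {
	raw, err := hex.DecodeString(h)
	if err != nil {
		return nil, err
	}
	return strings.Split(string(raw), "\x00"), nil
}

func main() {
	// Prefix of the proctitle in the log: "runc --root /run/containerd/runc/k8s.io ..."
	const sample = "72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F"
	args, err := decodeProctitle(sample)
	if err != nil {
		log.Fatal(err)
	}
	for i, a := range args {
		fmt.Printf("argv[%d] = %q\n", i, a)
	}
}
```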
Jan 22 00:42:39.400000 audit: BPF prog-id=180 op=UNLOAD Jan 22 00:42:39.439109 containerd[1953]: time="2026-01-22T00:42:39.439013230Z" level=info msg="received container exit event container_id:\"d1bf2add574910784e650f9ecd1e24af01df1fa8b3c50830eb6ad6d024df3229\" id:\"d1bf2add574910784e650f9ecd1e24af01df1fa8b3c50830eb6ad6d024df3229\" pid:4119 exited_at:{seconds:1769042559 nanos:423695756}" Jan 22 00:42:39.488704 kubelet[3437]: I0122 00:42:39.486742 3437 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 22 00:42:39.519096 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d1bf2add574910784e650f9ecd1e24af01df1fa8b3c50830eb6ad6d024df3229-rootfs.mount: Deactivated successfully. Jan 22 00:42:39.547006 systemd[1]: Created slice kubepods-burstable-pod10b09332_a889_486e_9821_a11a8220ea2e.slice - libcontainer container kubepods-burstable-pod10b09332_a889_486e_9821_a11a8220ea2e.slice. Jan 22 00:42:39.575368 systemd[1]: Created slice kubepods-besteffort-podae564662_102b_4169_9db4_4f39f121460c.slice - libcontainer container kubepods-besteffort-podae564662_102b_4169_9db4_4f39f121460c.slice. Jan 22 00:42:39.592059 systemd[1]: Created slice kubepods-besteffort-pod3c1a4692_58d4_49e9_ab29_e19d2d7ee1e4.slice - libcontainer container kubepods-besteffort-pod3c1a4692_58d4_49e9_ab29_e19d2d7ee1e4.slice. Jan 22 00:42:39.601958 systemd[1]: Created slice kubepods-besteffort-pod0db24862_6144_45b1_8f39_39a11a3c80dd.slice - libcontainer container kubepods-besteffort-pod0db24862_6144_45b1_8f39_39a11a3c80dd.slice. Jan 22 00:42:39.611533 systemd[1]: Created slice kubepods-besteffort-podffc8de62_f696_4cfe_ab23_12f34741b8d0.slice - libcontainer container kubepods-besteffort-podffc8de62_f696_4cfe_ab23_12f34741b8d0.slice. Jan 22 00:42:39.617362 systemd[1]: Created slice kubepods-burstable-pod0036b825_99df_4406_a66f_e7d1814859a1.slice - libcontainer container kubepods-burstable-pod0036b825_99df_4406_a66f_e7d1814859a1.slice. Jan 22 00:42:39.625580 systemd[1]: Created slice kubepods-besteffort-pod4921dab1_a0fc_4a4a_9ec2_f3f09160935a.slice - libcontainer container kubepods-besteffort-pod4921dab1_a0fc_4a4a_9ec2_f3f09160935a.slice. 
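The kubepods slices created above follow the naming pattern visible in this log: the pod's QoS class plus its UID with dashes replaced by underscores (compare the slice names here with the pod UIDs in the volume-attach entries that follow). The sketch below reproduces that pattern as observed; it is an illustration derived from these entries, not kubelet's actual cgroup-manager code, and the guaranteed-QoS case is an assumption about pods not shown in this log:

```go
// kubepods_slice_name.go
//
// Reproduce the systemd slice naming seen in the log:
//   kubepods-<qos>-pod<uid with '-' -> '_'>.slice
// Illustrative only; not kubelet's cgroup manager.
package main

import (
	"fmt"
	"strings"
)

func podSliceName(qos, uid string) string {
	escaped := strings.ReplaceAll(uid, "-", "_")
	if qos == "" || qos == "guaranteed" {
		// Assumption: guaranteed pods sit directly under kubepods.slice.
		return fmt.Sprintf("kubepods-pod%s.slice", escaped)
	}
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, escaped)
}

func main() {
	// UIDs taken from the kubelet volume entries later in the log.
	fmt.Println(podSliceName("burstable", "10b09332-a889-486e-9821-a11a8220ea2e"))
	fmt.Println(podSliceName("besteffort", "ae564662-102b-4169-9db4-4f39f121460c"))
}
```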
Jan 22 00:42:39.652471 kubelet[3437]: I0122 00:42:39.652278 3437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8fcw7\" (UniqueName: \"kubernetes.io/projected/10b09332-a889-486e-9821-a11a8220ea2e-kube-api-access-8fcw7\") pod \"coredns-668d6bf9bc-8sggz\" (UID: \"10b09332-a889-486e-9821-a11a8220ea2e\") " pod="kube-system/coredns-668d6bf9bc-8sggz" Jan 22 00:42:39.652471 kubelet[3437]: I0122 00:42:39.652338 3437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3c1a4692-58d4-49e9-ab29-e19d2d7ee1e4-tigera-ca-bundle\") pod \"calico-kube-controllers-5bd666cb6-pggl9\" (UID: \"3c1a4692-58d4-49e9-ab29-e19d2d7ee1e4\") " pod="calico-system/calico-kube-controllers-5bd666cb6-pggl9" Jan 22 00:42:39.652471 kubelet[3437]: I0122 00:42:39.652370 3437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ksv99\" (UniqueName: \"kubernetes.io/projected/0db24862-6144-45b1-8f39-39a11a3c80dd-kube-api-access-ksv99\") pod \"calico-apiserver-6bf9469c9d-75c6s\" (UID: \"0db24862-6144-45b1-8f39-39a11a3c80dd\") " pod="calico-apiserver/calico-apiserver-6bf9469c9d-75c6s" Jan 22 00:42:39.652471 kubelet[3437]: I0122 00:42:39.652387 3437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ae564662-102b-4169-9db4-4f39f121460c-whisker-ca-bundle\") pod \"whisker-6c8dfb8df9-dfxfx\" (UID: \"ae564662-102b-4169-9db4-4f39f121460c\") " pod="calico-system/whisker-6c8dfb8df9-dfxfx" Jan 22 00:42:39.653047 kubelet[3437]: I0122 00:42:39.652904 3437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5pzc\" (UniqueName: \"kubernetes.io/projected/4921dab1-a0fc-4a4a-9ec2-f3f09160935a-kube-api-access-c5pzc\") pod \"calico-apiserver-6bf9469c9d-jt79q\" (UID: \"4921dab1-a0fc-4a4a-9ec2-f3f09160935a\") " pod="calico-apiserver/calico-apiserver-6bf9469c9d-jt79q" Jan 22 00:42:39.653047 kubelet[3437]: I0122 00:42:39.652964 3437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0036b825-99df-4406-a66f-e7d1814859a1-config-volume\") pod \"coredns-668d6bf9bc-6nhlr\" (UID: \"0036b825-99df-4406-a66f-e7d1814859a1\") " pod="kube-system/coredns-668d6bf9bc-6nhlr" Jan 22 00:42:39.653047 kubelet[3437]: I0122 00:42:39.652995 3437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ffc8de62-f696-4cfe-ab23-12f34741b8d0-goldmane-ca-bundle\") pod \"goldmane-666569f655-4rjzm\" (UID: \"ffc8de62-f696-4cfe-ab23-12f34741b8d0\") " pod="calico-system/goldmane-666569f655-4rjzm" Jan 22 00:42:39.653047 kubelet[3437]: I0122 00:42:39.653015 3437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwx64\" (UniqueName: \"kubernetes.io/projected/3c1a4692-58d4-49e9-ab29-e19d2d7ee1e4-kube-api-access-hwx64\") pod \"calico-kube-controllers-5bd666cb6-pggl9\" (UID: \"3c1a4692-58d4-49e9-ab29-e19d2d7ee1e4\") " pod="calico-system/calico-kube-controllers-5bd666cb6-pggl9" Jan 22 00:42:39.653210 kubelet[3437]: I0122 00:42:39.653047 3437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/4921dab1-a0fc-4a4a-9ec2-f3f09160935a-calico-apiserver-certs\") pod \"calico-apiserver-6bf9469c9d-jt79q\" (UID: \"4921dab1-a0fc-4a4a-9ec2-f3f09160935a\") " pod="calico-apiserver/calico-apiserver-6bf9469c9d-jt79q" Jan 22 00:42:39.653210 kubelet[3437]: I0122 00:42:39.653095 3437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/ae564662-102b-4169-9db4-4f39f121460c-whisker-backend-key-pair\") pod \"whisker-6c8dfb8df9-dfxfx\" (UID: \"ae564662-102b-4169-9db4-4f39f121460c\") " pod="calico-system/whisker-6c8dfb8df9-dfxfx" Jan 22 00:42:39.653210 kubelet[3437]: I0122 00:42:39.653116 3437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-86j2g\" (UniqueName: \"kubernetes.io/projected/0036b825-99df-4406-a66f-e7d1814859a1-kube-api-access-86j2g\") pod \"coredns-668d6bf9bc-6nhlr\" (UID: \"0036b825-99df-4406-a66f-e7d1814859a1\") " pod="kube-system/coredns-668d6bf9bc-6nhlr" Jan 22 00:42:39.653296 kubelet[3437]: I0122 00:42:39.653278 3437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/0db24862-6144-45b1-8f39-39a11a3c80dd-calico-apiserver-certs\") pod \"calico-apiserver-6bf9469c9d-75c6s\" (UID: \"0db24862-6144-45b1-8f39-39a11a3c80dd\") " pod="calico-apiserver/calico-apiserver-6bf9469c9d-75c6s" Jan 22 00:42:39.653352 kubelet[3437]: I0122 00:42:39.653332 3437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vcttl\" (UniqueName: \"kubernetes.io/projected/ae564662-102b-4169-9db4-4f39f121460c-kube-api-access-vcttl\") pod \"whisker-6c8dfb8df9-dfxfx\" (UID: \"ae564662-102b-4169-9db4-4f39f121460c\") " pod="calico-system/whisker-6c8dfb8df9-dfxfx" Jan 22 00:42:39.653396 kubelet[3437]: I0122 00:42:39.653379 3437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ffc8de62-f696-4cfe-ab23-12f34741b8d0-config\") pod \"goldmane-666569f655-4rjzm\" (UID: \"ffc8de62-f696-4cfe-ab23-12f34741b8d0\") " pod="calico-system/goldmane-666569f655-4rjzm" Jan 22 00:42:39.653428 kubelet[3437]: I0122 00:42:39.653416 3437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/ffc8de62-f696-4cfe-ab23-12f34741b8d0-goldmane-key-pair\") pod \"goldmane-666569f655-4rjzm\" (UID: \"ffc8de62-f696-4cfe-ab23-12f34741b8d0\") " pod="calico-system/goldmane-666569f655-4rjzm" Jan 22 00:42:39.653455 kubelet[3437]: I0122 00:42:39.653431 3437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frzdg\" (UniqueName: \"kubernetes.io/projected/ffc8de62-f696-4cfe-ab23-12f34741b8d0-kube-api-access-frzdg\") pod \"goldmane-666569f655-4rjzm\" (UID: \"ffc8de62-f696-4cfe-ab23-12f34741b8d0\") " pod="calico-system/goldmane-666569f655-4rjzm" Jan 22 00:42:39.653455 kubelet[3437]: I0122 00:42:39.653449 3437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/10b09332-a889-486e-9821-a11a8220ea2e-config-volume\") pod \"coredns-668d6bf9bc-8sggz\" (UID: \"10b09332-a889-486e-9821-a11a8220ea2e\") " 
pod="kube-system/coredns-668d6bf9bc-8sggz" Jan 22 00:42:39.882611 containerd[1953]: time="2026-01-22T00:42:39.882553407Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-8sggz,Uid:10b09332-a889-486e-9821-a11a8220ea2e,Namespace:kube-system,Attempt:0,}" Jan 22 00:42:39.893065 containerd[1953]: time="2026-01-22T00:42:39.893014326Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6c8dfb8df9-dfxfx,Uid:ae564662-102b-4169-9db4-4f39f121460c,Namespace:calico-system,Attempt:0,}" Jan 22 00:42:39.914534 containerd[1953]: time="2026-01-22T00:42:39.914000625Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6bf9469c9d-75c6s,Uid:0db24862-6144-45b1-8f39-39a11a3c80dd,Namespace:calico-apiserver,Attempt:0,}" Jan 22 00:42:39.915538 containerd[1953]: time="2026-01-22T00:42:39.915053750Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5bd666cb6-pggl9,Uid:3c1a4692-58d4-49e9-ab29-e19d2d7ee1e4,Namespace:calico-system,Attempt:0,}" Jan 22 00:42:39.917290 containerd[1953]: time="2026-01-22T00:42:39.917249357Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-4rjzm,Uid:ffc8de62-f696-4cfe-ab23-12f34741b8d0,Namespace:calico-system,Attempt:0,}" Jan 22 00:42:39.939700 containerd[1953]: time="2026-01-22T00:42:39.938693320Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6bf9469c9d-jt79q,Uid:4921dab1-a0fc-4a4a-9ec2-f3f09160935a,Namespace:calico-apiserver,Attempt:0,}" Jan 22 00:42:39.939700 containerd[1953]: time="2026-01-22T00:42:39.938812258Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-6nhlr,Uid:0036b825-99df-4406-a66f-e7d1814859a1,Namespace:kube-system,Attempt:0,}" Jan 22 00:42:40.266975 containerd[1953]: time="2026-01-22T00:42:40.266088945Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 22 00:42:40.348472 containerd[1953]: time="2026-01-22T00:42:40.348421326Z" level=error msg="Failed to destroy network for sandbox \"a56facf29b6487527e634c21436976462ebc7ace3b545a36b147ea37ea844940\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 22 00:42:40.355974 containerd[1953]: time="2026-01-22T00:42:40.355927358Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5bd666cb6-pggl9,Uid:3c1a4692-58d4-49e9-ab29-e19d2d7ee1e4,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a56facf29b6487527e634c21436976462ebc7ace3b545a36b147ea37ea844940\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 22 00:42:40.356828 containerd[1953]: time="2026-01-22T00:42:40.356794920Z" level=error msg="Failed to destroy network for sandbox \"3e59dfedccacb473cf590cfd1caeb27f76a87e44f92c2e3fab8ebfe5f3e5e3ce\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 22 00:42:40.357618 kubelet[3437]: E0122 00:42:40.357578 3437 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a56facf29b6487527e634c21436976462ebc7ace3b545a36b147ea37ea844940\": 
plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 22 00:42:40.358093 kubelet[3437]: E0122 00:42:40.358025 3437 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a56facf29b6487527e634c21436976462ebc7ace3b545a36b147ea37ea844940\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5bd666cb6-pggl9" Jan 22 00:42:40.358093 kubelet[3437]: E0122 00:42:40.358066 3437 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a56facf29b6487527e634c21436976462ebc7ace3b545a36b147ea37ea844940\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5bd666cb6-pggl9" Jan 22 00:42:40.361158 kubelet[3437]: E0122 00:42:40.359434 3437 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5bd666cb6-pggl9_calico-system(3c1a4692-58d4-49e9-ab29-e19d2d7ee1e4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5bd666cb6-pggl9_calico-system(3c1a4692-58d4-49e9-ab29-e19d2d7ee1e4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a56facf29b6487527e634c21436976462ebc7ace3b545a36b147ea37ea844940\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5bd666cb6-pggl9" podUID="3c1a4692-58d4-49e9-ab29-e19d2d7ee1e4" Jan 22 00:42:40.361290 containerd[1953]: time="2026-01-22T00:42:40.361203038Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6bf9469c9d-jt79q,Uid:4921dab1-a0fc-4a4a-9ec2-f3f09160935a,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3e59dfedccacb473cf590cfd1caeb27f76a87e44f92c2e3fab8ebfe5f3e5e3ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 22 00:42:40.364776 kubelet[3437]: E0122 00:42:40.364746 3437 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3e59dfedccacb473cf590cfd1caeb27f76a87e44f92c2e3fab8ebfe5f3e5e3ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 22 00:42:40.365467 kubelet[3437]: E0122 00:42:40.365437 3437 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3e59dfedccacb473cf590cfd1caeb27f76a87e44f92c2e3fab8ebfe5f3e5e3ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-6bf9469c9d-jt79q" Jan 22 00:42:40.365591 kubelet[3437]: E0122 00:42:40.365577 3437 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3e59dfedccacb473cf590cfd1caeb27f76a87e44f92c2e3fab8ebfe5f3e5e3ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6bf9469c9d-jt79q" Jan 22 00:42:40.365729 kubelet[3437]: E0122 00:42:40.365705 3437 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6bf9469c9d-jt79q_calico-apiserver(4921dab1-a0fc-4a4a-9ec2-f3f09160935a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6bf9469c9d-jt79q_calico-apiserver(4921dab1-a0fc-4a4a-9ec2-f3f09160935a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3e59dfedccacb473cf590cfd1caeb27f76a87e44f92c2e3fab8ebfe5f3e5e3ce\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6bf9469c9d-jt79q" podUID="4921dab1-a0fc-4a4a-9ec2-f3f09160935a" Jan 22 00:42:40.368960 containerd[1953]: time="2026-01-22T00:42:40.368900344Z" level=error msg="Failed to destroy network for sandbox \"67e851127cb845da304c07a10a849f5ed3a98cd0981de5df0733d733046bbc4e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 22 00:42:40.378591 containerd[1953]: time="2026-01-22T00:42:40.378348929Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-6nhlr,Uid:0036b825-99df-4406-a66f-e7d1814859a1,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"67e851127cb845da304c07a10a849f5ed3a98cd0981de5df0733d733046bbc4e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 22 00:42:40.378903 kubelet[3437]: E0122 00:42:40.378820 3437 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"67e851127cb845da304c07a10a849f5ed3a98cd0981de5df0733d733046bbc4e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 22 00:42:40.379985 kubelet[3437]: E0122 00:42:40.379941 3437 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"67e851127cb845da304c07a10a849f5ed3a98cd0981de5df0733d733046bbc4e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-6nhlr" Jan 22 00:42:40.380103 kubelet[3437]: E0122 00:42:40.379984 3437 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"67e851127cb845da304c07a10a849f5ed3a98cd0981de5df0733d733046bbc4e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-6nhlr" Jan 22 00:42:40.380103 kubelet[3437]: E0122 00:42:40.380045 3437 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-6nhlr_kube-system(0036b825-99df-4406-a66f-e7d1814859a1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-6nhlr_kube-system(0036b825-99df-4406-a66f-e7d1814859a1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"67e851127cb845da304c07a10a849f5ed3a98cd0981de5df0733d733046bbc4e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-6nhlr" podUID="0036b825-99df-4406-a66f-e7d1814859a1" Jan 22 00:42:40.393497 containerd[1953]: time="2026-01-22T00:42:40.393430435Z" level=error msg="Failed to destroy network for sandbox \"f9e9751df81ff7b40f9fa6069960b148d66dca5ff35024f6b1429dd5dad0d54f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 22 00:42:40.400541 containerd[1953]: time="2026-01-22T00:42:40.400455151Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-8sggz,Uid:10b09332-a889-486e-9821-a11a8220ea2e,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f9e9751df81ff7b40f9fa6069960b148d66dca5ff35024f6b1429dd5dad0d54f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 22 00:42:40.401178 kubelet[3437]: E0122 00:42:40.401138 3437 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f9e9751df81ff7b40f9fa6069960b148d66dca5ff35024f6b1429dd5dad0d54f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 22 00:42:40.401315 kubelet[3437]: E0122 00:42:40.401208 3437 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f9e9751df81ff7b40f9fa6069960b148d66dca5ff35024f6b1429dd5dad0d54f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-8sggz" Jan 22 00:42:40.401315 kubelet[3437]: E0122 00:42:40.401237 3437 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f9e9751df81ff7b40f9fa6069960b148d66dca5ff35024f6b1429dd5dad0d54f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-8sggz" Jan 22 00:42:40.401767 kubelet[3437]: E0122 00:42:40.401303 3437 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-8sggz_kube-system(10b09332-a889-486e-9821-a11a8220ea2e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-8sggz_kube-system(10b09332-a889-486e-9821-a11a8220ea2e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f9e9751df81ff7b40f9fa6069960b148d66dca5ff35024f6b1429dd5dad0d54f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-8sggz" podUID="10b09332-a889-486e-9821-a11a8220ea2e" Jan 22 00:42:40.404720 containerd[1953]: time="2026-01-22T00:42:40.404663391Z" level=error msg="Failed to destroy network for sandbox \"a3f792c31ae5431258dd541703add41b24fe4ad109ecce0207d1fd8da873fbda\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 22 00:42:40.408640 containerd[1953]: time="2026-01-22T00:42:40.408579293Z" level=error msg="Failed to destroy network for sandbox \"3e4bf5ebea8eb80f1c261a86e917551e20f0999c63b34fc9656f368286499d8f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 22 00:42:40.409685 containerd[1953]: time="2026-01-22T00:42:40.409594598Z" level=error msg="Failed to destroy network for sandbox \"474c7840a2794b0941cfe579c1e5b609d574586dbeec316a0e9873868c4a3326\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 22 00:42:40.411505 containerd[1953]: time="2026-01-22T00:42:40.411443458Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6c8dfb8df9-dfxfx,Uid:ae564662-102b-4169-9db4-4f39f121460c,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a3f792c31ae5431258dd541703add41b24fe4ad109ecce0207d1fd8da873fbda\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 22 00:42:40.411801 kubelet[3437]: E0122 00:42:40.411659 3437 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a3f792c31ae5431258dd541703add41b24fe4ad109ecce0207d1fd8da873fbda\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 22 00:42:40.411801 kubelet[3437]: E0122 00:42:40.411714 3437 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a3f792c31ae5431258dd541703add41b24fe4ad109ecce0207d1fd8da873fbda\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6c8dfb8df9-dfxfx" Jan 22 00:42:40.411801 kubelet[3437]: E0122 00:42:40.411732 3437 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc 
error: code = Unknown desc = failed to setup network for sandbox \"a3f792c31ae5431258dd541703add41b24fe4ad109ecce0207d1fd8da873fbda\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6c8dfb8df9-dfxfx" Jan 22 00:42:40.412155 kubelet[3437]: E0122 00:42:40.411768 3437 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6c8dfb8df9-dfxfx_calico-system(ae564662-102b-4169-9db4-4f39f121460c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6c8dfb8df9-dfxfx_calico-system(ae564662-102b-4169-9db4-4f39f121460c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a3f792c31ae5431258dd541703add41b24fe4ad109ecce0207d1fd8da873fbda\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6c8dfb8df9-dfxfx" podUID="ae564662-102b-4169-9db4-4f39f121460c" Jan 22 00:42:40.418189 containerd[1953]: time="2026-01-22T00:42:40.418069934Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-4rjzm,Uid:ffc8de62-f696-4cfe-ab23-12f34741b8d0,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"474c7840a2794b0941cfe579c1e5b609d574586dbeec316a0e9873868c4a3326\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 22 00:42:40.418377 kubelet[3437]: E0122 00:42:40.418335 3437 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"474c7840a2794b0941cfe579c1e5b609d574586dbeec316a0e9873868c4a3326\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 22 00:42:40.418430 kubelet[3437]: E0122 00:42:40.418395 3437 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"474c7840a2794b0941cfe579c1e5b609d574586dbeec316a0e9873868c4a3326\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-4rjzm" Jan 22 00:42:40.418430 kubelet[3437]: E0122 00:42:40.418414 3437 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"474c7840a2794b0941cfe579c1e5b609d574586dbeec316a0e9873868c4a3326\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-4rjzm" Jan 22 00:42:40.418533 kubelet[3437]: E0122 00:42:40.418468 3437 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-4rjzm_calico-system(ffc8de62-f696-4cfe-ab23-12f34741b8d0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"goldmane-666569f655-4rjzm_calico-system(ffc8de62-f696-4cfe-ab23-12f34741b8d0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"474c7840a2794b0941cfe579c1e5b609d574586dbeec316a0e9873868c4a3326\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-4rjzm" podUID="ffc8de62-f696-4cfe-ab23-12f34741b8d0" Jan 22 00:42:40.420110 containerd[1953]: time="2026-01-22T00:42:40.420064866Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6bf9469c9d-75c6s,Uid:0db24862-6144-45b1-8f39-39a11a3c80dd,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3e4bf5ebea8eb80f1c261a86e917551e20f0999c63b34fc9656f368286499d8f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 22 00:42:40.420333 kubelet[3437]: E0122 00:42:40.420299 3437 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3e4bf5ebea8eb80f1c261a86e917551e20f0999c63b34fc9656f368286499d8f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 22 00:42:40.420383 kubelet[3437]: E0122 00:42:40.420353 3437 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3e4bf5ebea8eb80f1c261a86e917551e20f0999c63b34fc9656f368286499d8f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6bf9469c9d-75c6s" Jan 22 00:42:40.420494 kubelet[3437]: E0122 00:42:40.420377 3437 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3e4bf5ebea8eb80f1c261a86e917551e20f0999c63b34fc9656f368286499d8f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6bf9469c9d-75c6s" Jan 22 00:42:40.420562 kubelet[3437]: E0122 00:42:40.420516 3437 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6bf9469c9d-75c6s_calico-apiserver(0db24862-6144-45b1-8f39-39a11a3c80dd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6bf9469c9d-75c6s_calico-apiserver(0db24862-6144-45b1-8f39-39a11a3c80dd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3e4bf5ebea8eb80f1c261a86e917551e20f0999c63b34fc9656f368286499d8f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6bf9469c9d-75c6s" podUID="0db24862-6144-45b1-8f39-39a11a3c80dd" Jan 22 00:42:41.034165 systemd[1]: Created slice kubepods-besteffort-pod9a122a32_e7c8_4162_bccb_4b71d5c37d97.slice - libcontainer container 
kubepods-besteffort-pod9a122a32_e7c8_4162_bccb_4b71d5c37d97.slice. Jan 22 00:42:41.037983 containerd[1953]: time="2026-01-22T00:42:41.037940783Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qvwlf,Uid:9a122a32-e7c8-4162-bccb-4b71d5c37d97,Namespace:calico-system,Attempt:0,}" Jan 22 00:42:41.108351 containerd[1953]: time="2026-01-22T00:42:41.108298686Z" level=error msg="Failed to destroy network for sandbox \"39e967c4eba002bd637ea575f162ca3ac4ee78a9d991ccc9ef9eaa96006074e3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 22 00:42:41.112247 systemd[1]: run-netns-cni\x2dadcb3ab1\x2dc264\x2de3a7\x2d37ae\x2d477394032f6a.mount: Deactivated successfully. Jan 22 00:42:41.115059 containerd[1953]: time="2026-01-22T00:42:41.115009673Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qvwlf,Uid:9a122a32-e7c8-4162-bccb-4b71d5c37d97,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"39e967c4eba002bd637ea575f162ca3ac4ee78a9d991ccc9ef9eaa96006074e3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 22 00:42:41.115667 kubelet[3437]: E0122 00:42:41.115622 3437 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"39e967c4eba002bd637ea575f162ca3ac4ee78a9d991ccc9ef9eaa96006074e3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 22 00:42:41.115760 kubelet[3437]: E0122 00:42:41.115685 3437 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"39e967c4eba002bd637ea575f162ca3ac4ee78a9d991ccc9ef9eaa96006074e3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-qvwlf" Jan 22 00:42:41.115760 kubelet[3437]: E0122 00:42:41.115712 3437 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"39e967c4eba002bd637ea575f162ca3ac4ee78a9d991ccc9ef9eaa96006074e3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-qvwlf" Jan 22 00:42:41.115853 kubelet[3437]: E0122 00:42:41.115765 3437 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-qvwlf_calico-system(9a122a32-e7c8-4162-bccb-4b71d5c37d97)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-qvwlf_calico-system(9a122a32-e7c8-4162-bccb-4b71d5c37d97)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"39e967c4eba002bd637ea575f162ca3ac4ee78a9d991ccc9ef9eaa96006074e3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-system/csi-node-driver-qvwlf" podUID="9a122a32-e7c8-4162-bccb-4b71d5c37d97" Jan 22 00:42:47.431979 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1196164684.mount: Deactivated successfully. Jan 22 00:42:47.525013 containerd[1953]: time="2026-01-22T00:42:47.524928866Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156880025" Jan 22 00:42:47.529976 containerd[1953]: time="2026-01-22T00:42:47.510513467Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 22 00:42:47.563654 containerd[1953]: time="2026-01-22T00:42:47.563418374Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 22 00:42:47.566232 containerd[1953]: time="2026-01-22T00:42:47.564551615Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 7.298405835s" Jan 22 00:42:47.566232 containerd[1953]: time="2026-01-22T00:42:47.564596419Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Jan 22 00:42:47.566232 containerd[1953]: time="2026-01-22T00:42:47.564851318Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 22 00:42:47.603527 containerd[1953]: time="2026-01-22T00:42:47.603485255Z" level=info msg="CreateContainer within sandbox \"c63053462aa82ed6e7baab6f09e754d8c97a7b66bb3d773af8c9791216251530\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 22 00:42:47.704418 containerd[1953]: time="2026-01-22T00:42:47.704323611Z" level=info msg="Container b97eb6ed571c7a93d3c95fd3a27a83a1950603aafb1c4267794579b8c02e9ef0: CDI devices from CRI Config.CDIDevices: []" Jan 22 00:42:47.707530 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2672174908.mount: Deactivated successfully. Jan 22 00:42:47.799526 containerd[1953]: time="2026-01-22T00:42:47.799477091Z" level=info msg="CreateContainer within sandbox \"c63053462aa82ed6e7baab6f09e754d8c97a7b66bb3d773af8c9791216251530\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"b97eb6ed571c7a93d3c95fd3a27a83a1950603aafb1c4267794579b8c02e9ef0\"" Jan 22 00:42:47.803916 containerd[1953]: time="2026-01-22T00:42:47.803859590Z" level=info msg="StartContainer for \"b97eb6ed571c7a93d3c95fd3a27a83a1950603aafb1c4267794579b8c02e9ef0\"" Jan 22 00:42:47.809912 containerd[1953]: time="2026-01-22T00:42:47.809849485Z" level=info msg="connecting to shim b97eb6ed571c7a93d3c95fd3a27a83a1950603aafb1c4267794579b8c02e9ef0" address="unix:///run/containerd/s/7554fe30d6ba6c430b9553a0ac83eda6938ca13e6e12347653f73d846ed412ac" protocol=ttrpc version=3 Jan 22 00:42:47.973144 systemd[1]: Started cri-containerd-b97eb6ed571c7a93d3c95fd3a27a83a1950603aafb1c4267794579b8c02e9ef0.scope - libcontainer container b97eb6ed571c7a93d3c95fd3a27a83a1950603aafb1c4267794579b8c02e9ef0. 
Jan 22 00:42:48.029000 audit: BPF prog-id=181 op=LOAD Jan 22 00:42:48.032191 kernel: kauditd_printk_skb: 6 callbacks suppressed Jan 22 00:42:48.032258 kernel: audit: type=1334 audit(1769042568.029:573): prog-id=181 op=LOAD Jan 22 00:42:48.034900 kernel: audit: type=1300 audit(1769042568.029:573): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=3913 pid=4381 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:48.029000 audit[4381]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=3913 pid=4381 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:48.029000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6239376562366564353731633761393364336339356664336132376138 Jan 22 00:42:48.044962 kernel: audit: type=1327 audit(1769042568.029:573): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6239376562366564353731633761393364336339356664336132376138 Jan 22 00:42:48.045619 kernel: audit: type=1334 audit(1769042568.033:574): prog-id=182 op=LOAD Jan 22 00:42:48.033000 audit: BPF prog-id=182 op=LOAD Jan 22 00:42:48.033000 audit[4381]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=3913 pid=4381 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:48.053908 kernel: audit: type=1300 audit(1769042568.033:574): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=3913 pid=4381 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:48.054046 kernel: audit: type=1327 audit(1769042568.033:574): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6239376562366564353731633761393364336339356664336132376138 Jan 22 00:42:48.033000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6239376562366564353731633761393364336339356664336132376138 Jan 22 00:42:48.058842 kernel: audit: type=1334 audit(1769042568.033:575): prog-id=182 op=UNLOAD Jan 22 00:42:48.058937 kernel: audit: type=1300 audit(1769042568.033:575): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3913 pid=4381 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:48.033000 audit: BPF prog-id=182 op=UNLOAD Jan 22 00:42:48.033000 audit[4381]: 
SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3913 pid=4381 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:48.069151 kernel: audit: type=1327 audit(1769042568.033:575): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6239376562366564353731633761393364336339356664336132376138 Jan 22 00:42:48.033000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6239376562366564353731633761393364336339356664336132376138 Jan 22 00:42:48.073108 kernel: audit: type=1334 audit(1769042568.033:576): prog-id=181 op=UNLOAD Jan 22 00:42:48.033000 audit: BPF prog-id=181 op=UNLOAD Jan 22 00:42:48.033000 audit[4381]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3913 pid=4381 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:48.033000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6239376562366564353731633761393364336339356664336132376138 Jan 22 00:42:48.033000 audit: BPF prog-id=183 op=LOAD Jan 22 00:42:48.033000 audit[4381]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=3913 pid=4381 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:48.033000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6239376562366564353731633761393364336339356664336132376138 Jan 22 00:42:48.170716 containerd[1953]: time="2026-01-22T00:42:48.170675313Z" level=info msg="StartContainer for \"b97eb6ed571c7a93d3c95fd3a27a83a1950603aafb1c4267794579b8c02e9ef0\" returns successfully" Jan 22 00:42:48.300817 kubelet[3437]: I0122 00:42:48.300010 3437 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-7ff9q" podStartSLOduration=2.220811434 podStartE2EDuration="22.299994403s" podCreationTimestamp="2026-01-22 00:42:26 +0000 UTC" firstStartedPulling="2026-01-22 00:42:27.486563021 +0000 UTC m=+21.718529161" lastFinishedPulling="2026-01-22 00:42:47.565745992 +0000 UTC m=+41.797712130" observedRunningTime="2026-01-22 00:42:48.299649651 +0000 UTC m=+42.531615800" watchObservedRunningTime="2026-01-22 00:42:48.299994403 +0000 UTC m=+42.531960609" Jan 22 00:42:48.498907 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 22 00:42:48.499046 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Jan 22 00:42:49.055966 kubelet[3437]: I0122 00:42:49.055913 3437 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/ae564662-102b-4169-9db4-4f39f121460c-whisker-backend-key-pair\") pod \"ae564662-102b-4169-9db4-4f39f121460c\" (UID: \"ae564662-102b-4169-9db4-4f39f121460c\") " Jan 22 00:42:49.056156 kubelet[3437]: I0122 00:42:49.055994 3437 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vcttl\" (UniqueName: \"kubernetes.io/projected/ae564662-102b-4169-9db4-4f39f121460c-kube-api-access-vcttl\") pod \"ae564662-102b-4169-9db4-4f39f121460c\" (UID: \"ae564662-102b-4169-9db4-4f39f121460c\") " Jan 22 00:42:49.056156 kubelet[3437]: I0122 00:42:49.056026 3437 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ae564662-102b-4169-9db4-4f39f121460c-whisker-ca-bundle\") pod \"ae564662-102b-4169-9db4-4f39f121460c\" (UID: \"ae564662-102b-4169-9db4-4f39f121460c\") " Jan 22 00:42:49.057855 kubelet[3437]: I0122 00:42:49.057613 3437 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ae564662-102b-4169-9db4-4f39f121460c-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "ae564662-102b-4169-9db4-4f39f121460c" (UID: "ae564662-102b-4169-9db4-4f39f121460c"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 00:42:49.064478 systemd[1]: var-lib-kubelet-pods-ae564662\x2d102b\x2d4169\x2d9db4\x2d4f39f121460c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dvcttl.mount: Deactivated successfully. Jan 22 00:42:49.065096 systemd[1]: var-lib-kubelet-pods-ae564662\x2d102b\x2d4169\x2d9db4\x2d4f39f121460c-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jan 22 00:42:49.065399 kubelet[3437]: I0122 00:42:49.065365 3437 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae564662-102b-4169-9db4-4f39f121460c-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "ae564662-102b-4169-9db4-4f39f121460c" (UID: "ae564662-102b-4169-9db4-4f39f121460c"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 00:42:49.065523 kubelet[3437]: I0122 00:42:49.065500 3437 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae564662-102b-4169-9db4-4f39f121460c-kube-api-access-vcttl" (OuterVolumeSpecName: "kube-api-access-vcttl") pod "ae564662-102b-4169-9db4-4f39f121460c" (UID: "ae564662-102b-4169-9db4-4f39f121460c"). InnerVolumeSpecName "kube-api-access-vcttl". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 00:42:49.157171 kubelet[3437]: I0122 00:42:49.157126 3437 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ae564662-102b-4169-9db4-4f39f121460c-whisker-ca-bundle\") on node \"ip-172-31-26-54\" DevicePath \"\"" Jan 22 00:42:49.157171 kubelet[3437]: I0122 00:42:49.157162 3437 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/ae564662-102b-4169-9db4-4f39f121460c-whisker-backend-key-pair\") on node \"ip-172-31-26-54\" DevicePath \"\"" Jan 22 00:42:49.157171 kubelet[3437]: I0122 00:42:49.157172 3437 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-vcttl\" (UniqueName: \"kubernetes.io/projected/ae564662-102b-4169-9db4-4f39f121460c-kube-api-access-vcttl\") on node \"ip-172-31-26-54\" DevicePath \"\"" Jan 22 00:42:49.284419 systemd[1]: Removed slice kubepods-besteffort-podae564662_102b_4169_9db4_4f39f121460c.slice - libcontainer container kubepods-besteffort-podae564662_102b_4169_9db4_4f39f121460c.slice. Jan 22 00:42:49.430696 systemd[1]: Created slice kubepods-besteffort-pod96b1517d_4481_408c_9294_a20121dee9ff.slice - libcontainer container kubepods-besteffort-pod96b1517d_4481_408c_9294_a20121dee9ff.slice. Jan 22 00:42:49.559386 kubelet[3437]: I0122 00:42:49.559297 3437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/96b1517d-4481-408c-9294-a20121dee9ff-whisker-backend-key-pair\") pod \"whisker-65dd85ff-jgxjj\" (UID: \"96b1517d-4481-408c-9294-a20121dee9ff\") " pod="calico-system/whisker-65dd85ff-jgxjj" Jan 22 00:42:49.559386 kubelet[3437]: I0122 00:42:49.559337 3437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7cwt\" (UniqueName: \"kubernetes.io/projected/96b1517d-4481-408c-9294-a20121dee9ff-kube-api-access-q7cwt\") pod \"whisker-65dd85ff-jgxjj\" (UID: \"96b1517d-4481-408c-9294-a20121dee9ff\") " pod="calico-system/whisker-65dd85ff-jgxjj" Jan 22 00:42:49.559386 kubelet[3437]: I0122 00:42:49.559368 3437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/96b1517d-4481-408c-9294-a20121dee9ff-whisker-ca-bundle\") pod \"whisker-65dd85ff-jgxjj\" (UID: \"96b1517d-4481-408c-9294-a20121dee9ff\") " pod="calico-system/whisker-65dd85ff-jgxjj" Jan 22 00:42:49.738895 containerd[1953]: time="2026-01-22T00:42:49.738742414Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-65dd85ff-jgxjj,Uid:96b1517d-4481-408c-9294-a20121dee9ff,Namespace:calico-system,Attempt:0,}" Jan 22 00:42:50.033563 kubelet[3437]: I0122 00:42:50.033458 3437 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae564662-102b-4169-9db4-4f39f121460c" path="/var/lib/kubelet/pods/ae564662-102b-4169-9db4-4f39f121460c/volumes" Jan 22 00:42:50.359424 (udev-worker)[4441]: Network interface NamePolicy= disabled on kernel command line. 
Jan 22 00:42:50.367207 systemd-networkd[1860]: cali3cd2fcf1209: Link UP Jan 22 00:42:50.367501 systemd-networkd[1860]: cali3cd2fcf1209: Gained carrier Jan 22 00:42:50.405147 containerd[1953]: 2026-01-22 00:42:49.790 [INFO][4500] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 22 00:42:50.405147 containerd[1953]: 2026-01-22 00:42:49.853 [INFO][4500] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--26--54-k8s-whisker--65dd85ff--jgxjj-eth0 whisker-65dd85ff- calico-system 96b1517d-4481-408c-9294-a20121dee9ff 909 0 2026-01-22 00:42:49 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:65dd85ff projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ip-172-31-26-54 whisker-65dd85ff-jgxjj eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali3cd2fcf1209 [] [] }} ContainerID="e2655a2f3b228be3185effac57ef66f53669bfdbc1d044a453e223cd2b9989cf" Namespace="calico-system" Pod="whisker-65dd85ff-jgxjj" WorkloadEndpoint="ip--172--31--26--54-k8s-whisker--65dd85ff--jgxjj-" Jan 22 00:42:50.405147 containerd[1953]: 2026-01-22 00:42:49.853 [INFO][4500] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e2655a2f3b228be3185effac57ef66f53669bfdbc1d044a453e223cd2b9989cf" Namespace="calico-system" Pod="whisker-65dd85ff-jgxjj" WorkloadEndpoint="ip--172--31--26--54-k8s-whisker--65dd85ff--jgxjj-eth0" Jan 22 00:42:50.405147 containerd[1953]: 2026-01-22 00:42:50.220 [INFO][4508] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e2655a2f3b228be3185effac57ef66f53669bfdbc1d044a453e223cd2b9989cf" HandleID="k8s-pod-network.e2655a2f3b228be3185effac57ef66f53669bfdbc1d044a453e223cd2b9989cf" Workload="ip--172--31--26--54-k8s-whisker--65dd85ff--jgxjj-eth0" Jan 22 00:42:50.405713 containerd[1953]: 2026-01-22 00:42:50.225 [INFO][4508] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="e2655a2f3b228be3185effac57ef66f53669bfdbc1d044a453e223cd2b9989cf" HandleID="k8s-pod-network.e2655a2f3b228be3185effac57ef66f53669bfdbc1d044a453e223cd2b9989cf" Workload="ip--172--31--26--54-k8s-whisker--65dd85ff--jgxjj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004fc70), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-26-54", "pod":"whisker-65dd85ff-jgxjj", "timestamp":"2026-01-22 00:42:50.22082055 +0000 UTC"}, Hostname:"ip-172-31-26-54", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 22 00:42:50.405713 containerd[1953]: 2026-01-22 00:42:50.226 [INFO][4508] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 22 00:42:50.405713 containerd[1953]: 2026-01-22 00:42:50.228 [INFO][4508] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 22 00:42:50.405713 containerd[1953]: 2026-01-22 00:42:50.230 [INFO][4508] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-26-54' Jan 22 00:42:50.405713 containerd[1953]: 2026-01-22 00:42:50.253 [INFO][4508] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e2655a2f3b228be3185effac57ef66f53669bfdbc1d044a453e223cd2b9989cf" host="ip-172-31-26-54" Jan 22 00:42:50.405713 containerd[1953]: 2026-01-22 00:42:50.269 [INFO][4508] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-26-54" Jan 22 00:42:50.405713 containerd[1953]: 2026-01-22 00:42:50.279 [INFO][4508] ipam/ipam.go 511: Trying affinity for 192.168.74.0/26 host="ip-172-31-26-54" Jan 22 00:42:50.405713 containerd[1953]: 2026-01-22 00:42:50.284 [INFO][4508] ipam/ipam.go 158: Attempting to load block cidr=192.168.74.0/26 host="ip-172-31-26-54" Jan 22 00:42:50.405713 containerd[1953]: 2026-01-22 00:42:50.289 [INFO][4508] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.74.0/26 host="ip-172-31-26-54" Jan 22 00:42:50.405713 containerd[1953]: 2026-01-22 00:42:50.289 [INFO][4508] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.74.0/26 handle="k8s-pod-network.e2655a2f3b228be3185effac57ef66f53669bfdbc1d044a453e223cd2b9989cf" host="ip-172-31-26-54" Jan 22 00:42:50.406191 containerd[1953]: 2026-01-22 00:42:50.292 [INFO][4508] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.e2655a2f3b228be3185effac57ef66f53669bfdbc1d044a453e223cd2b9989cf Jan 22 00:42:50.406191 containerd[1953]: 2026-01-22 00:42:50.298 [INFO][4508] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.74.0/26 handle="k8s-pod-network.e2655a2f3b228be3185effac57ef66f53669bfdbc1d044a453e223cd2b9989cf" host="ip-172-31-26-54" Jan 22 00:42:50.406191 containerd[1953]: 2026-01-22 00:42:50.307 [INFO][4508] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.74.1/26] block=192.168.74.0/26 handle="k8s-pod-network.e2655a2f3b228be3185effac57ef66f53669bfdbc1d044a453e223cd2b9989cf" host="ip-172-31-26-54" Jan 22 00:42:50.406191 containerd[1953]: 2026-01-22 00:42:50.307 [INFO][4508] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.74.1/26] handle="k8s-pod-network.e2655a2f3b228be3185effac57ef66f53669bfdbc1d044a453e223cd2b9989cf" host="ip-172-31-26-54" Jan 22 00:42:50.406191 containerd[1953]: 2026-01-22 00:42:50.308 [INFO][4508] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 22 00:42:50.406191 containerd[1953]: 2026-01-22 00:42:50.308 [INFO][4508] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.74.1/26] IPv6=[] ContainerID="e2655a2f3b228be3185effac57ef66f53669bfdbc1d044a453e223cd2b9989cf" HandleID="k8s-pod-network.e2655a2f3b228be3185effac57ef66f53669bfdbc1d044a453e223cd2b9989cf" Workload="ip--172--31--26--54-k8s-whisker--65dd85ff--jgxjj-eth0" Jan 22 00:42:50.408723 containerd[1953]: 2026-01-22 00:42:50.326 [INFO][4500] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e2655a2f3b228be3185effac57ef66f53669bfdbc1d044a453e223cd2b9989cf" Namespace="calico-system" Pod="whisker-65dd85ff-jgxjj" WorkloadEndpoint="ip--172--31--26--54-k8s-whisker--65dd85ff--jgxjj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--54-k8s-whisker--65dd85ff--jgxjj-eth0", GenerateName:"whisker-65dd85ff-", Namespace:"calico-system", SelfLink:"", UID:"96b1517d-4481-408c-9294-a20121dee9ff", ResourceVersion:"909", Generation:0, CreationTimestamp:time.Date(2026, time.January, 22, 0, 42, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"65dd85ff", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-54", ContainerID:"", Pod:"whisker-65dd85ff-jgxjj", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.74.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali3cd2fcf1209", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 22 00:42:50.408723 containerd[1953]: 2026-01-22 00:42:50.326 [INFO][4500] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.74.1/32] ContainerID="e2655a2f3b228be3185effac57ef66f53669bfdbc1d044a453e223cd2b9989cf" Namespace="calico-system" Pod="whisker-65dd85ff-jgxjj" WorkloadEndpoint="ip--172--31--26--54-k8s-whisker--65dd85ff--jgxjj-eth0" Jan 22 00:42:50.412103 containerd[1953]: 2026-01-22 00:42:50.326 [INFO][4500] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3cd2fcf1209 ContainerID="e2655a2f3b228be3185effac57ef66f53669bfdbc1d044a453e223cd2b9989cf" Namespace="calico-system" Pod="whisker-65dd85ff-jgxjj" WorkloadEndpoint="ip--172--31--26--54-k8s-whisker--65dd85ff--jgxjj-eth0" Jan 22 00:42:50.412103 containerd[1953]: 2026-01-22 00:42:50.368 [INFO][4500] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e2655a2f3b228be3185effac57ef66f53669bfdbc1d044a453e223cd2b9989cf" Namespace="calico-system" Pod="whisker-65dd85ff-jgxjj" WorkloadEndpoint="ip--172--31--26--54-k8s-whisker--65dd85ff--jgxjj-eth0" Jan 22 00:42:50.412199 containerd[1953]: 2026-01-22 00:42:50.368 [INFO][4500] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e2655a2f3b228be3185effac57ef66f53669bfdbc1d044a453e223cd2b9989cf" Namespace="calico-system" Pod="whisker-65dd85ff-jgxjj" 
WorkloadEndpoint="ip--172--31--26--54-k8s-whisker--65dd85ff--jgxjj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--54-k8s-whisker--65dd85ff--jgxjj-eth0", GenerateName:"whisker-65dd85ff-", Namespace:"calico-system", SelfLink:"", UID:"96b1517d-4481-408c-9294-a20121dee9ff", ResourceVersion:"909", Generation:0, CreationTimestamp:time.Date(2026, time.January, 22, 0, 42, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"65dd85ff", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-54", ContainerID:"e2655a2f3b228be3185effac57ef66f53669bfdbc1d044a453e223cd2b9989cf", Pod:"whisker-65dd85ff-jgxjj", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.74.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali3cd2fcf1209", MAC:"d2:4b:c6:e9:54:8b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 22 00:42:50.412298 containerd[1953]: 2026-01-22 00:42:50.389 [INFO][4500] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e2655a2f3b228be3185effac57ef66f53669bfdbc1d044a453e223cd2b9989cf" Namespace="calico-system" Pod="whisker-65dd85ff-jgxjj" WorkloadEndpoint="ip--172--31--26--54-k8s-whisker--65dd85ff--jgxjj-eth0" Jan 22 00:42:50.636802 containerd[1953]: time="2026-01-22T00:42:50.636652868Z" level=info msg="connecting to shim e2655a2f3b228be3185effac57ef66f53669bfdbc1d044a453e223cd2b9989cf" address="unix:///run/containerd/s/140460fd97dbb4facd3474e89d02c24088ed813b5223e8f6d975c8321cb3ac3e" namespace=k8s.io protocol=ttrpc version=3 Jan 22 00:42:50.674083 systemd[1]: Started cri-containerd-e2655a2f3b228be3185effac57ef66f53669bfdbc1d044a453e223cd2b9989cf.scope - libcontainer container e2655a2f3b228be3185effac57ef66f53669bfdbc1d044a453e223cd2b9989cf. 
Jan 22 00:42:50.695000 audit: BPF prog-id=184 op=LOAD Jan 22 00:42:50.696000 audit: BPF prog-id=185 op=LOAD Jan 22 00:42:50.696000 audit[4634]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4622 pid=4634 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:50.696000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6532363535613266336232323862653331383565666661633537656636 Jan 22 00:42:50.696000 audit: BPF prog-id=185 op=UNLOAD Jan 22 00:42:50.696000 audit[4634]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4622 pid=4634 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:50.696000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6532363535613266336232323862653331383565666661633537656636 Jan 22 00:42:50.696000 audit: BPF prog-id=186 op=LOAD Jan 22 00:42:50.696000 audit[4634]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4622 pid=4634 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:50.696000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6532363535613266336232323862653331383565666661633537656636 Jan 22 00:42:50.696000 audit: BPF prog-id=187 op=LOAD Jan 22 00:42:50.696000 audit[4634]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=4622 pid=4634 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:50.696000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6532363535613266336232323862653331383565666661633537656636 Jan 22 00:42:50.696000 audit: BPF prog-id=187 op=UNLOAD Jan 22 00:42:50.696000 audit[4634]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4622 pid=4634 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:50.696000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6532363535613266336232323862653331383565666661633537656636 Jan 22 00:42:50.696000 audit: BPF prog-id=186 op=UNLOAD Jan 22 00:42:50.696000 audit[4634]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4622 pid=4634 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:50.696000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6532363535613266336232323862653331383565666661633537656636 Jan 22 00:42:50.696000 audit: BPF prog-id=188 op=LOAD Jan 22 00:42:50.696000 audit[4634]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=4622 pid=4634 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:50.696000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6532363535613266336232323862653331383565666661633537656636 Jan 22 00:42:50.750557 containerd[1953]: time="2026-01-22T00:42:50.750498976Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-65dd85ff-jgxjj,Uid:96b1517d-4481-408c-9294-a20121dee9ff,Namespace:calico-system,Attempt:0,} returns sandbox id \"e2655a2f3b228be3185effac57ef66f53669bfdbc1d044a453e223cd2b9989cf\"" Jan 22 00:42:50.757023 containerd[1953]: time="2026-01-22T00:42:50.756866248Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 22 00:42:51.048276 containerd[1953]: time="2026-01-22T00:42:51.048146280Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 22 00:42:51.050793 containerd[1953]: time="2026-01-22T00:42:51.050745621Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 22 00:42:51.050971 containerd[1953]: time="2026-01-22T00:42:51.050754049Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 22 00:42:51.051063 kubelet[3437]: E0122 00:42:51.050998 3437 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 22 00:42:51.051534 kubelet[3437]: E0122 00:42:51.051073 3437 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 22 00:42:51.071037 kubelet[3437]: E0122 00:42:51.070977 3437 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:1493b4b6e39948d398b0f1f06536a205,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-q7cwt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-65dd85ff-jgxjj_calico-system(96b1517d-4481-408c-9294-a20121dee9ff): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 22 00:42:51.074077 containerd[1953]: time="2026-01-22T00:42:51.074039984Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 22 00:42:51.328450 containerd[1953]: time="2026-01-22T00:42:51.328416472Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 22 00:42:51.330699 containerd[1953]: time="2026-01-22T00:42:51.330626672Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 22 00:42:51.330851 containerd[1953]: time="2026-01-22T00:42:51.330732701Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 22 00:42:51.330949 kubelet[3437]: E0122 00:42:51.330900 3437 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 22 00:42:51.331019 kubelet[3437]: E0122 00:42:51.330947 3437 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 22 00:42:51.331151 kubelet[3437]: E0122 00:42:51.331098 3437 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-q7cwt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-65dd85ff-jgxjj_calico-system(96b1517d-4481-408c-9294-a20121dee9ff): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 22 00:42:51.332679 kubelet[3437]: E0122 00:42:51.332602 3437 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-65dd85ff-jgxjj" podUID="96b1517d-4481-408c-9294-a20121dee9ff" Jan 22 00:42:52.036572 containerd[1953]: time="2026-01-22T00:42:52.036537672Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-4rjzm,Uid:ffc8de62-f696-4cfe-ab23-12f34741b8d0,Namespace:calico-system,Attempt:0,}" Jan 22 00:42:52.036948 containerd[1953]: time="2026-01-22T00:42:52.036544835Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6bf9469c9d-75c6s,Uid:0db24862-6144-45b1-8f39-39a11a3c80dd,Namespace:calico-apiserver,Attempt:0,}" Jan 22 00:42:52.039101 systemd-networkd[1860]: cali3cd2fcf1209: Gained IPv6LL Jan 22 00:42:52.206231 systemd-networkd[1860]: calia72685caf17: Link UP Jan 22 
00:42:52.207907 systemd-networkd[1860]: calia72685caf17: Gained carrier Jan 22 00:42:52.227328 containerd[1953]: 2026-01-22 00:42:52.095 [INFO][4684] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 22 00:42:52.227328 containerd[1953]: 2026-01-22 00:42:52.114 [INFO][4684] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--26--54-k8s-goldmane--666569f655--4rjzm-eth0 goldmane-666569f655- calico-system ffc8de62-f696-4cfe-ab23-12f34741b8d0 836 0 2026-01-22 00:42:25 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ip-172-31-26-54 goldmane-666569f655-4rjzm eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calia72685caf17 [] [] }} ContainerID="ee69b0fb931eeb91fe4c657ed1d019d7c15bb05163b8da1d9c26892772a7191f" Namespace="calico-system" Pod="goldmane-666569f655-4rjzm" WorkloadEndpoint="ip--172--31--26--54-k8s-goldmane--666569f655--4rjzm-" Jan 22 00:42:52.227328 containerd[1953]: 2026-01-22 00:42:52.114 [INFO][4684] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ee69b0fb931eeb91fe4c657ed1d019d7c15bb05163b8da1d9c26892772a7191f" Namespace="calico-system" Pod="goldmane-666569f655-4rjzm" WorkloadEndpoint="ip--172--31--26--54-k8s-goldmane--666569f655--4rjzm-eth0" Jan 22 00:42:52.227328 containerd[1953]: 2026-01-22 00:42:52.154 [INFO][4711] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ee69b0fb931eeb91fe4c657ed1d019d7c15bb05163b8da1d9c26892772a7191f" HandleID="k8s-pod-network.ee69b0fb931eeb91fe4c657ed1d019d7c15bb05163b8da1d9c26892772a7191f" Workload="ip--172--31--26--54-k8s-goldmane--666569f655--4rjzm-eth0" Jan 22 00:42:52.227641 containerd[1953]: 2026-01-22 00:42:52.154 [INFO][4711] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="ee69b0fb931eeb91fe4c657ed1d019d7c15bb05163b8da1d9c26892772a7191f" HandleID="k8s-pod-network.ee69b0fb931eeb91fe4c657ed1d019d7c15bb05163b8da1d9c26892772a7191f" Workload="ip--172--31--26--54-k8s-goldmane--666569f655--4rjzm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d55a0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-26-54", "pod":"goldmane-666569f655-4rjzm", "timestamp":"2026-01-22 00:42:52.154593124 +0000 UTC"}, Hostname:"ip-172-31-26-54", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 22 00:42:52.227641 containerd[1953]: 2026-01-22 00:42:52.155 [INFO][4711] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 22 00:42:52.227641 containerd[1953]: 2026-01-22 00:42:52.155 [INFO][4711] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 22 00:42:52.227641 containerd[1953]: 2026-01-22 00:42:52.155 [INFO][4711] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-26-54' Jan 22 00:42:52.227641 containerd[1953]: 2026-01-22 00:42:52.162 [INFO][4711] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ee69b0fb931eeb91fe4c657ed1d019d7c15bb05163b8da1d9c26892772a7191f" host="ip-172-31-26-54" Jan 22 00:42:52.227641 containerd[1953]: 2026-01-22 00:42:52.167 [INFO][4711] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-26-54" Jan 22 00:42:52.227641 containerd[1953]: 2026-01-22 00:42:52.172 [INFO][4711] ipam/ipam.go 511: Trying affinity for 192.168.74.0/26 host="ip-172-31-26-54" Jan 22 00:42:52.227641 containerd[1953]: 2026-01-22 00:42:52.174 [INFO][4711] ipam/ipam.go 158: Attempting to load block cidr=192.168.74.0/26 host="ip-172-31-26-54" Jan 22 00:42:52.227641 containerd[1953]: 2026-01-22 00:42:52.176 [INFO][4711] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.74.0/26 host="ip-172-31-26-54" Jan 22 00:42:52.227641 containerd[1953]: 2026-01-22 00:42:52.176 [INFO][4711] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.74.0/26 handle="k8s-pod-network.ee69b0fb931eeb91fe4c657ed1d019d7c15bb05163b8da1d9c26892772a7191f" host="ip-172-31-26-54" Jan 22 00:42:52.229196 containerd[1953]: 2026-01-22 00:42:52.178 [INFO][4711] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.ee69b0fb931eeb91fe4c657ed1d019d7c15bb05163b8da1d9c26892772a7191f Jan 22 00:42:52.229196 containerd[1953]: 2026-01-22 00:42:52.183 [INFO][4711] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.74.0/26 handle="k8s-pod-network.ee69b0fb931eeb91fe4c657ed1d019d7c15bb05163b8da1d9c26892772a7191f" host="ip-172-31-26-54" Jan 22 00:42:52.229196 containerd[1953]: 2026-01-22 00:42:52.191 [INFO][4711] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.74.2/26] block=192.168.74.0/26 handle="k8s-pod-network.ee69b0fb931eeb91fe4c657ed1d019d7c15bb05163b8da1d9c26892772a7191f" host="ip-172-31-26-54" Jan 22 00:42:52.229196 containerd[1953]: 2026-01-22 00:42:52.191 [INFO][4711] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.74.2/26] handle="k8s-pod-network.ee69b0fb931eeb91fe4c657ed1d019d7c15bb05163b8da1d9c26892772a7191f" host="ip-172-31-26-54" Jan 22 00:42:52.229196 containerd[1953]: 2026-01-22 00:42:52.192 [INFO][4711] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 22 00:42:52.229196 containerd[1953]: 2026-01-22 00:42:52.192 [INFO][4711] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.74.2/26] IPv6=[] ContainerID="ee69b0fb931eeb91fe4c657ed1d019d7c15bb05163b8da1d9c26892772a7191f" HandleID="k8s-pod-network.ee69b0fb931eeb91fe4c657ed1d019d7c15bb05163b8da1d9c26892772a7191f" Workload="ip--172--31--26--54-k8s-goldmane--666569f655--4rjzm-eth0" Jan 22 00:42:52.229580 containerd[1953]: 2026-01-22 00:42:52.200 [INFO][4684] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ee69b0fb931eeb91fe4c657ed1d019d7c15bb05163b8da1d9c26892772a7191f" Namespace="calico-system" Pod="goldmane-666569f655-4rjzm" WorkloadEndpoint="ip--172--31--26--54-k8s-goldmane--666569f655--4rjzm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--54-k8s-goldmane--666569f655--4rjzm-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"ffc8de62-f696-4cfe-ab23-12f34741b8d0", ResourceVersion:"836", Generation:0, CreationTimestamp:time.Date(2026, time.January, 22, 0, 42, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-54", ContainerID:"", Pod:"goldmane-666569f655-4rjzm", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.74.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calia72685caf17", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 22 00:42:52.229580 containerd[1953]: 2026-01-22 00:42:52.200 [INFO][4684] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.74.2/32] ContainerID="ee69b0fb931eeb91fe4c657ed1d019d7c15bb05163b8da1d9c26892772a7191f" Namespace="calico-system" Pod="goldmane-666569f655-4rjzm" WorkloadEndpoint="ip--172--31--26--54-k8s-goldmane--666569f655--4rjzm-eth0" Jan 22 00:42:52.229847 containerd[1953]: 2026-01-22 00:42:52.200 [INFO][4684] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia72685caf17 ContainerID="ee69b0fb931eeb91fe4c657ed1d019d7c15bb05163b8da1d9c26892772a7191f" Namespace="calico-system" Pod="goldmane-666569f655-4rjzm" WorkloadEndpoint="ip--172--31--26--54-k8s-goldmane--666569f655--4rjzm-eth0" Jan 22 00:42:52.229847 containerd[1953]: 2026-01-22 00:42:52.207 [INFO][4684] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ee69b0fb931eeb91fe4c657ed1d019d7c15bb05163b8da1d9c26892772a7191f" Namespace="calico-system" Pod="goldmane-666569f655-4rjzm" WorkloadEndpoint="ip--172--31--26--54-k8s-goldmane--666569f655--4rjzm-eth0" Jan 22 00:42:52.230007 containerd[1953]: 2026-01-22 00:42:52.208 [INFO][4684] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ee69b0fb931eeb91fe4c657ed1d019d7c15bb05163b8da1d9c26892772a7191f" Namespace="calico-system" Pod="goldmane-666569f655-4rjzm" 
WorkloadEndpoint="ip--172--31--26--54-k8s-goldmane--666569f655--4rjzm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--54-k8s-goldmane--666569f655--4rjzm-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"ffc8de62-f696-4cfe-ab23-12f34741b8d0", ResourceVersion:"836", Generation:0, CreationTimestamp:time.Date(2026, time.January, 22, 0, 42, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-54", ContainerID:"ee69b0fb931eeb91fe4c657ed1d019d7c15bb05163b8da1d9c26892772a7191f", Pod:"goldmane-666569f655-4rjzm", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.74.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calia72685caf17", MAC:"4a:26:7f:23:bc:c0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 22 00:42:52.230154 containerd[1953]: 2026-01-22 00:42:52.222 [INFO][4684] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ee69b0fb931eeb91fe4c657ed1d019d7c15bb05163b8da1d9c26892772a7191f" Namespace="calico-system" Pod="goldmane-666569f655-4rjzm" WorkloadEndpoint="ip--172--31--26--54-k8s-goldmane--666569f655--4rjzm-eth0" Jan 22 00:42:52.260896 containerd[1953]: time="2026-01-22T00:42:52.260620530Z" level=info msg="connecting to shim ee69b0fb931eeb91fe4c657ed1d019d7c15bb05163b8da1d9c26892772a7191f" address="unix:///run/containerd/s/109f36f380902a101f522129c5356f7c1423118df7d98885f8c7b9be35c25ceb" namespace=k8s.io protocol=ttrpc version=3 Jan 22 00:42:52.296108 systemd[1]: Started cri-containerd-ee69b0fb931eeb91fe4c657ed1d019d7c15bb05163b8da1d9c26892772a7191f.scope - libcontainer container ee69b0fb931eeb91fe4c657ed1d019d7c15bb05163b8da1d9c26892772a7191f. 
Jan 22 00:42:52.311853 systemd-networkd[1860]: califf8e7b8b89e: Link UP Jan 22 00:42:52.313190 systemd-networkd[1860]: califf8e7b8b89e: Gained carrier Jan 22 00:42:52.326000 audit: BPF prog-id=189 op=LOAD Jan 22 00:42:52.327000 audit: BPF prog-id=190 op=LOAD Jan 22 00:42:52.327000 audit[4748]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4737 pid=4748 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:52.327000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6565363962306662393331656562393166653463363537656431643031 Jan 22 00:42:52.327000 audit: BPF prog-id=190 op=UNLOAD Jan 22 00:42:52.327000 audit[4748]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4737 pid=4748 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:52.327000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6565363962306662393331656562393166653463363537656431643031 Jan 22 00:42:52.328000 audit: BPF prog-id=191 op=LOAD Jan 22 00:42:52.328000 audit[4748]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4737 pid=4748 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:52.328000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6565363962306662393331656562393166653463363537656431643031 Jan 22 00:42:52.328000 audit: BPF prog-id=192 op=LOAD Jan 22 00:42:52.328000 audit[4748]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=4737 pid=4748 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:52.328000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6565363962306662393331656562393166653463363537656431643031 Jan 22 00:42:52.328000 audit: BPF prog-id=192 op=UNLOAD Jan 22 00:42:52.328000 audit[4748]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4737 pid=4748 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:52.328000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6565363962306662393331656562393166653463363537656431643031 Jan 22 00:42:52.328000 audit: BPF prog-id=191 op=UNLOAD Jan 22 00:42:52.328000 audit[4748]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4737 pid=4748 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:52.328000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6565363962306662393331656562393166653463363537656431643031 Jan 22 00:42:52.328000 audit: BPF prog-id=193 op=LOAD Jan 22 00:42:52.328000 audit[4748]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=4737 pid=4748 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:52.328000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6565363962306662393331656562393166653463363537656431643031 Jan 22 00:42:52.332106 containerd[1953]: 2026-01-22 00:42:52.093 [INFO][4688] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 22 00:42:52.332106 containerd[1953]: 2026-01-22 00:42:52.112 [INFO][4688] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--26--54-k8s-calico--apiserver--6bf9469c9d--75c6s-eth0 calico-apiserver-6bf9469c9d- calico-apiserver 0db24862-6144-45b1-8f39-39a11a3c80dd 835 0 2026-01-22 00:42:22 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6bf9469c9d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-26-54 calico-apiserver-6bf9469c9d-75c6s eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] califf8e7b8b89e [] [] }} ContainerID="c1faebbd03d9a493ce36ba2be88b09ac0ef188ca531236ac0d91dd6f4f78a8c4" Namespace="calico-apiserver" Pod="calico-apiserver-6bf9469c9d-75c6s" WorkloadEndpoint="ip--172--31--26--54-k8s-calico--apiserver--6bf9469c9d--75c6s-" Jan 22 00:42:52.332106 containerd[1953]: 2026-01-22 00:42:52.112 [INFO][4688] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c1faebbd03d9a493ce36ba2be88b09ac0ef188ca531236ac0d91dd6f4f78a8c4" Namespace="calico-apiserver" Pod="calico-apiserver-6bf9469c9d-75c6s" WorkloadEndpoint="ip--172--31--26--54-k8s-calico--apiserver--6bf9469c9d--75c6s-eth0" Jan 22 00:42:52.332106 containerd[1953]: 2026-01-22 00:42:52.155 [INFO][4709] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c1faebbd03d9a493ce36ba2be88b09ac0ef188ca531236ac0d91dd6f4f78a8c4" HandleID="k8s-pod-network.c1faebbd03d9a493ce36ba2be88b09ac0ef188ca531236ac0d91dd6f4f78a8c4" 
Workload="ip--172--31--26--54-k8s-calico--apiserver--6bf9469c9d--75c6s-eth0" Jan 22 00:42:52.332331 containerd[1953]: 2026-01-22 00:42:52.156 [INFO][4709] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="c1faebbd03d9a493ce36ba2be88b09ac0ef188ca531236ac0d91dd6f4f78a8c4" HandleID="k8s-pod-network.c1faebbd03d9a493ce36ba2be88b09ac0ef188ca531236ac0d91dd6f4f78a8c4" Workload="ip--172--31--26--54-k8s-calico--apiserver--6bf9469c9d--75c6s-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5770), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-26-54", "pod":"calico-apiserver-6bf9469c9d-75c6s", "timestamp":"2026-01-22 00:42:52.155720432 +0000 UTC"}, Hostname:"ip-172-31-26-54", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 22 00:42:52.332331 containerd[1953]: 2026-01-22 00:42:52.156 [INFO][4709] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 22 00:42:52.332331 containerd[1953]: 2026-01-22 00:42:52.192 [INFO][4709] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 22 00:42:52.332331 containerd[1953]: 2026-01-22 00:42:52.192 [INFO][4709] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-26-54' Jan 22 00:42:52.332331 containerd[1953]: 2026-01-22 00:42:52.265 [INFO][4709] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c1faebbd03d9a493ce36ba2be88b09ac0ef188ca531236ac0d91dd6f4f78a8c4" host="ip-172-31-26-54" Jan 22 00:42:52.332331 containerd[1953]: 2026-01-22 00:42:52.273 [INFO][4709] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-26-54" Jan 22 00:42:52.332331 containerd[1953]: 2026-01-22 00:42:52.279 [INFO][4709] ipam/ipam.go 511: Trying affinity for 192.168.74.0/26 host="ip-172-31-26-54" Jan 22 00:42:52.332331 containerd[1953]: 2026-01-22 00:42:52.282 [INFO][4709] ipam/ipam.go 158: Attempting to load block cidr=192.168.74.0/26 host="ip-172-31-26-54" Jan 22 00:42:52.332331 containerd[1953]: 2026-01-22 00:42:52.286 [INFO][4709] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.74.0/26 host="ip-172-31-26-54" Jan 22 00:42:52.332627 containerd[1953]: 2026-01-22 00:42:52.286 [INFO][4709] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.74.0/26 handle="k8s-pod-network.c1faebbd03d9a493ce36ba2be88b09ac0ef188ca531236ac0d91dd6f4f78a8c4" host="ip-172-31-26-54" Jan 22 00:42:52.332627 containerd[1953]: 2026-01-22 00:42:52.288 [INFO][4709] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.c1faebbd03d9a493ce36ba2be88b09ac0ef188ca531236ac0d91dd6f4f78a8c4 Jan 22 00:42:52.332627 containerd[1953]: 2026-01-22 00:42:52.296 [INFO][4709] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.74.0/26 handle="k8s-pod-network.c1faebbd03d9a493ce36ba2be88b09ac0ef188ca531236ac0d91dd6f4f78a8c4" host="ip-172-31-26-54" Jan 22 00:42:52.332627 containerd[1953]: 2026-01-22 00:42:52.303 [INFO][4709] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.74.3/26] block=192.168.74.0/26 handle="k8s-pod-network.c1faebbd03d9a493ce36ba2be88b09ac0ef188ca531236ac0d91dd6f4f78a8c4" host="ip-172-31-26-54" Jan 22 00:42:52.332627 containerd[1953]: 2026-01-22 00:42:52.304 [INFO][4709] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.74.3/26] handle="k8s-pod-network.c1faebbd03d9a493ce36ba2be88b09ac0ef188ca531236ac0d91dd6f4f78a8c4" 
host="ip-172-31-26-54" Jan 22 00:42:52.332627 containerd[1953]: 2026-01-22 00:42:52.305 [INFO][4709] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 22 00:42:52.332627 containerd[1953]: 2026-01-22 00:42:52.305 [INFO][4709] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.74.3/26] IPv6=[] ContainerID="c1faebbd03d9a493ce36ba2be88b09ac0ef188ca531236ac0d91dd6f4f78a8c4" HandleID="k8s-pod-network.c1faebbd03d9a493ce36ba2be88b09ac0ef188ca531236ac0d91dd6f4f78a8c4" Workload="ip--172--31--26--54-k8s-calico--apiserver--6bf9469c9d--75c6s-eth0" Jan 22 00:42:52.332793 containerd[1953]: 2026-01-22 00:42:52.307 [INFO][4688] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c1faebbd03d9a493ce36ba2be88b09ac0ef188ca531236ac0d91dd6f4f78a8c4" Namespace="calico-apiserver" Pod="calico-apiserver-6bf9469c9d-75c6s" WorkloadEndpoint="ip--172--31--26--54-k8s-calico--apiserver--6bf9469c9d--75c6s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--54-k8s-calico--apiserver--6bf9469c9d--75c6s-eth0", GenerateName:"calico-apiserver-6bf9469c9d-", Namespace:"calico-apiserver", SelfLink:"", UID:"0db24862-6144-45b1-8f39-39a11a3c80dd", ResourceVersion:"835", Generation:0, CreationTimestamp:time.Date(2026, time.January, 22, 0, 42, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6bf9469c9d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-54", ContainerID:"", Pod:"calico-apiserver-6bf9469c9d-75c6s", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.74.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califf8e7b8b89e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 22 00:42:52.332854 containerd[1953]: 2026-01-22 00:42:52.307 [INFO][4688] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.74.3/32] ContainerID="c1faebbd03d9a493ce36ba2be88b09ac0ef188ca531236ac0d91dd6f4f78a8c4" Namespace="calico-apiserver" Pod="calico-apiserver-6bf9469c9d-75c6s" WorkloadEndpoint="ip--172--31--26--54-k8s-calico--apiserver--6bf9469c9d--75c6s-eth0" Jan 22 00:42:52.332854 containerd[1953]: 2026-01-22 00:42:52.307 [INFO][4688] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califf8e7b8b89e ContainerID="c1faebbd03d9a493ce36ba2be88b09ac0ef188ca531236ac0d91dd6f4f78a8c4" Namespace="calico-apiserver" Pod="calico-apiserver-6bf9469c9d-75c6s" WorkloadEndpoint="ip--172--31--26--54-k8s-calico--apiserver--6bf9469c9d--75c6s-eth0" Jan 22 00:42:52.332854 containerd[1953]: 2026-01-22 00:42:52.309 [INFO][4688] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c1faebbd03d9a493ce36ba2be88b09ac0ef188ca531236ac0d91dd6f4f78a8c4" Namespace="calico-apiserver" Pod="calico-apiserver-6bf9469c9d-75c6s" 
WorkloadEndpoint="ip--172--31--26--54-k8s-calico--apiserver--6bf9469c9d--75c6s-eth0" Jan 22 00:42:52.332964 containerd[1953]: 2026-01-22 00:42:52.309 [INFO][4688] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c1faebbd03d9a493ce36ba2be88b09ac0ef188ca531236ac0d91dd6f4f78a8c4" Namespace="calico-apiserver" Pod="calico-apiserver-6bf9469c9d-75c6s" WorkloadEndpoint="ip--172--31--26--54-k8s-calico--apiserver--6bf9469c9d--75c6s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--54-k8s-calico--apiserver--6bf9469c9d--75c6s-eth0", GenerateName:"calico-apiserver-6bf9469c9d-", Namespace:"calico-apiserver", SelfLink:"", UID:"0db24862-6144-45b1-8f39-39a11a3c80dd", ResourceVersion:"835", Generation:0, CreationTimestamp:time.Date(2026, time.January, 22, 0, 42, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6bf9469c9d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-54", ContainerID:"c1faebbd03d9a493ce36ba2be88b09ac0ef188ca531236ac0d91dd6f4f78a8c4", Pod:"calico-apiserver-6bf9469c9d-75c6s", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.74.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califf8e7b8b89e", MAC:"ca:e0:93:a1:80:c7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 22 00:42:52.333022 containerd[1953]: 2026-01-22 00:42:52.324 [INFO][4688] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c1faebbd03d9a493ce36ba2be88b09ac0ef188ca531236ac0d91dd6f4f78a8c4" Namespace="calico-apiserver" Pod="calico-apiserver-6bf9469c9d-75c6s" WorkloadEndpoint="ip--172--31--26--54-k8s-calico--apiserver--6bf9469c9d--75c6s-eth0" Jan 22 00:42:52.351968 kubelet[3437]: E0122 00:42:52.351852 3437 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-65dd85ff-jgxjj" podUID="96b1517d-4481-408c-9294-a20121dee9ff" Jan 22 00:42:52.401834 containerd[1953]: time="2026-01-22T00:42:52.400991324Z" level=info msg="connecting to shim 
c1faebbd03d9a493ce36ba2be88b09ac0ef188ca531236ac0d91dd6f4f78a8c4" address="unix:///run/containerd/s/67de858a9fdd84247b3f78213958565be2b743448e41e98ff1783dcad04c732b" namespace=k8s.io protocol=ttrpc version=3 Jan 22 00:42:52.418009 containerd[1953]: time="2026-01-22T00:42:52.417970287Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-4rjzm,Uid:ffc8de62-f696-4cfe-ab23-12f34741b8d0,Namespace:calico-system,Attempt:0,} returns sandbox id \"ee69b0fb931eeb91fe4c657ed1d019d7c15bb05163b8da1d9c26892772a7191f\"" Jan 22 00:42:52.420504 containerd[1953]: time="2026-01-22T00:42:52.420469275Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 22 00:42:52.431000 audit[4809]: NETFILTER_CFG table=filter:119 family=2 entries=22 op=nft_register_rule pid=4809 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 00:42:52.431000 audit[4809]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffc6a2b96c0 a2=0 a3=7ffc6a2b96ac items=0 ppid=3540 pid=4809 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:52.431000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 00:42:52.435183 systemd[1]: Started cri-containerd-c1faebbd03d9a493ce36ba2be88b09ac0ef188ca531236ac0d91dd6f4f78a8c4.scope - libcontainer container c1faebbd03d9a493ce36ba2be88b09ac0ef188ca531236ac0d91dd6f4f78a8c4. Jan 22 00:42:52.434000 audit[4809]: NETFILTER_CFG table=nat:120 family=2 entries=12 op=nft_register_rule pid=4809 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 00:42:52.434000 audit[4809]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc6a2b96c0 a2=0 a3=0 items=0 ppid=3540 pid=4809 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:52.434000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 00:42:52.451000 audit: BPF prog-id=194 op=LOAD Jan 22 00:42:52.452000 audit: BPF prog-id=195 op=LOAD Jan 22 00:42:52.452000 audit[4799]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8238 a2=98 a3=0 items=0 ppid=4782 pid=4799 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:52.452000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6331666165626264303364396134393363653336626132626538386230 Jan 22 00:42:52.453000 audit: BPF prog-id=195 op=UNLOAD Jan 22 00:42:52.453000 audit[4799]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4782 pid=4799 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:52.453000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6331666165626264303364396134393363653336626132626538386230 Jan 22 00:42:52.453000 audit: BPF prog-id=196 op=LOAD Jan 22 00:42:52.453000 audit[4799]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=4782 pid=4799 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:52.453000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6331666165626264303364396134393363653336626132626538386230 Jan 22 00:42:52.453000 audit: BPF prog-id=197 op=LOAD Jan 22 00:42:52.453000 audit[4799]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=4782 pid=4799 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:52.453000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6331666165626264303364396134393363653336626132626538386230 Jan 22 00:42:52.453000 audit: BPF prog-id=197 op=UNLOAD Jan 22 00:42:52.453000 audit[4799]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4782 pid=4799 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:52.453000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6331666165626264303364396134393363653336626132626538386230 Jan 22 00:42:52.453000 audit: BPF prog-id=196 op=UNLOAD Jan 22 00:42:52.453000 audit[4799]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4782 pid=4799 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:52.453000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6331666165626264303364396134393363653336626132626538386230 Jan 22 00:42:52.454000 audit: BPF prog-id=198 op=LOAD Jan 22 00:42:52.454000 audit[4799]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 ppid=4782 pid=4799 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:52.454000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6331666165626264303364396134393363653336626132626538386230 Jan 22 00:42:52.508209 containerd[1953]: time="2026-01-22T00:42:52.508167214Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6bf9469c9d-75c6s,Uid:0db24862-6144-45b1-8f39-39a11a3c80dd,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"c1faebbd03d9a493ce36ba2be88b09ac0ef188ca531236ac0d91dd6f4f78a8c4\"" Jan 22 00:42:52.674428 containerd[1953]: time="2026-01-22T00:42:52.674381488Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 22 00:42:52.676673 containerd[1953]: time="2026-01-22T00:42:52.676614180Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 22 00:42:52.677525 containerd[1953]: time="2026-01-22T00:42:52.676693612Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 22 00:42:52.677525 containerd[1953]: time="2026-01-22T00:42:52.677503244Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 22 00:42:52.677603 kubelet[3437]: E0122 00:42:52.676864 3437 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 22 00:42:52.677603 kubelet[3437]: E0122 00:42:52.676958 3437 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 22 00:42:52.677603 kubelet[3437]: E0122 00:42:52.677233 3437 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-frzdg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-4rjzm_calico-system(ffc8de62-f696-4cfe-ab23-12f34741b8d0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 22 00:42:52.679141 kubelet[3437]: E0122 00:42:52.679087 3437 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-4rjzm" podUID="ffc8de62-f696-4cfe-ab23-12f34741b8d0" Jan 22 00:42:52.962864 containerd[1953]: time="2026-01-22T00:42:52.962584406Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 
22 00:42:52.965048 containerd[1953]: time="2026-01-22T00:42:52.964897261Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 22 00:42:52.965048 containerd[1953]: time="2026-01-22T00:42:52.964925124Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 22 00:42:52.965383 kubelet[3437]: E0122 00:42:52.965324 3437 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 22 00:42:52.965489 kubelet[3437]: E0122 00:42:52.965395 3437 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 22 00:42:52.965612 kubelet[3437]: E0122 00:42:52.965551 3437 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ksv99,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6bf9469c9d-75c6s_calico-apiserver(0db24862-6144-45b1-8f39-39a11a3c80dd): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 22 00:42:52.967357 kubelet[3437]: E0122 00:42:52.967177 3437 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6bf9469c9d-75c6s" podUID="0db24862-6144-45b1-8f39-39a11a3c80dd" Jan 22 00:42:53.028325 containerd[1953]: time="2026-01-22T00:42:53.028229575Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-6nhlr,Uid:0036b825-99df-4406-a66f-e7d1814859a1,Namespace:kube-system,Attempt:0,}" Jan 22 00:42:53.192939 systemd-networkd[1860]: calid25edf507d9: Link UP Jan 22 00:42:53.193548 systemd-networkd[1860]: calid25edf507d9: Gained carrier Jan 22 00:42:53.210480 containerd[1953]: 2026-01-22 00:42:53.090 [INFO][4848] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 22 00:42:53.210480 containerd[1953]: 2026-01-22 00:42:53.104 [INFO][4848] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--26--54-k8s-coredns--668d6bf9bc--6nhlr-eth0 coredns-668d6bf9bc- kube-system 0036b825-99df-4406-a66f-e7d1814859a1 839 0 2026-01-22 00:42:11 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-26-54 coredns-668d6bf9bc-6nhlr eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calid25edf507d9 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="921bd1b14867eb4481e44a62b2d8e099a5227d494da7447fa195b60b4c985a7f" Namespace="kube-system" Pod="coredns-668d6bf9bc-6nhlr" WorkloadEndpoint="ip--172--31--26--54-k8s-coredns--668d6bf9bc--6nhlr-" Jan 22 00:42:53.210480 containerd[1953]: 2026-01-22 00:42:53.104 [INFO][4848] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="921bd1b14867eb4481e44a62b2d8e099a5227d494da7447fa195b60b4c985a7f" Namespace="kube-system" Pod="coredns-668d6bf9bc-6nhlr" WorkloadEndpoint="ip--172--31--26--54-k8s-coredns--668d6bf9bc--6nhlr-eth0" Jan 22 00:42:53.210480 containerd[1953]: 2026-01-22 00:42:53.144 [INFO][4861] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="921bd1b14867eb4481e44a62b2d8e099a5227d494da7447fa195b60b4c985a7f" HandleID="k8s-pod-network.921bd1b14867eb4481e44a62b2d8e099a5227d494da7447fa195b60b4c985a7f" Workload="ip--172--31--26--54-k8s-coredns--668d6bf9bc--6nhlr-eth0" Jan 22 00:42:53.212562 containerd[1953]: 2026-01-22 00:42:53.145 [INFO][4861] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="921bd1b14867eb4481e44a62b2d8e099a5227d494da7447fa195b60b4c985a7f" HandleID="k8s-pod-network.921bd1b14867eb4481e44a62b2d8e099a5227d494da7447fa195b60b4c985a7f" Workload="ip--172--31--26--54-k8s-coredns--668d6bf9bc--6nhlr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f200), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-26-54", "pod":"coredns-668d6bf9bc-6nhlr", "timestamp":"2026-01-22 00:42:53.14492495 +0000 UTC"}, Hostname:"ip-172-31-26-54", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, 
MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 22 00:42:53.212562 containerd[1953]: 2026-01-22 00:42:53.145 [INFO][4861] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 22 00:42:53.212562 containerd[1953]: 2026-01-22 00:42:53.145 [INFO][4861] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 22 00:42:53.212562 containerd[1953]: 2026-01-22 00:42:53.145 [INFO][4861] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-26-54' Jan 22 00:42:53.212562 containerd[1953]: 2026-01-22 00:42:53.153 [INFO][4861] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.921bd1b14867eb4481e44a62b2d8e099a5227d494da7447fa195b60b4c985a7f" host="ip-172-31-26-54" Jan 22 00:42:53.212562 containerd[1953]: 2026-01-22 00:42:53.158 [INFO][4861] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-26-54" Jan 22 00:42:53.212562 containerd[1953]: 2026-01-22 00:42:53.163 [INFO][4861] ipam/ipam.go 511: Trying affinity for 192.168.74.0/26 host="ip-172-31-26-54" Jan 22 00:42:53.212562 containerd[1953]: 2026-01-22 00:42:53.165 [INFO][4861] ipam/ipam.go 158: Attempting to load block cidr=192.168.74.0/26 host="ip-172-31-26-54" Jan 22 00:42:53.212562 containerd[1953]: 2026-01-22 00:42:53.168 [INFO][4861] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.74.0/26 host="ip-172-31-26-54" Jan 22 00:42:53.212562 containerd[1953]: 2026-01-22 00:42:53.168 [INFO][4861] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.74.0/26 handle="k8s-pod-network.921bd1b14867eb4481e44a62b2d8e099a5227d494da7447fa195b60b4c985a7f" host="ip-172-31-26-54" Jan 22 00:42:53.212828 containerd[1953]: 2026-01-22 00:42:53.170 [INFO][4861] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.921bd1b14867eb4481e44a62b2d8e099a5227d494da7447fa195b60b4c985a7f Jan 22 00:42:53.212828 containerd[1953]: 2026-01-22 00:42:53.175 [INFO][4861] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.74.0/26 handle="k8s-pod-network.921bd1b14867eb4481e44a62b2d8e099a5227d494da7447fa195b60b4c985a7f" host="ip-172-31-26-54" Jan 22 00:42:53.212828 containerd[1953]: 2026-01-22 00:42:53.187 [INFO][4861] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.74.4/26] block=192.168.74.0/26 handle="k8s-pod-network.921bd1b14867eb4481e44a62b2d8e099a5227d494da7447fa195b60b4c985a7f" host="ip-172-31-26-54" Jan 22 00:42:53.212828 containerd[1953]: 2026-01-22 00:42:53.187 [INFO][4861] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.74.4/26] handle="k8s-pod-network.921bd1b14867eb4481e44a62b2d8e099a5227d494da7447fa195b60b4c985a7f" host="ip-172-31-26-54" Jan 22 00:42:53.212828 containerd[1953]: 2026-01-22 00:42:53.187 [INFO][4861] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 22 00:42:53.212828 containerd[1953]: 2026-01-22 00:42:53.187 [INFO][4861] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.74.4/26] IPv6=[] ContainerID="921bd1b14867eb4481e44a62b2d8e099a5227d494da7447fa195b60b4c985a7f" HandleID="k8s-pod-network.921bd1b14867eb4481e44a62b2d8e099a5227d494da7447fa195b60b4c985a7f" Workload="ip--172--31--26--54-k8s-coredns--668d6bf9bc--6nhlr-eth0" Jan 22 00:42:53.213043 containerd[1953]: 2026-01-22 00:42:53.190 [INFO][4848] cni-plugin/k8s.go 418: Populated endpoint ContainerID="921bd1b14867eb4481e44a62b2d8e099a5227d494da7447fa195b60b4c985a7f" Namespace="kube-system" Pod="coredns-668d6bf9bc-6nhlr" WorkloadEndpoint="ip--172--31--26--54-k8s-coredns--668d6bf9bc--6nhlr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--54-k8s-coredns--668d6bf9bc--6nhlr-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"0036b825-99df-4406-a66f-e7d1814859a1", ResourceVersion:"839", Generation:0, CreationTimestamp:time.Date(2026, time.January, 22, 0, 42, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-54", ContainerID:"", Pod:"coredns-668d6bf9bc-6nhlr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.74.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid25edf507d9", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 22 00:42:53.213043 containerd[1953]: 2026-01-22 00:42:53.190 [INFO][4848] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.74.4/32] ContainerID="921bd1b14867eb4481e44a62b2d8e099a5227d494da7447fa195b60b4c985a7f" Namespace="kube-system" Pod="coredns-668d6bf9bc-6nhlr" WorkloadEndpoint="ip--172--31--26--54-k8s-coredns--668d6bf9bc--6nhlr-eth0" Jan 22 00:42:53.213043 containerd[1953]: 2026-01-22 00:42:53.190 [INFO][4848] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid25edf507d9 ContainerID="921bd1b14867eb4481e44a62b2d8e099a5227d494da7447fa195b60b4c985a7f" Namespace="kube-system" Pod="coredns-668d6bf9bc-6nhlr" WorkloadEndpoint="ip--172--31--26--54-k8s-coredns--668d6bf9bc--6nhlr-eth0" Jan 22 00:42:53.213043 containerd[1953]: 2026-01-22 00:42:53.192 [INFO][4848] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="921bd1b14867eb4481e44a62b2d8e099a5227d494da7447fa195b60b4c985a7f" Namespace="kube-system" Pod="coredns-668d6bf9bc-6nhlr" 
WorkloadEndpoint="ip--172--31--26--54-k8s-coredns--668d6bf9bc--6nhlr-eth0" Jan 22 00:42:53.213043 containerd[1953]: 2026-01-22 00:42:53.193 [INFO][4848] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="921bd1b14867eb4481e44a62b2d8e099a5227d494da7447fa195b60b4c985a7f" Namespace="kube-system" Pod="coredns-668d6bf9bc-6nhlr" WorkloadEndpoint="ip--172--31--26--54-k8s-coredns--668d6bf9bc--6nhlr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--54-k8s-coredns--668d6bf9bc--6nhlr-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"0036b825-99df-4406-a66f-e7d1814859a1", ResourceVersion:"839", Generation:0, CreationTimestamp:time.Date(2026, time.January, 22, 0, 42, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-54", ContainerID:"921bd1b14867eb4481e44a62b2d8e099a5227d494da7447fa195b60b4c985a7f", Pod:"coredns-668d6bf9bc-6nhlr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.74.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid25edf507d9", MAC:"5e:c4:34:f4:14:92", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 22 00:42:53.213043 containerd[1953]: 2026-01-22 00:42:53.206 [INFO][4848] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="921bd1b14867eb4481e44a62b2d8e099a5227d494da7447fa195b60b4c985a7f" Namespace="kube-system" Pod="coredns-668d6bf9bc-6nhlr" WorkloadEndpoint="ip--172--31--26--54-k8s-coredns--668d6bf9bc--6nhlr-eth0" Jan 22 00:42:53.251158 containerd[1953]: time="2026-01-22T00:42:53.251061621Z" level=info msg="connecting to shim 921bd1b14867eb4481e44a62b2d8e099a5227d494da7447fa195b60b4c985a7f" address="unix:///run/containerd/s/5225bcb2e644881b2cc8d2edf20b6533e5548903e23df8eb4a5e72a0ed97ce6c" namespace=k8s.io protocol=ttrpc version=3 Jan 22 00:42:53.289160 systemd[1]: Started cri-containerd-921bd1b14867eb4481e44a62b2d8e099a5227d494da7447fa195b60b4c985a7f.scope - libcontainer container 921bd1b14867eb4481e44a62b2d8e099a5227d494da7447fa195b60b4c985a7f. 
Jan 22 00:42:53.308178 kernel: kauditd_printk_skb: 77 callbacks suppressed Jan 22 00:42:53.308301 kernel: audit: type=1334 audit(1769042573.305:604): prog-id=199 op=LOAD Jan 22 00:42:53.305000 audit: BPF prog-id=199 op=LOAD Jan 22 00:42:53.309835 kernel: audit: type=1334 audit(1769042573.305:605): prog-id=200 op=LOAD Jan 22 00:42:53.305000 audit: BPF prog-id=200 op=LOAD Jan 22 00:42:53.305000 audit[4892]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000228238 a2=98 a3=0 items=0 ppid=4880 pid=4892 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:53.316271 kernel: audit: type=1300 audit(1769042573.305:605): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000228238 a2=98 a3=0 items=0 ppid=4880 pid=4892 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:53.305000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3932316264316231343836376562343438316534346136326232643865 Jan 22 00:42:53.322938 kernel: audit: type=1327 audit(1769042573.305:605): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3932316264316231343836376562343438316534346136326232643865 Jan 22 00:42:53.305000 audit: BPF prog-id=200 op=UNLOAD Jan 22 00:42:53.325045 kernel: audit: type=1334 audit(1769042573.305:606): prog-id=200 op=UNLOAD Jan 22 00:42:53.305000 audit[4892]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4880 pid=4892 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:53.331896 kernel: audit: type=1300 audit(1769042573.305:606): arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4880 pid=4892 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:53.305000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3932316264316231343836376562343438316534346136326232643865 Jan 22 00:42:53.340235 kernel: audit: type=1327 audit(1769042573.305:606): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3932316264316231343836376562343438316534346136326232643865 Jan 22 00:42:53.340407 kernel: audit: type=1334 audit(1769042573.305:607): prog-id=201 op=LOAD Jan 22 00:42:53.305000 audit: BPF prog-id=201 op=LOAD Jan 22 00:42:53.347064 kernel: audit: type=1300 audit(1769042573.305:607): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000228488 a2=98 a3=0 items=0 ppid=4880 pid=4892 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:53.305000 audit[4892]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000228488 a2=98 a3=0 items=0 ppid=4880 pid=4892 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:53.354475 kernel: audit: type=1327 audit(1769042573.305:607): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3932316264316231343836376562343438316534346136326232643865 Jan 22 00:42:53.305000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3932316264316231343836376562343438316534346136326232643865 Jan 22 00:42:53.305000 audit: BPF prog-id=202 op=LOAD Jan 22 00:42:53.305000 audit[4892]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000228218 a2=98 a3=0 items=0 ppid=4880 pid=4892 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:53.305000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3932316264316231343836376562343438316534346136326232643865 Jan 22 00:42:53.305000 audit: BPF prog-id=202 op=UNLOAD Jan 22 00:42:53.305000 audit[4892]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4880 pid=4892 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:53.305000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3932316264316231343836376562343438316534346136326232643865 Jan 22 00:42:53.305000 audit: BPF prog-id=201 op=UNLOAD Jan 22 00:42:53.305000 audit[4892]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4880 pid=4892 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:53.305000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3932316264316231343836376562343438316534346136326232643865 Jan 22 00:42:53.305000 audit: BPF prog-id=203 op=LOAD Jan 22 00:42:53.305000 audit[4892]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0002286e8 a2=98 a3=0 items=0 ppid=4880 pid=4892 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:53.305000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3932316264316231343836376562343438316534346136326232643865 Jan 22 00:42:53.361968 kubelet[3437]: E0122 00:42:53.361917 3437 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6bf9469c9d-75c6s" podUID="0db24862-6144-45b1-8f39-39a11a3c80dd" Jan 22 00:42:53.370937 kubelet[3437]: E0122 00:42:53.368787 3437 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-4rjzm" podUID="ffc8de62-f696-4cfe-ab23-12f34741b8d0" Jan 22 00:42:53.422917 containerd[1953]: time="2026-01-22T00:42:53.422789857Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-6nhlr,Uid:0036b825-99df-4406-a66f-e7d1814859a1,Namespace:kube-system,Attempt:0,} returns sandbox id \"921bd1b14867eb4481e44a62b2d8e099a5227d494da7447fa195b60b4c985a7f\"" Jan 22 00:42:53.436000 audit[4918]: NETFILTER_CFG table=filter:121 family=2 entries=22 op=nft_register_rule pid=4918 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 00:42:53.436000 audit[4918]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffc36d7d530 a2=0 a3=7ffc36d7d51c items=0 ppid=3540 pid=4918 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:53.436000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 00:42:53.438982 containerd[1953]: time="2026-01-22T00:42:53.438940713Z" level=info msg="CreateContainer within sandbox \"921bd1b14867eb4481e44a62b2d8e099a5227d494da7447fa195b60b4c985a7f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 22 00:42:53.442000 audit[4918]: NETFILTER_CFG table=nat:122 family=2 entries=12 op=nft_register_rule pid=4918 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 00:42:53.442000 audit[4918]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc36d7d530 a2=0 a3=0 items=0 ppid=3540 pid=4918 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:53.442000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 00:42:53.461000 audit[4920]: NETFILTER_CFG table=filter:123 family=2 entries=22 op=nft_register_rule pid=4920 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 
00:42:53.461000 audit[4920]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffc4c2a0620 a2=0 a3=7ffc4c2a060c items=0 ppid=3540 pid=4920 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:53.461000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 00:42:53.477000 audit[4920]: NETFILTER_CFG table=nat:124 family=2 entries=12 op=nft_register_rule pid=4920 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 00:42:53.477000 audit[4920]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc4c2a0620 a2=0 a3=0 items=0 ppid=3540 pid=4920 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:53.477000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 00:42:53.481291 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2128209585.mount: Deactivated successfully. Jan 22 00:42:53.485327 containerd[1953]: time="2026-01-22T00:42:53.481021482Z" level=info msg="Container f110efba5615a5c79c998470934e014fd8005c2bbb1f87ec5074bc2c3187cdfc: CDI devices from CRI Config.CDIDevices: []" Jan 22 00:42:53.496174 containerd[1953]: time="2026-01-22T00:42:53.495753456Z" level=info msg="CreateContainer within sandbox \"921bd1b14867eb4481e44a62b2d8e099a5227d494da7447fa195b60b4c985a7f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f110efba5615a5c79c998470934e014fd8005c2bbb1f87ec5074bc2c3187cdfc\"" Jan 22 00:42:53.498145 containerd[1953]: time="2026-01-22T00:42:53.498111232Z" level=info msg="StartContainer for \"f110efba5615a5c79c998470934e014fd8005c2bbb1f87ec5074bc2c3187cdfc\"" Jan 22 00:42:53.500025 containerd[1953]: time="2026-01-22T00:42:53.499988934Z" level=info msg="connecting to shim f110efba5615a5c79c998470934e014fd8005c2bbb1f87ec5074bc2c3187cdfc" address="unix:///run/containerd/s/5225bcb2e644881b2cc8d2edf20b6533e5548903e23df8eb4a5e72a0ed97ce6c" protocol=ttrpc version=3 Jan 22 00:42:53.529333 systemd[1]: Started cri-containerd-f110efba5615a5c79c998470934e014fd8005c2bbb1f87ec5074bc2c3187cdfc.scope - libcontainer container f110efba5615a5c79c998470934e014fd8005c2bbb1f87ec5074bc2c3187cdfc. 
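The PROCTITLE fields in the audit records above are the process command line, hex-encoded with NUL bytes between arguments (auditd hex-encodes the value because the raw cmdline contains NUL separators). Decoding one of the iptables-restore entries above is a one-liner; a minimal sketch, with the hex value copied verbatim from the log:

```python
# PROCTITLE from one of the NETFILTER_CFG audit events above: hex of the
# NUL-separated argv of the iptables-restore process.
proctitle = (
    "69707461626C65732D726573746F7265002D770035002D5700313030303030"
    "002D2D6E6F666C757368002D2D636F756E74657273"
)

argv = bytes.fromhex(proctitle).split(b"\x00")
print(" ".join(a.decode() for a in argv))
# -> iptables-restore -w 5 -W 100000 --noflush --counters
```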
Jan 22 00:42:53.558000 audit: BPF prog-id=204 op=LOAD Jan 22 00:42:53.559000 audit: BPF prog-id=205 op=LOAD Jan 22 00:42:53.559000 audit[4921]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8238 a2=98 a3=0 items=0 ppid=4880 pid=4921 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:53.559000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6631313065666261353631356135633739633939383437303933346530 Jan 22 00:42:53.559000 audit: BPF prog-id=205 op=UNLOAD Jan 22 00:42:53.559000 audit[4921]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4880 pid=4921 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:53.559000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6631313065666261353631356135633739633939383437303933346530 Jan 22 00:42:53.559000 audit: BPF prog-id=206 op=LOAD Jan 22 00:42:53.559000 audit[4921]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=4880 pid=4921 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:53.559000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6631313065666261353631356135633739633939383437303933346530 Jan 22 00:42:53.559000 audit: BPF prog-id=207 op=LOAD Jan 22 00:42:53.559000 audit[4921]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=4880 pid=4921 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:53.559000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6631313065666261353631356135633739633939383437303933346530 Jan 22 00:42:53.559000 audit: BPF prog-id=207 op=UNLOAD Jan 22 00:42:53.559000 audit[4921]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4880 pid=4921 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:53.559000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6631313065666261353631356135633739633939383437303933346530 Jan 22 00:42:53.559000 audit: BPF prog-id=206 op=UNLOAD Jan 22 00:42:53.559000 audit[4921]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4880 pid=4921 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:53.559000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6631313065666261353631356135633739633939383437303933346530 Jan 22 00:42:53.559000 audit: BPF prog-id=208 op=LOAD Jan 22 00:42:53.559000 audit[4921]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 ppid=4880 pid=4921 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:53.559000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6631313065666261353631356135633739633939383437303933346530 Jan 22 00:42:53.589369 containerd[1953]: time="2026-01-22T00:42:53.589212395Z" level=info msg="StartContainer for \"f110efba5615a5c79c998470934e014fd8005c2bbb1f87ec5074bc2c3187cdfc\" returns successfully" Jan 22 00:42:54.034600 containerd[1953]: time="2026-01-22T00:42:54.033896181Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6bf9469c9d-jt79q,Uid:4921dab1-a0fc-4a4a-9ec2-f3f09160935a,Namespace:calico-apiserver,Attempt:0,}" Jan 22 00:42:54.034600 containerd[1953]: time="2026-01-22T00:42:54.034215856Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qvwlf,Uid:9a122a32-e7c8-4162-bccb-4b71d5c37d97,Namespace:calico-system,Attempt:0,}" Jan 22 00:42:54.036712 containerd[1953]: time="2026-01-22T00:42:54.036391094Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-8sggz,Uid:10b09332-a889-486e-9821-a11a8220ea2e,Namespace:kube-system,Attempt:0,}" Jan 22 00:42:54.068057 kubelet[3437]: I0122 00:42:54.067968 3437 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 22 00:42:54.214854 systemd-networkd[1860]: calia72685caf17: Gained IPv6LL Jan 22 00:42:54.216188 systemd-networkd[1860]: califf8e7b8b89e: Gained IPv6LL Jan 22 00:42:54.394515 kubelet[3437]: E0122 00:42:54.394031 3437 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6bf9469c9d-75c6s" podUID="0db24862-6144-45b1-8f39-39a11a3c80dd" Jan 22 00:42:54.394515 kubelet[3437]: E0122 00:42:54.394464 3437 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" 
pod="calico-system/goldmane-666569f655-4rjzm" podUID="ffc8de62-f696-4cfe-ab23-12f34741b8d0" Jan 22 00:42:54.458547 systemd-networkd[1860]: calib049bf96d60: Link UP Jan 22 00:42:54.458756 systemd-networkd[1860]: calib049bf96d60: Gained carrier Jan 22 00:42:54.475985 kubelet[3437]: I0122 00:42:54.422759 3437 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-6nhlr" podStartSLOduration=43.422735014 podStartE2EDuration="43.422735014s" podCreationTimestamp="2026-01-22 00:42:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 00:42:54.421446783 +0000 UTC m=+48.653412930" watchObservedRunningTime="2026-01-22 00:42:54.422735014 +0000 UTC m=+48.654701164" Jan 22 00:42:54.514831 containerd[1953]: 2026-01-22 00:42:54.146 [INFO][4980] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 22 00:42:54.514831 containerd[1953]: 2026-01-22 00:42:54.183 [INFO][4980] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--26--54-k8s-coredns--668d6bf9bc--8sggz-eth0 coredns-668d6bf9bc- kube-system 10b09332-a889-486e-9821-a11a8220ea2e 832 0 2026-01-22 00:42:11 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-26-54 coredns-668d6bf9bc-8sggz eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calib049bf96d60 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="af728803ad585aaa4a9b7dba48da9c950cc34983103581a42a7df68602bd8117" Namespace="kube-system" Pod="coredns-668d6bf9bc-8sggz" WorkloadEndpoint="ip--172--31--26--54-k8s-coredns--668d6bf9bc--8sggz-" Jan 22 00:42:54.514831 containerd[1953]: 2026-01-22 00:42:54.183 [INFO][4980] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="af728803ad585aaa4a9b7dba48da9c950cc34983103581a42a7df68602bd8117" Namespace="kube-system" Pod="coredns-668d6bf9bc-8sggz" WorkloadEndpoint="ip--172--31--26--54-k8s-coredns--668d6bf9bc--8sggz-eth0" Jan 22 00:42:54.514831 containerd[1953]: 2026-01-22 00:42:54.306 [INFO][5003] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="af728803ad585aaa4a9b7dba48da9c950cc34983103581a42a7df68602bd8117" HandleID="k8s-pod-network.af728803ad585aaa4a9b7dba48da9c950cc34983103581a42a7df68602bd8117" Workload="ip--172--31--26--54-k8s-coredns--668d6bf9bc--8sggz-eth0" Jan 22 00:42:54.514831 containerd[1953]: 2026-01-22 00:42:54.308 [INFO][5003] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="af728803ad585aaa4a9b7dba48da9c950cc34983103581a42a7df68602bd8117" HandleID="k8s-pod-network.af728803ad585aaa4a9b7dba48da9c950cc34983103581a42a7df68602bd8117" Workload="ip--172--31--26--54-k8s-coredns--668d6bf9bc--8sggz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00039f1e0), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-26-54", "pod":"coredns-668d6bf9bc-8sggz", "timestamp":"2026-01-22 00:42:54.306546892 +0000 UTC"}, Hostname:"ip-172-31-26-54", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 22 00:42:54.514831 containerd[1953]: 2026-01-22 00:42:54.308 [INFO][5003] ipam/ipam_plugin.go 377: About to acquire 
host-wide IPAM lock. Jan 22 00:42:54.514831 containerd[1953]: 2026-01-22 00:42:54.309 [INFO][5003] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 22 00:42:54.514831 containerd[1953]: 2026-01-22 00:42:54.309 [INFO][5003] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-26-54' Jan 22 00:42:54.514831 containerd[1953]: 2026-01-22 00:42:54.332 [INFO][5003] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.af728803ad585aaa4a9b7dba48da9c950cc34983103581a42a7df68602bd8117" host="ip-172-31-26-54" Jan 22 00:42:54.514831 containerd[1953]: 2026-01-22 00:42:54.343 [INFO][5003] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-26-54" Jan 22 00:42:54.514831 containerd[1953]: 2026-01-22 00:42:54.358 [INFO][5003] ipam/ipam.go 511: Trying affinity for 192.168.74.0/26 host="ip-172-31-26-54" Jan 22 00:42:54.514831 containerd[1953]: 2026-01-22 00:42:54.364 [INFO][5003] ipam/ipam.go 158: Attempting to load block cidr=192.168.74.0/26 host="ip-172-31-26-54" Jan 22 00:42:54.514831 containerd[1953]: 2026-01-22 00:42:54.377 [INFO][5003] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.74.0/26 host="ip-172-31-26-54" Jan 22 00:42:54.514831 containerd[1953]: 2026-01-22 00:42:54.378 [INFO][5003] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.74.0/26 handle="k8s-pod-network.af728803ad585aaa4a9b7dba48da9c950cc34983103581a42a7df68602bd8117" host="ip-172-31-26-54" Jan 22 00:42:54.514831 containerd[1953]: 2026-01-22 00:42:54.385 [INFO][5003] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.af728803ad585aaa4a9b7dba48da9c950cc34983103581a42a7df68602bd8117 Jan 22 00:42:54.514831 containerd[1953]: 2026-01-22 00:42:54.399 [INFO][5003] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.74.0/26 handle="k8s-pod-network.af728803ad585aaa4a9b7dba48da9c950cc34983103581a42a7df68602bd8117" host="ip-172-31-26-54" Jan 22 00:42:54.514831 containerd[1953]: 2026-01-22 00:42:54.420 [INFO][5003] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.74.5/26] block=192.168.74.0/26 handle="k8s-pod-network.af728803ad585aaa4a9b7dba48da9c950cc34983103581a42a7df68602bd8117" host="ip-172-31-26-54" Jan 22 00:42:54.514831 containerd[1953]: 2026-01-22 00:42:54.421 [INFO][5003] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.74.5/26] handle="k8s-pod-network.af728803ad585aaa4a9b7dba48da9c950cc34983103581a42a7df68602bd8117" host="ip-172-31-26-54" Jan 22 00:42:54.514831 containerd[1953]: 2026-01-22 00:42:54.421 [INFO][5003] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
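The "Observed pod startup duration" entry above is internally consistent: podStartSLOduration is simply watchObservedRunningTime minus podCreationTimestamp. A quick check, with the timestamps copied from that entry and truncated to microseconds:

```python
from datetime import datetime, timezone

# Timestamps from the "Observed pod startup duration" entry for coredns-668d6bf9bc-6nhlr.
created  = datetime(2026, 1, 22, 0, 42, 11, tzinfo=timezone.utc)           # podCreationTimestamp
observed = datetime(2026, 1, 22, 0, 42, 54, 422735, tzinfo=timezone.utc)   # watchObservedRunningTime

print((observed - created).total_seconds())   # 43.422735, matching podStartSLOduration=43.422735014s
```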
Jan 22 00:42:54.514831 containerd[1953]: 2026-01-22 00:42:54.423 [INFO][5003] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.74.5/26] IPv6=[] ContainerID="af728803ad585aaa4a9b7dba48da9c950cc34983103581a42a7df68602bd8117" HandleID="k8s-pod-network.af728803ad585aaa4a9b7dba48da9c950cc34983103581a42a7df68602bd8117" Workload="ip--172--31--26--54-k8s-coredns--668d6bf9bc--8sggz-eth0" Jan 22 00:42:54.517633 containerd[1953]: 2026-01-22 00:42:54.452 [INFO][4980] cni-plugin/k8s.go 418: Populated endpoint ContainerID="af728803ad585aaa4a9b7dba48da9c950cc34983103581a42a7df68602bd8117" Namespace="kube-system" Pod="coredns-668d6bf9bc-8sggz" WorkloadEndpoint="ip--172--31--26--54-k8s-coredns--668d6bf9bc--8sggz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--54-k8s-coredns--668d6bf9bc--8sggz-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"10b09332-a889-486e-9821-a11a8220ea2e", ResourceVersion:"832", Generation:0, CreationTimestamp:time.Date(2026, time.January, 22, 0, 42, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-54", ContainerID:"", Pod:"coredns-668d6bf9bc-8sggz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.74.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib049bf96d60", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 22 00:42:54.517633 containerd[1953]: 2026-01-22 00:42:54.453 [INFO][4980] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.74.5/32] ContainerID="af728803ad585aaa4a9b7dba48da9c950cc34983103581a42a7df68602bd8117" Namespace="kube-system" Pod="coredns-668d6bf9bc-8sggz" WorkloadEndpoint="ip--172--31--26--54-k8s-coredns--668d6bf9bc--8sggz-eth0" Jan 22 00:42:54.517633 containerd[1953]: 2026-01-22 00:42:54.453 [INFO][4980] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib049bf96d60 ContainerID="af728803ad585aaa4a9b7dba48da9c950cc34983103581a42a7df68602bd8117" Namespace="kube-system" Pod="coredns-668d6bf9bc-8sggz" WorkloadEndpoint="ip--172--31--26--54-k8s-coredns--668d6bf9bc--8sggz-eth0" Jan 22 00:42:54.517633 containerd[1953]: 2026-01-22 00:42:54.456 [INFO][4980] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="af728803ad585aaa4a9b7dba48da9c950cc34983103581a42a7df68602bd8117" Namespace="kube-system" Pod="coredns-668d6bf9bc-8sggz" 
WorkloadEndpoint="ip--172--31--26--54-k8s-coredns--668d6bf9bc--8sggz-eth0" Jan 22 00:42:54.517633 containerd[1953]: 2026-01-22 00:42:54.457 [INFO][4980] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="af728803ad585aaa4a9b7dba48da9c950cc34983103581a42a7df68602bd8117" Namespace="kube-system" Pod="coredns-668d6bf9bc-8sggz" WorkloadEndpoint="ip--172--31--26--54-k8s-coredns--668d6bf9bc--8sggz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--54-k8s-coredns--668d6bf9bc--8sggz-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"10b09332-a889-486e-9821-a11a8220ea2e", ResourceVersion:"832", Generation:0, CreationTimestamp:time.Date(2026, time.January, 22, 0, 42, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-54", ContainerID:"af728803ad585aaa4a9b7dba48da9c950cc34983103581a42a7df68602bd8117", Pod:"coredns-668d6bf9bc-8sggz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.74.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib049bf96d60", MAC:"6e:b3:7c:ce:8e:b9", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 22 00:42:54.517633 containerd[1953]: 2026-01-22 00:42:54.507 [INFO][4980] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="af728803ad585aaa4a9b7dba48da9c950cc34983103581a42a7df68602bd8117" Namespace="kube-system" Pod="coredns-668d6bf9bc-8sggz" WorkloadEndpoint="ip--172--31--26--54-k8s-coredns--668d6bf9bc--8sggz-eth0" Jan 22 00:42:54.584000 audit[5036]: NETFILTER_CFG table=filter:125 family=2 entries=21 op=nft_register_rule pid=5036 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 00:42:54.584000 audit[5036]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffcf6850160 a2=0 a3=7ffcf685014c items=0 ppid=3540 pid=5036 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:54.584000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 00:42:54.597341 containerd[1953]: time="2026-01-22T00:42:54.597287138Z" level=info msg="connecting to shim af728803ad585aaa4a9b7dba48da9c950cc34983103581a42a7df68602bd8117" 
address="unix:///run/containerd/s/5271ebb5162776d65b43427ebe6aaf54ca29a4341d553a5db4b384301ededeb2" namespace=k8s.io protocol=ttrpc version=3 Jan 22 00:42:54.636000 audit[5036]: NETFILTER_CFG table=nat:126 family=2 entries=19 op=nft_register_chain pid=5036 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 00:42:54.636000 audit[5036]: SYSCALL arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffcf6850160 a2=0 a3=7ffcf685014c items=0 ppid=3540 pid=5036 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:54.636000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 00:42:54.702755 systemd-networkd[1860]: calia0e8c942a5e: Link UP Jan 22 00:42:54.710315 systemd-networkd[1860]: calia0e8c942a5e: Gained carrier Jan 22 00:42:54.716974 systemd[1]: Started cri-containerd-af728803ad585aaa4a9b7dba48da9c950cc34983103581a42a7df68602bd8117.scope - libcontainer container af728803ad585aaa4a9b7dba48da9c950cc34983103581a42a7df68602bd8117. Jan 22 00:42:54.752405 containerd[1953]: 2026-01-22 00:42:54.260 [INFO][4978] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 22 00:42:54.752405 containerd[1953]: 2026-01-22 00:42:54.295 [INFO][4978] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--26--54-k8s-csi--node--driver--qvwlf-eth0 csi-node-driver- calico-system 9a122a32-e7c8-4162-bccb-4b71d5c37d97 725 0 2026-01-22 00:42:27 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-26-54 csi-node-driver-qvwlf eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calia0e8c942a5e [] [] }} ContainerID="8e36fcc3919ed937890eefdec6e14407c69946b9a4ad7ad6e7e58af2f62c3d29" Namespace="calico-system" Pod="csi-node-driver-qvwlf" WorkloadEndpoint="ip--172--31--26--54-k8s-csi--node--driver--qvwlf-" Jan 22 00:42:54.752405 containerd[1953]: 2026-01-22 00:42:54.295 [INFO][4978] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8e36fcc3919ed937890eefdec6e14407c69946b9a4ad7ad6e7e58af2f62c3d29" Namespace="calico-system" Pod="csi-node-driver-qvwlf" WorkloadEndpoint="ip--172--31--26--54-k8s-csi--node--driver--qvwlf-eth0" Jan 22 00:42:54.752405 containerd[1953]: 2026-01-22 00:42:54.414 [INFO][5015] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8e36fcc3919ed937890eefdec6e14407c69946b9a4ad7ad6e7e58af2f62c3d29" HandleID="k8s-pod-network.8e36fcc3919ed937890eefdec6e14407c69946b9a4ad7ad6e7e58af2f62c3d29" Workload="ip--172--31--26--54-k8s-csi--node--driver--qvwlf-eth0" Jan 22 00:42:54.752405 containerd[1953]: 2026-01-22 00:42:54.417 [INFO][5015] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="8e36fcc3919ed937890eefdec6e14407c69946b9a4ad7ad6e7e58af2f62c3d29" HandleID="k8s-pod-network.8e36fcc3919ed937890eefdec6e14407c69946b9a4ad7ad6e7e58af2f62c3d29" Workload="ip--172--31--26--54-k8s-csi--node--driver--qvwlf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000322ab0), Attrs:map[string]string{"namespace":"calico-system", 
"node":"ip-172-31-26-54", "pod":"csi-node-driver-qvwlf", "timestamp":"2026-01-22 00:42:54.414641265 +0000 UTC"}, Hostname:"ip-172-31-26-54", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 22 00:42:54.752405 containerd[1953]: 2026-01-22 00:42:54.417 [INFO][5015] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 22 00:42:54.752405 containerd[1953]: 2026-01-22 00:42:54.423 [INFO][5015] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 22 00:42:54.752405 containerd[1953]: 2026-01-22 00:42:54.425 [INFO][5015] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-26-54' Jan 22 00:42:54.752405 containerd[1953]: 2026-01-22 00:42:54.494 [INFO][5015] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8e36fcc3919ed937890eefdec6e14407c69946b9a4ad7ad6e7e58af2f62c3d29" host="ip-172-31-26-54" Jan 22 00:42:54.752405 containerd[1953]: 2026-01-22 00:42:54.512 [INFO][5015] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-26-54" Jan 22 00:42:54.752405 containerd[1953]: 2026-01-22 00:42:54.546 [INFO][5015] ipam/ipam.go 511: Trying affinity for 192.168.74.0/26 host="ip-172-31-26-54" Jan 22 00:42:54.752405 containerd[1953]: 2026-01-22 00:42:54.554 [INFO][5015] ipam/ipam.go 158: Attempting to load block cidr=192.168.74.0/26 host="ip-172-31-26-54" Jan 22 00:42:54.752405 containerd[1953]: 2026-01-22 00:42:54.563 [INFO][5015] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.74.0/26 host="ip-172-31-26-54" Jan 22 00:42:54.752405 containerd[1953]: 2026-01-22 00:42:54.563 [INFO][5015] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.74.0/26 handle="k8s-pod-network.8e36fcc3919ed937890eefdec6e14407c69946b9a4ad7ad6e7e58af2f62c3d29" host="ip-172-31-26-54" Jan 22 00:42:54.752405 containerd[1953]: 2026-01-22 00:42:54.591 [INFO][5015] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.8e36fcc3919ed937890eefdec6e14407c69946b9a4ad7ad6e7e58af2f62c3d29 Jan 22 00:42:54.752405 containerd[1953]: 2026-01-22 00:42:54.606 [INFO][5015] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.74.0/26 handle="k8s-pod-network.8e36fcc3919ed937890eefdec6e14407c69946b9a4ad7ad6e7e58af2f62c3d29" host="ip-172-31-26-54" Jan 22 00:42:54.752405 containerd[1953]: 2026-01-22 00:42:54.632 [INFO][5015] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.74.6/26] block=192.168.74.0/26 handle="k8s-pod-network.8e36fcc3919ed937890eefdec6e14407c69946b9a4ad7ad6e7e58af2f62c3d29" host="ip-172-31-26-54" Jan 22 00:42:54.752405 containerd[1953]: 2026-01-22 00:42:54.632 [INFO][5015] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.74.6/26] handle="k8s-pod-network.8e36fcc3919ed937890eefdec6e14407c69946b9a4ad7ad6e7e58af2f62c3d29" host="ip-172-31-26-54" Jan 22 00:42:54.752405 containerd[1953]: 2026-01-22 00:42:54.632 [INFO][5015] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 22 00:42:54.752405 containerd[1953]: 2026-01-22 00:42:54.633 [INFO][5015] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.74.6/26] IPv6=[] ContainerID="8e36fcc3919ed937890eefdec6e14407c69946b9a4ad7ad6e7e58af2f62c3d29" HandleID="k8s-pod-network.8e36fcc3919ed937890eefdec6e14407c69946b9a4ad7ad6e7e58af2f62c3d29" Workload="ip--172--31--26--54-k8s-csi--node--driver--qvwlf-eth0" Jan 22 00:42:54.754505 containerd[1953]: 2026-01-22 00:42:54.645 [INFO][4978] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8e36fcc3919ed937890eefdec6e14407c69946b9a4ad7ad6e7e58af2f62c3d29" Namespace="calico-system" Pod="csi-node-driver-qvwlf" WorkloadEndpoint="ip--172--31--26--54-k8s-csi--node--driver--qvwlf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--54-k8s-csi--node--driver--qvwlf-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9a122a32-e7c8-4162-bccb-4b71d5c37d97", ResourceVersion:"725", Generation:0, CreationTimestamp:time.Date(2026, time.January, 22, 0, 42, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-54", ContainerID:"", Pod:"csi-node-driver-qvwlf", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.74.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia0e8c942a5e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 22 00:42:54.754505 containerd[1953]: 2026-01-22 00:42:54.645 [INFO][4978] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.74.6/32] ContainerID="8e36fcc3919ed937890eefdec6e14407c69946b9a4ad7ad6e7e58af2f62c3d29" Namespace="calico-system" Pod="csi-node-driver-qvwlf" WorkloadEndpoint="ip--172--31--26--54-k8s-csi--node--driver--qvwlf-eth0" Jan 22 00:42:54.754505 containerd[1953]: 2026-01-22 00:42:54.645 [INFO][4978] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia0e8c942a5e ContainerID="8e36fcc3919ed937890eefdec6e14407c69946b9a4ad7ad6e7e58af2f62c3d29" Namespace="calico-system" Pod="csi-node-driver-qvwlf" WorkloadEndpoint="ip--172--31--26--54-k8s-csi--node--driver--qvwlf-eth0" Jan 22 00:42:54.754505 containerd[1953]: 2026-01-22 00:42:54.715 [INFO][4978] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8e36fcc3919ed937890eefdec6e14407c69946b9a4ad7ad6e7e58af2f62c3d29" Namespace="calico-system" Pod="csi-node-driver-qvwlf" WorkloadEndpoint="ip--172--31--26--54-k8s-csi--node--driver--qvwlf-eth0" Jan 22 00:42:54.754505 containerd[1953]: 2026-01-22 00:42:54.716 [INFO][4978] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8e36fcc3919ed937890eefdec6e14407c69946b9a4ad7ad6e7e58af2f62c3d29" 
Namespace="calico-system" Pod="csi-node-driver-qvwlf" WorkloadEndpoint="ip--172--31--26--54-k8s-csi--node--driver--qvwlf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--54-k8s-csi--node--driver--qvwlf-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9a122a32-e7c8-4162-bccb-4b71d5c37d97", ResourceVersion:"725", Generation:0, CreationTimestamp:time.Date(2026, time.January, 22, 0, 42, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-54", ContainerID:"8e36fcc3919ed937890eefdec6e14407c69946b9a4ad7ad6e7e58af2f62c3d29", Pod:"csi-node-driver-qvwlf", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.74.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia0e8c942a5e", MAC:"da:16:c9:9d:2d:af", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 22 00:42:54.754505 containerd[1953]: 2026-01-22 00:42:54.746 [INFO][4978] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8e36fcc3919ed937890eefdec6e14407c69946b9a4ad7ad6e7e58af2f62c3d29" Namespace="calico-system" Pod="csi-node-driver-qvwlf" WorkloadEndpoint="ip--172--31--26--54-k8s-csi--node--driver--qvwlf-eth0" Jan 22 00:42:54.768000 audit: BPF prog-id=209 op=LOAD Jan 22 00:42:54.770000 audit: BPF prog-id=210 op=LOAD Jan 22 00:42:54.770000 audit[5056]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=5045 pid=5056 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:54.770000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6166373238383033616435383561616134613962376462613438646139 Jan 22 00:42:54.770000 audit: BPF prog-id=210 op=UNLOAD Jan 22 00:42:54.770000 audit[5056]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5045 pid=5056 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:54.770000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6166373238383033616435383561616134613962376462613438646139 Jan 22 00:42:54.771000 audit: BPF prog-id=211 op=LOAD Jan 22 00:42:54.771000 audit[5056]: 
SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=5045 pid=5056 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:54.771000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6166373238383033616435383561616134613962376462613438646139 Jan 22 00:42:54.772000 audit: BPF prog-id=212 op=LOAD Jan 22 00:42:54.772000 audit[5056]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=5045 pid=5056 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:54.772000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6166373238383033616435383561616134613962376462613438646139 Jan 22 00:42:54.772000 audit: BPF prog-id=212 op=UNLOAD Jan 22 00:42:54.772000 audit[5056]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5045 pid=5056 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:54.772000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6166373238383033616435383561616134613962376462613438646139 Jan 22 00:42:54.772000 audit: BPF prog-id=211 op=UNLOAD Jan 22 00:42:54.772000 audit[5056]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5045 pid=5056 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:54.772000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6166373238383033616435383561616134613962376462613438646139 Jan 22 00:42:54.772000 audit: BPF prog-id=213 op=LOAD Jan 22 00:42:54.772000 audit[5056]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=5045 pid=5056 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:54.772000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6166373238383033616435383561616134613962376462613438646139 Jan 22 00:42:54.802832 systemd-networkd[1860]: cali3dcd1f8ba77: Link UP Jan 22 00:42:54.808495 systemd-networkd[1860]: cali3dcd1f8ba77: Gained carrier Jan 22 00:42:54.838324 containerd[1953]: time="2026-01-22T00:42:54.838276863Z" level=info 
msg="connecting to shim 8e36fcc3919ed937890eefdec6e14407c69946b9a4ad7ad6e7e58af2f62c3d29" address="unix:///run/containerd/s/0235881f9e0c32c7cfc08504542313678dcac27a1522f241e9986e409b211844" namespace=k8s.io protocol=ttrpc version=3 Jan 22 00:42:54.864084 containerd[1953]: 2026-01-22 00:42:54.253 [INFO][4970] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 22 00:42:54.864084 containerd[1953]: 2026-01-22 00:42:54.293 [INFO][4970] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--26--54-k8s-calico--apiserver--6bf9469c9d--jt79q-eth0 calico-apiserver-6bf9469c9d- calico-apiserver 4921dab1-a0fc-4a4a-9ec2-f3f09160935a 838 0 2026-01-22 00:42:22 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6bf9469c9d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-26-54 calico-apiserver-6bf9469c9d-jt79q eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali3dcd1f8ba77 [] [] }} ContainerID="d09832228ff55238d674edaa2a4b0375f756934d609de5de9c177a7709a44501" Namespace="calico-apiserver" Pod="calico-apiserver-6bf9469c9d-jt79q" WorkloadEndpoint="ip--172--31--26--54-k8s-calico--apiserver--6bf9469c9d--jt79q-" Jan 22 00:42:54.864084 containerd[1953]: 2026-01-22 00:42:54.293 [INFO][4970] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d09832228ff55238d674edaa2a4b0375f756934d609de5de9c177a7709a44501" Namespace="calico-apiserver" Pod="calico-apiserver-6bf9469c9d-jt79q" WorkloadEndpoint="ip--172--31--26--54-k8s-calico--apiserver--6bf9469c9d--jt79q-eth0" Jan 22 00:42:54.864084 containerd[1953]: 2026-01-22 00:42:54.425 [INFO][5017] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d09832228ff55238d674edaa2a4b0375f756934d609de5de9c177a7709a44501" HandleID="k8s-pod-network.d09832228ff55238d674edaa2a4b0375f756934d609de5de9c177a7709a44501" Workload="ip--172--31--26--54-k8s-calico--apiserver--6bf9469c9d--jt79q-eth0" Jan 22 00:42:54.864084 containerd[1953]: 2026-01-22 00:42:54.426 [INFO][5017] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="d09832228ff55238d674edaa2a4b0375f756934d609de5de9c177a7709a44501" HandleID="k8s-pod-network.d09832228ff55238d674edaa2a4b0375f756934d609de5de9c177a7709a44501" Workload="ip--172--31--26--54-k8s-calico--apiserver--6bf9469c9d--jt79q-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001027c0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-26-54", "pod":"calico-apiserver-6bf9469c9d-jt79q", "timestamp":"2026-01-22 00:42:54.425245182 +0000 UTC"}, Hostname:"ip-172-31-26-54", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 22 00:42:54.864084 containerd[1953]: 2026-01-22 00:42:54.426 [INFO][5017] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 22 00:42:54.864084 containerd[1953]: 2026-01-22 00:42:54.633 [INFO][5017] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 22 00:42:54.864084 containerd[1953]: 2026-01-22 00:42:54.635 [INFO][5017] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-26-54' Jan 22 00:42:54.864084 containerd[1953]: 2026-01-22 00:42:54.686 [INFO][5017] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d09832228ff55238d674edaa2a4b0375f756934d609de5de9c177a7709a44501" host="ip-172-31-26-54" Jan 22 00:42:54.864084 containerd[1953]: 2026-01-22 00:42:54.704 [INFO][5017] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-26-54" Jan 22 00:42:54.864084 containerd[1953]: 2026-01-22 00:42:54.731 [INFO][5017] ipam/ipam.go 511: Trying affinity for 192.168.74.0/26 host="ip-172-31-26-54" Jan 22 00:42:54.864084 containerd[1953]: 2026-01-22 00:42:54.738 [INFO][5017] ipam/ipam.go 158: Attempting to load block cidr=192.168.74.0/26 host="ip-172-31-26-54" Jan 22 00:42:54.864084 containerd[1953]: 2026-01-22 00:42:54.751 [INFO][5017] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.74.0/26 host="ip-172-31-26-54" Jan 22 00:42:54.864084 containerd[1953]: 2026-01-22 00:42:54.751 [INFO][5017] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.74.0/26 handle="k8s-pod-network.d09832228ff55238d674edaa2a4b0375f756934d609de5de9c177a7709a44501" host="ip-172-31-26-54" Jan 22 00:42:54.864084 containerd[1953]: 2026-01-22 00:42:54.755 [INFO][5017] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.d09832228ff55238d674edaa2a4b0375f756934d609de5de9c177a7709a44501 Jan 22 00:42:54.864084 containerd[1953]: 2026-01-22 00:42:54.766 [INFO][5017] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.74.0/26 handle="k8s-pod-network.d09832228ff55238d674edaa2a4b0375f756934d609de5de9c177a7709a44501" host="ip-172-31-26-54" Jan 22 00:42:54.864084 containerd[1953]: 2026-01-22 00:42:54.787 [INFO][5017] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.74.7/26] block=192.168.74.0/26 handle="k8s-pod-network.d09832228ff55238d674edaa2a4b0375f756934d609de5de9c177a7709a44501" host="ip-172-31-26-54" Jan 22 00:42:54.864084 containerd[1953]: 2026-01-22 00:42:54.787 [INFO][5017] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.74.7/26] handle="k8s-pod-network.d09832228ff55238d674edaa2a4b0375f756934d609de5de9c177a7709a44501" host="ip-172-31-26-54" Jan 22 00:42:54.864084 containerd[1953]: 2026-01-22 00:42:54.787 [INFO][5017] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 22 00:42:54.864084 containerd[1953]: 2026-01-22 00:42:54.787 [INFO][5017] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.74.7/26] IPv6=[] ContainerID="d09832228ff55238d674edaa2a4b0375f756934d609de5de9c177a7709a44501" HandleID="k8s-pod-network.d09832228ff55238d674edaa2a4b0375f756934d609de5de9c177a7709a44501" Workload="ip--172--31--26--54-k8s-calico--apiserver--6bf9469c9d--jt79q-eth0" Jan 22 00:42:54.866736 containerd[1953]: 2026-01-22 00:42:54.794 [INFO][4970] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d09832228ff55238d674edaa2a4b0375f756934d609de5de9c177a7709a44501" Namespace="calico-apiserver" Pod="calico-apiserver-6bf9469c9d-jt79q" WorkloadEndpoint="ip--172--31--26--54-k8s-calico--apiserver--6bf9469c9d--jt79q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--54-k8s-calico--apiserver--6bf9469c9d--jt79q-eth0", GenerateName:"calico-apiserver-6bf9469c9d-", Namespace:"calico-apiserver", SelfLink:"", UID:"4921dab1-a0fc-4a4a-9ec2-f3f09160935a", ResourceVersion:"838", Generation:0, CreationTimestamp:time.Date(2026, time.January, 22, 0, 42, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6bf9469c9d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-54", ContainerID:"", Pod:"calico-apiserver-6bf9469c9d-jt79q", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.74.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3dcd1f8ba77", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 22 00:42:54.866736 containerd[1953]: 2026-01-22 00:42:54.795 [INFO][4970] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.74.7/32] ContainerID="d09832228ff55238d674edaa2a4b0375f756934d609de5de9c177a7709a44501" Namespace="calico-apiserver" Pod="calico-apiserver-6bf9469c9d-jt79q" WorkloadEndpoint="ip--172--31--26--54-k8s-calico--apiserver--6bf9469c9d--jt79q-eth0" Jan 22 00:42:54.866736 containerd[1953]: 2026-01-22 00:42:54.795 [INFO][4970] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3dcd1f8ba77 ContainerID="d09832228ff55238d674edaa2a4b0375f756934d609de5de9c177a7709a44501" Namespace="calico-apiserver" Pod="calico-apiserver-6bf9469c9d-jt79q" WorkloadEndpoint="ip--172--31--26--54-k8s-calico--apiserver--6bf9469c9d--jt79q-eth0" Jan 22 00:42:54.866736 containerd[1953]: 2026-01-22 00:42:54.808 [INFO][4970] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d09832228ff55238d674edaa2a4b0375f756934d609de5de9c177a7709a44501" Namespace="calico-apiserver" Pod="calico-apiserver-6bf9469c9d-jt79q" WorkloadEndpoint="ip--172--31--26--54-k8s-calico--apiserver--6bf9469c9d--jt79q-eth0" Jan 22 00:42:54.866736 containerd[1953]: 2026-01-22 00:42:54.815 [INFO][4970] cni-plugin/k8s.go 446: Added Mac, interface name, and 
active container ID to endpoint ContainerID="d09832228ff55238d674edaa2a4b0375f756934d609de5de9c177a7709a44501" Namespace="calico-apiserver" Pod="calico-apiserver-6bf9469c9d-jt79q" WorkloadEndpoint="ip--172--31--26--54-k8s-calico--apiserver--6bf9469c9d--jt79q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--54-k8s-calico--apiserver--6bf9469c9d--jt79q-eth0", GenerateName:"calico-apiserver-6bf9469c9d-", Namespace:"calico-apiserver", SelfLink:"", UID:"4921dab1-a0fc-4a4a-9ec2-f3f09160935a", ResourceVersion:"838", Generation:0, CreationTimestamp:time.Date(2026, time.January, 22, 0, 42, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6bf9469c9d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-54", ContainerID:"d09832228ff55238d674edaa2a4b0375f756934d609de5de9c177a7709a44501", Pod:"calico-apiserver-6bf9469c9d-jt79q", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.74.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3dcd1f8ba77", MAC:"46:a2:6a:9c:b5:fd", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 22 00:42:54.866736 containerd[1953]: 2026-01-22 00:42:54.852 [INFO][4970] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d09832228ff55238d674edaa2a4b0375f756934d609de5de9c177a7709a44501" Namespace="calico-apiserver" Pod="calico-apiserver-6bf9469c9d-jt79q" WorkloadEndpoint="ip--172--31--26--54-k8s-calico--apiserver--6bf9469c9d--jt79q-eth0" Jan 22 00:42:54.902193 systemd[1]: Started cri-containerd-8e36fcc3919ed937890eefdec6e14407c69946b9a4ad7ad6e7e58af2f62c3d29.scope - libcontainer container 8e36fcc3919ed937890eefdec6e14407c69946b9a4ad7ad6e7e58af2f62c3d29. 
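The three CNI adds above all confirm this node's affinity for the IPAM block 192.168.74.0/26 and then claim the next free address in it: .5 for coredns-668d6bf9bc-8sggz, .6 for csi-node-driver-qvwlf and .7 for calico-apiserver-6bf9469c9d-jt79q (.4 had already gone to coredns-668d6bf9bc-6nhlr). A minimal standard-library sketch of that "first free host in the affine block" step; this is an illustration rather than Calico's actual allocator, and the assumption that .1 through .3 went to pods networked earlier in the boot is not shown in this excerpt:

```python
import ipaddress

# Block affine to ip-172-31-26-54, per the "Trying affinity for 192.168.74.0/26" entries.
block = ipaddress.ip_network("192.168.74.0/26")

# .4-.6 are visible in this journal; .1-.3 are assumed taken by earlier pods (not shown here).
already_assigned = {ipaddress.ip_address(f"192.168.74.{i}") for i in range(1, 7)}

# First free host address in the affine block; the next claim in the log is indeed .7.
next_ip = next(ip for ip in block.hosts() if ip not in already_assigned)
print(next_ip)   # 192.168.74.7
```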
Jan 22 00:42:54.924002 containerd[1953]: time="2026-01-22T00:42:54.923836112Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-8sggz,Uid:10b09332-a889-486e-9821-a11a8220ea2e,Namespace:kube-system,Attempt:0,} returns sandbox id \"af728803ad585aaa4a9b7dba48da9c950cc34983103581a42a7df68602bd8117\"" Jan 22 00:42:54.933238 containerd[1953]: time="2026-01-22T00:42:54.931621215Z" level=info msg="CreateContainer within sandbox \"af728803ad585aaa4a9b7dba48da9c950cc34983103581a42a7df68602bd8117\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 22 00:42:54.940444 containerd[1953]: time="2026-01-22T00:42:54.940393101Z" level=info msg="connecting to shim d09832228ff55238d674edaa2a4b0375f756934d609de5de9c177a7709a44501" address="unix:///run/containerd/s/454500c4e4a18f995175e43fed976cec68a3ede6bf2f35793d0d38e053ce6328" namespace=k8s.io protocol=ttrpc version=3 Jan 22 00:42:54.950821 containerd[1953]: time="2026-01-22T00:42:54.950776413Z" level=info msg="Container 676f29ba09eac9a3c7a9e460ddb3545b689a51431492ea28e47b19f5512e4031: CDI devices from CRI Config.CDIDevices: []" Jan 22 00:42:54.967329 containerd[1953]: time="2026-01-22T00:42:54.967184289Z" level=info msg="CreateContainer within sandbox \"af728803ad585aaa4a9b7dba48da9c950cc34983103581a42a7df68602bd8117\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"676f29ba09eac9a3c7a9e460ddb3545b689a51431492ea28e47b19f5512e4031\"" Jan 22 00:42:54.971176 containerd[1953]: time="2026-01-22T00:42:54.970563059Z" level=info msg="StartContainer for \"676f29ba09eac9a3c7a9e460ddb3545b689a51431492ea28e47b19f5512e4031\"" Jan 22 00:42:54.974628 containerd[1953]: time="2026-01-22T00:42:54.974567568Z" level=info msg="connecting to shim 676f29ba09eac9a3c7a9e460ddb3545b689a51431492ea28e47b19f5512e4031" address="unix:///run/containerd/s/5271ebb5162776d65b43427ebe6aaf54ca29a4341d553a5db4b384301ededeb2" protocol=ttrpc version=3 Jan 22 00:42:55.009177 systemd[1]: Started cri-containerd-d09832228ff55238d674edaa2a4b0375f756934d609de5de9c177a7709a44501.scope - libcontainer container d09832228ff55238d674edaa2a4b0375f756934d609de5de9c177a7709a44501. Jan 22 00:42:55.032631 systemd[1]: Started cri-containerd-676f29ba09eac9a3c7a9e460ddb3545b689a51431492ea28e47b19f5512e4031.scope - libcontainer container 676f29ba09eac9a3c7a9e460ddb3545b689a51431492ea28e47b19f5512e4031. 
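The audit records that follow log each runc invocation with a PROCTITLE field, which is the process command line hex-encoded with NUL bytes separating the arguments. A minimal Go sketch for decoding one of those values follows; the constant is a truncated prefix of the full field from the records below, kept short for readability.

package main

import (
	"encoding/hex"
	"fmt"
	"strings"
)

func main() {
	// Prefix of a PROCTITLE value from the audit records below; the kernel
	// hex-encodes argv with NUL separators between arguments.
	const proctitle = "72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F"

	raw, err := hex.DecodeString(proctitle)
	if err != nil {
		panic(err)
	}
	args := strings.Split(string(raw), "\x00")
	fmt.Println(strings.Join(args, " ")) // runc --root /run/containerd/runc/k8s.io
}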
Jan 22 00:42:55.035502 containerd[1953]: time="2026-01-22T00:42:55.034249599Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5bd666cb6-pggl9,Uid:3c1a4692-58d4-49e9-ab29-e19d2d7ee1e4,Namespace:calico-system,Attempt:0,}" Jan 22 00:42:55.111269 systemd-networkd[1860]: calid25edf507d9: Gained IPv6LL Jan 22 00:42:55.115000 audit: BPF prog-id=214 op=LOAD Jan 22 00:42:55.126000 audit: BPF prog-id=215 op=LOAD Jan 22 00:42:55.126000 audit[5178]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8238 a2=98 a3=0 items=0 ppid=5045 pid=5178 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:55.126000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3637366632396261303965616339613363376139653436306464623335 Jan 22 00:42:55.126000 audit: BPF prog-id=215 op=UNLOAD Jan 22 00:42:55.126000 audit[5178]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5045 pid=5178 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:55.126000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3637366632396261303965616339613363376139653436306464623335 Jan 22 00:42:55.129000 audit: BPF prog-id=216 op=LOAD Jan 22 00:42:55.132000 audit: BPF prog-id=217 op=LOAD Jan 22 00:42:55.132000 audit[5178]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=5045 pid=5178 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:55.132000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3637366632396261303965616339613363376139653436306464623335 Jan 22 00:42:55.133000 audit: BPF prog-id=218 op=LOAD Jan 22 00:42:55.133000 audit[5178]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=5045 pid=5178 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:55.133000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3637366632396261303965616339613363376139653436306464623335 Jan 22 00:42:55.134000 audit: BPF prog-id=218 op=UNLOAD Jan 22 00:42:55.134000 audit[5178]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5045 pid=5178 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 
00:42:55.134000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3637366632396261303965616339613363376139653436306464623335 Jan 22 00:42:55.135000 audit: BPF prog-id=217 op=UNLOAD Jan 22 00:42:55.135000 audit[5178]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5045 pid=5178 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:55.135000 audit: BPF prog-id=219 op=LOAD Jan 22 00:42:55.135000 audit[5124]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=5111 pid=5124 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:55.135000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3865333666636333393139656439333738393065656664656336653134 Jan 22 00:42:55.137000 audit: BPF prog-id=219 op=UNLOAD Jan 22 00:42:55.137000 audit[5124]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=5111 pid=5124 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:55.135000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3637366632396261303965616339613363376139653436306464623335 Jan 22 00:42:55.137000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3865333666636333393139656439333738393065656664656336653134 Jan 22 00:42:55.140000 audit: BPF prog-id=220 op=LOAD Jan 22 00:42:55.140000 audit[5124]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=5111 pid=5124 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:55.140000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3865333666636333393139656439333738393065656664656336653134 Jan 22 00:42:55.143000 audit: BPF prog-id=221 op=LOAD Jan 22 00:42:55.143000 audit[5124]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=5111 pid=5124 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:55.143000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3865333666636333393139656439333738393065656664656336653134 Jan 22 00:42:55.143000 audit: BPF prog-id=221 op=UNLOAD Jan 22 00:42:55.143000 audit[5124]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=5111 pid=5124 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:55.143000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3865333666636333393139656439333738393065656664656336653134 Jan 22 00:42:55.143000 audit: BPF prog-id=220 op=UNLOAD Jan 22 00:42:55.143000 audit[5124]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=5111 pid=5124 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:55.143000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3865333666636333393139656439333738393065656664656336653134 Jan 22 00:42:55.143000 audit: BPF prog-id=222 op=LOAD Jan 22 00:42:55.143000 audit[5124]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=5111 pid=5124 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:55.143000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3865333666636333393139656439333738393065656664656336653134 Jan 22 00:42:55.137000 audit: BPF prog-id=223 op=LOAD Jan 22 00:42:55.137000 audit[5178]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 ppid=5045 pid=5178 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:55.137000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3637366632396261303965616339613363376139653436306464623335 Jan 22 00:42:55.189000 audit: BPF prog-id=224 op=LOAD Jan 22 00:42:55.192000 audit: BPF prog-id=225 op=LOAD Jan 22 00:42:55.192000 audit[5169]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178238 a2=98 a3=0 items=0 ppid=5158 pid=5169 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:55.192000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6430393833323232386666353532333864363734656461613261346230 Jan 22 00:42:55.193000 audit: BPF prog-id=225 op=UNLOAD Jan 22 00:42:55.193000 audit[5169]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5158 pid=5169 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:55.193000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6430393833323232386666353532333864363734656461613261346230 Jan 22 00:42:55.193000 audit: BPF prog-id=226 op=LOAD Jan 22 00:42:55.193000 audit[5169]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178488 a2=98 a3=0 items=0 ppid=5158 pid=5169 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:55.193000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6430393833323232386666353532333864363734656461613261346230 Jan 22 00:42:55.193000 audit: BPF prog-id=227 op=LOAD Jan 22 00:42:55.193000 audit[5169]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000178218 a2=98 a3=0 items=0 ppid=5158 pid=5169 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:55.193000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6430393833323232386666353532333864363734656461613261346230 Jan 22 00:42:55.193000 audit: BPF prog-id=227 op=UNLOAD Jan 22 00:42:55.193000 audit[5169]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5158 pid=5169 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:55.193000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6430393833323232386666353532333864363734656461613261346230 Jan 22 00:42:55.193000 audit: BPF prog-id=226 op=UNLOAD Jan 22 00:42:55.193000 audit[5169]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5158 pid=5169 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:55.193000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6430393833323232386666353532333864363734656461613261346230 Jan 22 00:42:55.196850 containerd[1953]: time="2026-01-22T00:42:55.196818531Z" level=info msg="StartContainer for \"676f29ba09eac9a3c7a9e460ddb3545b689a51431492ea28e47b19f5512e4031\" returns successfully" Jan 22 00:42:55.193000 audit: BPF prog-id=228 op=LOAD Jan 22 00:42:55.193000 audit[5169]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001786e8 a2=98 a3=0 items=0 ppid=5158 pid=5169 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:55.193000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6430393833323232386666353532333864363734656461613261346230 Jan 22 00:42:55.278551 containerd[1953]: time="2026-01-22T00:42:55.276453852Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qvwlf,Uid:9a122a32-e7c8-4162-bccb-4b71d5c37d97,Namespace:calico-system,Attempt:0,} returns sandbox id \"8e36fcc3919ed937890eefdec6e14407c69946b9a4ad7ad6e7e58af2f62c3d29\"" Jan 22 00:42:55.282164 containerd[1953]: time="2026-01-22T00:42:55.282129235Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 22 00:42:55.417080 containerd[1953]: time="2026-01-22T00:42:55.417037034Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6bf9469c9d-jt79q,Uid:4921dab1-a0fc-4a4a-9ec2-f3f09160935a,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"d09832228ff55238d674edaa2a4b0375f756934d609de5de9c177a7709a44501\"" Jan 22 00:42:55.435426 systemd-networkd[1860]: calid0298a9e94e: Link UP Jan 22 00:42:55.441446 systemd-networkd[1860]: calid0298a9e94e: Gained carrier Jan 22 00:42:55.481315 kubelet[3437]: I0122 00:42:55.481213 3437 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-8sggz" podStartSLOduration=44.481172829 podStartE2EDuration="44.481172829s" podCreationTimestamp="2026-01-22 00:42:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 00:42:55.478380962 +0000 UTC m=+49.710347110" watchObservedRunningTime="2026-01-22 00:42:55.481172829 +0000 UTC m=+49.713138968" Jan 22 00:42:55.485946 containerd[1953]: 2026-01-22 00:42:55.157 [INFO][5213] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 22 00:42:55.485946 containerd[1953]: 2026-01-22 00:42:55.180 [INFO][5213] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--26--54-k8s-calico--kube--controllers--5bd666cb6--pggl9-eth0 calico-kube-controllers-5bd666cb6- calico-system 3c1a4692-58d4-49e9-ab29-e19d2d7ee1e4 840 0 2026-01-22 00:42:27 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5bd666cb6 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-26-54 calico-kube-controllers-5bd666cb6-pggl9 eth0 calico-kube-controllers 
[] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calid0298a9e94e [] [] }} ContainerID="1df10b5253627c08b16bd18ac0995a73f79076f75441fa47e3a66c33a33e865f" Namespace="calico-system" Pod="calico-kube-controllers-5bd666cb6-pggl9" WorkloadEndpoint="ip--172--31--26--54-k8s-calico--kube--controllers--5bd666cb6--pggl9-" Jan 22 00:42:55.485946 containerd[1953]: 2026-01-22 00:42:55.180 [INFO][5213] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1df10b5253627c08b16bd18ac0995a73f79076f75441fa47e3a66c33a33e865f" Namespace="calico-system" Pod="calico-kube-controllers-5bd666cb6-pggl9" WorkloadEndpoint="ip--172--31--26--54-k8s-calico--kube--controllers--5bd666cb6--pggl9-eth0" Jan 22 00:42:55.485946 containerd[1953]: 2026-01-22 00:42:55.317 [INFO][5239] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1df10b5253627c08b16bd18ac0995a73f79076f75441fa47e3a66c33a33e865f" HandleID="k8s-pod-network.1df10b5253627c08b16bd18ac0995a73f79076f75441fa47e3a66c33a33e865f" Workload="ip--172--31--26--54-k8s-calico--kube--controllers--5bd666cb6--pggl9-eth0" Jan 22 00:42:55.485946 containerd[1953]: 2026-01-22 00:42:55.317 [INFO][5239] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="1df10b5253627c08b16bd18ac0995a73f79076f75441fa47e3a66c33a33e865f" HandleID="k8s-pod-network.1df10b5253627c08b16bd18ac0995a73f79076f75441fa47e3a66c33a33e865f" Workload="ip--172--31--26--54-k8s-calico--kube--controllers--5bd666cb6--pggl9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003dab50), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-26-54", "pod":"calico-kube-controllers-5bd666cb6-pggl9", "timestamp":"2026-01-22 00:42:55.317064292 +0000 UTC"}, Hostname:"ip-172-31-26-54", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 22 00:42:55.485946 containerd[1953]: 2026-01-22 00:42:55.317 [INFO][5239] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 22 00:42:55.485946 containerd[1953]: 2026-01-22 00:42:55.317 [INFO][5239] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 22 00:42:55.485946 containerd[1953]: 2026-01-22 00:42:55.317 [INFO][5239] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-26-54' Jan 22 00:42:55.485946 containerd[1953]: 2026-01-22 00:42:55.334 [INFO][5239] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1df10b5253627c08b16bd18ac0995a73f79076f75441fa47e3a66c33a33e865f" host="ip-172-31-26-54" Jan 22 00:42:55.485946 containerd[1953]: 2026-01-22 00:42:55.343 [INFO][5239] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-26-54" Jan 22 00:42:55.485946 containerd[1953]: 2026-01-22 00:42:55.370 [INFO][5239] ipam/ipam.go 511: Trying affinity for 192.168.74.0/26 host="ip-172-31-26-54" Jan 22 00:42:55.485946 containerd[1953]: 2026-01-22 00:42:55.378 [INFO][5239] ipam/ipam.go 158: Attempting to load block cidr=192.168.74.0/26 host="ip-172-31-26-54" Jan 22 00:42:55.485946 containerd[1953]: 2026-01-22 00:42:55.389 [INFO][5239] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.74.0/26 host="ip-172-31-26-54" Jan 22 00:42:55.485946 containerd[1953]: 2026-01-22 00:42:55.389 [INFO][5239] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.74.0/26 handle="k8s-pod-network.1df10b5253627c08b16bd18ac0995a73f79076f75441fa47e3a66c33a33e865f" host="ip-172-31-26-54" Jan 22 00:42:55.485946 containerd[1953]: 2026-01-22 00:42:55.393 [INFO][5239] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.1df10b5253627c08b16bd18ac0995a73f79076f75441fa47e3a66c33a33e865f Jan 22 00:42:55.485946 containerd[1953]: 2026-01-22 00:42:55.400 [INFO][5239] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.74.0/26 handle="k8s-pod-network.1df10b5253627c08b16bd18ac0995a73f79076f75441fa47e3a66c33a33e865f" host="ip-172-31-26-54" Jan 22 00:42:55.485946 containerd[1953]: 2026-01-22 00:42:55.419 [INFO][5239] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.74.8/26] block=192.168.74.0/26 handle="k8s-pod-network.1df10b5253627c08b16bd18ac0995a73f79076f75441fa47e3a66c33a33e865f" host="ip-172-31-26-54" Jan 22 00:42:55.485946 containerd[1953]: 2026-01-22 00:42:55.420 [INFO][5239] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.74.8/26] handle="k8s-pod-network.1df10b5253627c08b16bd18ac0995a73f79076f75441fa47e3a66c33a33e865f" host="ip-172-31-26-54" Jan 22 00:42:55.485946 containerd[1953]: 2026-01-22 00:42:55.420 [INFO][5239] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
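The IPAM trace above shows the allocator confirming this node's affinity for the 192.168.74.0/26 block and then claiming 192.168.74.8 from it under the host-wide lock. A small Go sketch of the arithmetic involved, using an illustrative calculation rather than Calico's real block-allocator code:

package main

import (
	"fmt"
	"net"
)

func main() {
	// Block and address taken from the log lines above.
	_, block, err := net.ParseCIDR("192.168.74.0/26")
	if err != nil {
		panic(err)
	}
	ip := net.ParseIP("192.168.74.8").To4()

	ones, bits := block.Mask.Size()
	fmt.Println("addresses per block:", 1<<(bits-ones)) // 64 for a /26
	fmt.Println("in block:", block.Contains(ip))        // true

	// Ordinal of the address inside the block (valid here because the block
	// starts on .0); Calico's real allocator tracks these slots in a
	// per-block datastore object guarded by the host-wide IPAM lock.
	offset := int(ip[3]) - int(block.IP.To4()[3])
	fmt.Println("slot:", offset) // 8
}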
Jan 22 00:42:55.485946 containerd[1953]: 2026-01-22 00:42:55.420 [INFO][5239] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.74.8/26] IPv6=[] ContainerID="1df10b5253627c08b16bd18ac0995a73f79076f75441fa47e3a66c33a33e865f" HandleID="k8s-pod-network.1df10b5253627c08b16bd18ac0995a73f79076f75441fa47e3a66c33a33e865f" Workload="ip--172--31--26--54-k8s-calico--kube--controllers--5bd666cb6--pggl9-eth0" Jan 22 00:42:55.487236 containerd[1953]: 2026-01-22 00:42:55.426 [INFO][5213] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1df10b5253627c08b16bd18ac0995a73f79076f75441fa47e3a66c33a33e865f" Namespace="calico-system" Pod="calico-kube-controllers-5bd666cb6-pggl9" WorkloadEndpoint="ip--172--31--26--54-k8s-calico--kube--controllers--5bd666cb6--pggl9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--54-k8s-calico--kube--controllers--5bd666cb6--pggl9-eth0", GenerateName:"calico-kube-controllers-5bd666cb6-", Namespace:"calico-system", SelfLink:"", UID:"3c1a4692-58d4-49e9-ab29-e19d2d7ee1e4", ResourceVersion:"840", Generation:0, CreationTimestamp:time.Date(2026, time.January, 22, 0, 42, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5bd666cb6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-54", ContainerID:"", Pod:"calico-kube-controllers-5bd666cb6-pggl9", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.74.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid0298a9e94e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 22 00:42:55.487236 containerd[1953]: 2026-01-22 00:42:55.427 [INFO][5213] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.74.8/32] ContainerID="1df10b5253627c08b16bd18ac0995a73f79076f75441fa47e3a66c33a33e865f" Namespace="calico-system" Pod="calico-kube-controllers-5bd666cb6-pggl9" WorkloadEndpoint="ip--172--31--26--54-k8s-calico--kube--controllers--5bd666cb6--pggl9-eth0" Jan 22 00:42:55.487236 containerd[1953]: 2026-01-22 00:42:55.427 [INFO][5213] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid0298a9e94e ContainerID="1df10b5253627c08b16bd18ac0995a73f79076f75441fa47e3a66c33a33e865f" Namespace="calico-system" Pod="calico-kube-controllers-5bd666cb6-pggl9" WorkloadEndpoint="ip--172--31--26--54-k8s-calico--kube--controllers--5bd666cb6--pggl9-eth0" Jan 22 00:42:55.487236 containerd[1953]: 2026-01-22 00:42:55.450 [INFO][5213] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1df10b5253627c08b16bd18ac0995a73f79076f75441fa47e3a66c33a33e865f" Namespace="calico-system" Pod="calico-kube-controllers-5bd666cb6-pggl9" WorkloadEndpoint="ip--172--31--26--54-k8s-calico--kube--controllers--5bd666cb6--pggl9-eth0" Jan 22 00:42:55.487236 containerd[1953]: 2026-01-22 00:42:55.455 
[INFO][5213] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1df10b5253627c08b16bd18ac0995a73f79076f75441fa47e3a66c33a33e865f" Namespace="calico-system" Pod="calico-kube-controllers-5bd666cb6-pggl9" WorkloadEndpoint="ip--172--31--26--54-k8s-calico--kube--controllers--5bd666cb6--pggl9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--54-k8s-calico--kube--controllers--5bd666cb6--pggl9-eth0", GenerateName:"calico-kube-controllers-5bd666cb6-", Namespace:"calico-system", SelfLink:"", UID:"3c1a4692-58d4-49e9-ab29-e19d2d7ee1e4", ResourceVersion:"840", Generation:0, CreationTimestamp:time.Date(2026, time.January, 22, 0, 42, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5bd666cb6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-54", ContainerID:"1df10b5253627c08b16bd18ac0995a73f79076f75441fa47e3a66c33a33e865f", Pod:"calico-kube-controllers-5bd666cb6-pggl9", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.74.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid0298a9e94e", MAC:"1e:9c:8b:2e:98:f1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 22 00:42:55.487236 containerd[1953]: 2026-01-22 00:42:55.474 [INFO][5213] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1df10b5253627c08b16bd18ac0995a73f79076f75441fa47e3a66c33a33e865f" Namespace="calico-system" Pod="calico-kube-controllers-5bd666cb6-pggl9" WorkloadEndpoint="ip--172--31--26--54-k8s-calico--kube--controllers--5bd666cb6--pggl9-eth0" Jan 22 00:42:55.547296 containerd[1953]: time="2026-01-22T00:42:55.547150634Z" level=info msg="connecting to shim 1df10b5253627c08b16bd18ac0995a73f79076f75441fa47e3a66c33a33e865f" address="unix:///run/containerd/s/de94be4891b072589525860c46a7c6df3529b066fd25c292d73ab1579ac0ba8a" namespace=k8s.io protocol=ttrpc version=3 Jan 22 00:42:55.574150 containerd[1953]: time="2026-01-22T00:42:55.572996853Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 22 00:42:55.575107 containerd[1953]: time="2026-01-22T00:42:55.575044974Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 22 00:42:55.575226 containerd[1953]: time="2026-01-22T00:42:55.575177126Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 22 00:42:55.578127 kubelet[3437]: E0122 00:42:55.577563 3437 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 22 00:42:55.578127 kubelet[3437]: E0122 00:42:55.577622 3437 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 22 00:42:55.578127 kubelet[3437]: E0122 00:42:55.577858 3437 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ng7wz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-qvwlf_calico-system(9a122a32-e7c8-4162-bccb-4b71d5c37d97): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 22 00:42:55.578888 containerd[1953]: time="2026-01-22T00:42:55.578680217Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 22 00:42:55.616823 systemd[1]: Started cri-containerd-1df10b5253627c08b16bd18ac0995a73f79076f75441fa47e3a66c33a33e865f.scope - libcontainer container 1df10b5253627c08b16bd18ac0995a73f79076f75441fa47e3a66c33a33e865f. 
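The pull failures above are plain 404s from ghcr.io: the requested tags do not resolve, so containerd reports NotFound and the kubelet surfaces ErrImagePull. A diagnostic sketch using the containerd Go client to reproduce one pull outside the kubelet, with the socket path and k8s.io namespace as seen in this log; the error handling is illustrative only.

package main

import (
	"context"
	"fmt"
	"log"

	containerd "github.com/containerd/containerd"
	"github.com/containerd/containerd/errdefs"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// The tag the kubelet asked for; the registry answered 404, so the
	// resolver reports "not found" and the pod stays in ErrImagePull.
	_, err = client.Pull(ctx, "ghcr.io/flatcar/calico/csi:v3.30.4", containerd.WithPullUnpack)
	if errdefs.IsNotFound(err) {
		fmt.Println("image tag does not exist upstream:", err)
	} else if err != nil {
		log.Fatal(err)
	}
}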
Jan 22 00:42:55.697000 audit[5279]: NETFILTER_CFG table=filter:127 family=2 entries=17 op=nft_register_rule pid=5279 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 00:42:55.697000 audit[5279]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fffd8d89660 a2=0 a3=7fffd8d8964c items=0 ppid=3540 pid=5279 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:55.697000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 00:42:55.707000 audit[5279]: NETFILTER_CFG table=nat:128 family=2 entries=35 op=nft_register_chain pid=5279 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 00:42:55.707000 audit[5279]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7fffd8d89660 a2=0 a3=7fffd8d8964c items=0 ppid=3540 pid=5279 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:55.707000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 00:42:55.744000 audit: BPF prog-id=229 op=LOAD Jan 22 00:42:55.745000 audit: BPF prog-id=230 op=LOAD Jan 22 00:42:55.745000 audit[5301]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000228238 a2=98 a3=0 items=0 ppid=5289 pid=5301 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:55.745000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3164663130623532353336323763303862313662643138616330393935 Jan 22 00:42:55.746000 audit: BPF prog-id=230 op=UNLOAD Jan 22 00:42:55.746000 audit[5301]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=5289 pid=5301 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:55.746000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3164663130623532353336323763303862313662643138616330393935 Jan 22 00:42:55.746000 audit: BPF prog-id=231 op=LOAD Jan 22 00:42:55.746000 audit[5301]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000228488 a2=98 a3=0 items=0 ppid=5289 pid=5301 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:55.746000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3164663130623532353336323763303862313662643138616330393935 Jan 22 00:42:55.746000 audit: BPF prog-id=232 op=LOAD Jan 22 00:42:55.746000 
audit[5301]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000228218 a2=98 a3=0 items=0 ppid=5289 pid=5301 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:55.746000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3164663130623532353336323763303862313662643138616330393935 Jan 22 00:42:55.746000 audit: BPF prog-id=232 op=UNLOAD Jan 22 00:42:55.746000 audit[5301]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=5289 pid=5301 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:55.746000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3164663130623532353336323763303862313662643138616330393935 Jan 22 00:42:55.746000 audit: BPF prog-id=231 op=UNLOAD Jan 22 00:42:55.746000 audit[5301]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=5289 pid=5301 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:55.746000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3164663130623532353336323763303862313662643138616330393935 Jan 22 00:42:55.746000 audit: BPF prog-id=233 op=LOAD Jan 22 00:42:55.746000 audit[5301]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0002286e8 a2=98 a3=0 items=0 ppid=5289 pid=5301 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:55.746000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3164663130623532353336323763303862313662643138616330393935 Jan 22 00:42:55.850686 containerd[1953]: time="2026-01-22T00:42:55.850645243Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5bd666cb6-pggl9,Uid:3c1a4692-58d4-49e9-ab29-e19d2d7ee1e4,Namespace:calico-system,Attempt:0,} returns sandbox id \"1df10b5253627c08b16bd18ac0995a73f79076f75441fa47e3a66c33a33e865f\"" Jan 22 00:42:55.923118 containerd[1953]: time="2026-01-22T00:42:55.923078001Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 22 00:42:55.925473 containerd[1953]: time="2026-01-22T00:42:55.925431385Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 22 00:42:55.926200 containerd[1953]: 
time="2026-01-22T00:42:55.925516346Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 22 00:42:55.926248 kubelet[3437]: E0122 00:42:55.925780 3437 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 22 00:42:55.926248 kubelet[3437]: E0122 00:42:55.925831 3437 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 22 00:42:55.927143 containerd[1953]: time="2026-01-22T00:42:55.926654108Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 22 00:42:55.927331 kubelet[3437]: E0122 00:42:55.926413 3437 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c5pzc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6bf9469c9d-jt79q_calico-apiserver(4921dab1-a0fc-4a4a-9ec2-f3f09160935a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 22 00:42:55.929718 kubelet[3437]: E0122 
00:42:55.929456 3437 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6bf9469c9d-jt79q" podUID="4921dab1-a0fc-4a4a-9ec2-f3f09160935a" Jan 22 00:42:55.973000 audit: BPF prog-id=234 op=LOAD Jan 22 00:42:55.973000 audit[5339]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffcfca48ea0 a2=98 a3=1fffffffffffffff items=0 ppid=5241 pid=5339 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:55.973000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 22 00:42:55.974000 audit: BPF prog-id=234 op=UNLOAD Jan 22 00:42:55.974000 audit[5339]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffcfca48e70 a3=0 items=0 ppid=5241 pid=5339 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:55.974000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 22 00:42:55.975000 audit: BPF prog-id=235 op=LOAD Jan 22 00:42:55.975000 audit[5339]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffcfca48d80 a2=94 a3=3 items=0 ppid=5241 pid=5339 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:55.975000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 22 00:42:55.975000 audit: BPF prog-id=235 op=UNLOAD Jan 22 00:42:55.975000 audit[5339]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffcfca48d80 a2=94 a3=3 items=0 ppid=5241 pid=5339 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:55.975000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 22 00:42:55.975000 audit: BPF prog-id=236 op=LOAD Jan 22 00:42:55.975000 audit[5339]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffcfca48dc0 a2=94 a3=7ffcfca48fa0 items=0 ppid=5241 pid=5339 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" 
exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:55.975000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 22 00:42:55.975000 audit: BPF prog-id=236 op=UNLOAD Jan 22 00:42:55.975000 audit[5339]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffcfca48dc0 a2=94 a3=7ffcfca48fa0 items=0 ppid=5241 pid=5339 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:55.975000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 22 00:42:55.979000 audit: BPF prog-id=237 op=LOAD Jan 22 00:42:55.979000 audit[5340]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffec9116440 a2=98 a3=3 items=0 ppid=5241 pid=5340 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:55.979000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 22 00:42:55.979000 audit: BPF prog-id=237 op=UNLOAD Jan 22 00:42:55.979000 audit[5340]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffec9116410 a3=0 items=0 ppid=5241 pid=5340 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:55.979000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 22 00:42:55.980000 audit: BPF prog-id=238 op=LOAD Jan 22 00:42:55.980000 audit[5340]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffec9116230 a2=94 a3=54428f items=0 ppid=5241 pid=5340 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:55.980000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 22 00:42:55.980000 audit: BPF prog-id=238 op=UNLOAD Jan 22 00:42:55.980000 audit[5340]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffec9116230 a2=94 a3=54428f items=0 ppid=5241 pid=5340 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:55.980000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 22 00:42:55.980000 audit: BPF prog-id=239 op=LOAD Jan 22 00:42:55.980000 audit[5340]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffec9116260 a2=94 a3=2 items=0 ppid=5241 pid=5340 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:55.980000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 22 00:42:55.980000 audit: BPF prog-id=239 op=UNLOAD Jan 22 
00:42:55.980000 audit[5340]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffec9116260 a2=0 a3=2 items=0 ppid=5241 pid=5340 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:55.980000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 22 00:42:56.138964 systemd-networkd[1860]: calib049bf96d60: Gained IPv6LL Jan 22 00:42:56.198000 audit: BPF prog-id=240 op=LOAD Jan 22 00:42:56.198000 audit[5340]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffec9116120 a2=94 a3=1 items=0 ppid=5241 pid=5340 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:56.198000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 22 00:42:56.198000 audit: BPF prog-id=240 op=UNLOAD Jan 22 00:42:56.198000 audit[5340]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffec9116120 a2=94 a3=1 items=0 ppid=5241 pid=5340 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:56.198000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 22 00:42:56.211000 audit: BPF prog-id=241 op=LOAD Jan 22 00:42:56.211000 audit[5340]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffec9116110 a2=94 a3=4 items=0 ppid=5241 pid=5340 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:56.211000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 22 00:42:56.211000 audit: BPF prog-id=241 op=UNLOAD Jan 22 00:42:56.211000 audit[5340]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffec9116110 a2=0 a3=4 items=0 ppid=5241 pid=5340 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:56.211000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 22 00:42:56.212000 audit: BPF prog-id=242 op=LOAD Jan 22 00:42:56.212000 audit[5340]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffec9115f70 a2=94 a3=5 items=0 ppid=5241 pid=5340 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:56.212000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 22 00:42:56.212000 audit: BPF prog-id=242 op=UNLOAD Jan 22 00:42:56.212000 audit[5340]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffec9115f70 a2=0 a3=5 items=0 ppid=5241 pid=5340 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:56.212000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 22 00:42:56.212000 audit: BPF prog-id=243 op=LOAD Jan 22 00:42:56.212000 audit[5340]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffec9116190 a2=94 a3=6 
items=0 ppid=5241 pid=5340 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:56.212000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 22 00:42:56.212000 audit: BPF prog-id=243 op=UNLOAD Jan 22 00:42:56.212000 audit[5340]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffec9116190 a2=0 a3=6 items=0 ppid=5241 pid=5340 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:56.212000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 22 00:42:56.213000 audit: BPF prog-id=244 op=LOAD Jan 22 00:42:56.213000 audit[5340]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffec9115940 a2=94 a3=88 items=0 ppid=5241 pid=5340 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:56.213000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 22 00:42:56.213000 audit: BPF prog-id=245 op=LOAD Jan 22 00:42:56.213000 audit[5340]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7ffec91157c0 a2=94 a3=2 items=0 ppid=5241 pid=5340 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:56.213000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 22 00:42:56.213000 audit: BPF prog-id=245 op=UNLOAD Jan 22 00:42:56.213000 audit[5340]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7ffec91157f0 a2=0 a3=7ffec91158f0 items=0 ppid=5241 pid=5340 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:56.213000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 22 00:42:56.214000 audit: BPF prog-id=244 op=UNLOAD Jan 22 00:42:56.214000 audit[5340]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=3a3a1d10 a2=0 a3=2396c3236b072dd items=0 ppid=5241 pid=5340 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:56.214000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 22 00:42:56.242266 containerd[1953]: time="2026-01-22T00:42:56.242099277Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 22 00:42:56.244759 containerd[1953]: time="2026-01-22T00:42:56.244598471Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 22 00:42:56.244759 containerd[1953]: time="2026-01-22T00:42:56.244698160Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 22 00:42:56.245133 kubelet[3437]: E0122 00:42:56.245034 3437 log.go:32] 
"PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 22 00:42:56.245133 kubelet[3437]: E0122 00:42:56.245092 3437 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 22 00:42:56.245750 kubelet[3437]: E0122 00:42:56.245697 3437 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ng7wz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-qvwlf_calico-system(9a122a32-e7c8-4162-bccb-4b71d5c37d97): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 22 00:42:56.247414 kubelet[3437]: E0122 00:42:56.247362 3437 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack 
image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-qvwlf" podUID="9a122a32-e7c8-4162-bccb-4b71d5c37d97" Jan 22 00:42:56.247691 containerd[1953]: time="2026-01-22T00:42:56.247603658Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 22 00:42:56.262041 systemd-networkd[1860]: cali3dcd1f8ba77: Gained IPv6LL Jan 22 00:42:56.264000 audit: BPF prog-id=246 op=LOAD Jan 22 00:42:56.264000 audit[5345]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd62143f90 a2=98 a3=1999999999999999 items=0 ppid=5241 pid=5345 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:56.264000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 22 00:42:56.264000 audit: BPF prog-id=246 op=UNLOAD Jan 22 00:42:56.264000 audit[5345]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffd62143f60 a3=0 items=0 ppid=5241 pid=5345 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:56.264000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 22 00:42:56.264000 audit: BPF prog-id=247 op=LOAD Jan 22 00:42:56.264000 audit[5345]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd62143e70 a2=94 a3=ffff items=0 ppid=5241 pid=5345 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:56.264000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 22 00:42:56.264000 audit: BPF prog-id=247 op=UNLOAD Jan 22 00:42:56.264000 audit[5345]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffd62143e70 a2=94 a3=ffff items=0 ppid=5241 pid=5345 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:56.264000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 22 00:42:56.264000 audit: BPF prog-id=248 op=LOAD Jan 22 00:42:56.264000 audit[5345]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd62143eb0 a2=94 a3=7ffd62144090 items=0 ppid=5241 pid=5345 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:56.264000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 22 00:42:56.264000 audit: BPF prog-id=248 op=UNLOAD Jan 22 00:42:56.264000 audit[5345]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffd62143eb0 a2=94 a3=7ffd62144090 items=0 ppid=5241 pid=5345 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:56.264000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 22 00:42:56.342307 systemd-networkd[1860]: vxlan.calico: Link UP Jan 22 00:42:56.342314 systemd-networkd[1860]: vxlan.calico: Gained carrier Jan 22 00:42:56.343325 (udev-worker)[4440]: Network interface NamePolicy= disabled on kernel command line. Jan 22 00:42:56.385000 audit: BPF prog-id=249 op=LOAD Jan 22 00:42:56.385000 audit[5370]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffdd5fd3b80 a2=98 a3=0 items=0 ppid=5241 pid=5370 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:56.385000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 22 00:42:56.388000 audit: BPF prog-id=249 op=UNLOAD Jan 22 00:42:56.388000 audit[5370]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffdd5fd3b50 a3=0 items=0 ppid=5241 pid=5370 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:56.388000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 22 00:42:56.389000 audit: BPF prog-id=250 op=LOAD Jan 22 00:42:56.389000 audit[5370]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffdd5fd3990 a2=94 a3=54428f items=0 ppid=5241 pid=5370 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:56.389000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 22 00:42:56.390000 audit: BPF prog-id=250 op=UNLOAD Jan 22 00:42:56.390000 audit[5370]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffdd5fd3990 a2=94 a3=54428f items=0 ppid=5241 pid=5370 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:56.390000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 22 00:42:56.390000 audit: BPF prog-id=251 op=LOAD Jan 22 00:42:56.390000 audit[5370]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffdd5fd39c0 a2=94 a3=2 items=0 ppid=5241 pid=5370 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:56.390000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 22 00:42:56.390000 audit: BPF prog-id=251 op=UNLOAD Jan 22 00:42:56.390000 audit[5370]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffdd5fd39c0 a2=0 a3=2 items=0 ppid=5241 pid=5370 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:56.390000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 22 00:42:56.390000 audit: BPF prog-id=252 op=LOAD Jan 22 00:42:56.390000 audit[5370]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffdd5fd3770 a2=94 a3=4 items=0 ppid=5241 pid=5370 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:56.390000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 22 00:42:56.390000 audit: BPF prog-id=252 op=UNLOAD Jan 22 00:42:56.390000 audit[5370]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffdd5fd3770 a2=94 a3=4 items=0 ppid=5241 pid=5370 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:56.390000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 22 00:42:56.390000 audit: BPF prog-id=253 op=LOAD Jan 22 00:42:56.390000 audit[5370]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffdd5fd3870 a2=94 a3=7ffdd5fd39f0 items=0 ppid=5241 pid=5370 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:56.390000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 22 00:42:56.391000 audit: BPF prog-id=253 op=UNLOAD Jan 22 00:42:56.391000 audit[5370]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffdd5fd3870 a2=0 a3=7ffdd5fd39f0 items=0 ppid=5241 pid=5370 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:56.391000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 22 00:42:56.391000 audit: BPF prog-id=254 op=LOAD Jan 22 00:42:56.391000 audit[5370]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffdd5fd2fa0 a2=94 a3=2 items=0 ppid=5241 pid=5370 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:56.391000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 22 00:42:56.393000 audit: BPF prog-id=254 op=UNLOAD Jan 22 00:42:56.393000 audit[5370]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffdd5fd2fa0 a2=0 a3=2 items=0 ppid=5241 pid=5370 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:56.393000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 22 00:42:56.393000 audit: BPF prog-id=255 op=LOAD Jan 22 00:42:56.393000 audit[5370]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffdd5fd30a0 a2=94 a3=30 items=0 ppid=5241 pid=5370 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:56.393000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 22 00:42:56.408000 audit: BPF prog-id=256 op=LOAD Jan 22 00:42:56.408000 audit[5373]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff41215ee0 a2=98 a3=0 items=0 ppid=5241 pid=5373 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:56.408000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 22 00:42:56.408000 audit: BPF prog-id=256 op=UNLOAD Jan 22 00:42:56.408000 audit[5373]: SYSCALL arch=c000003e syscall=3 success=yes 
exit=0 a0=3 a1=8 a2=7fff41215eb0 a3=0 items=0 ppid=5241 pid=5373 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:56.408000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 22 00:42:56.408000 audit: BPF prog-id=257 op=LOAD Jan 22 00:42:56.408000 audit[5373]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff41215cd0 a2=94 a3=54428f items=0 ppid=5241 pid=5373 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:56.408000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 22 00:42:56.408000 audit: BPF prog-id=257 op=UNLOAD Jan 22 00:42:56.408000 audit[5373]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7fff41215cd0 a2=94 a3=54428f items=0 ppid=5241 pid=5373 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:56.408000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 22 00:42:56.408000 audit: BPF prog-id=258 op=LOAD Jan 22 00:42:56.408000 audit[5373]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff41215d00 a2=94 a3=2 items=0 ppid=5241 pid=5373 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:56.408000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 22 00:42:56.408000 audit: BPF prog-id=258 op=UNLOAD Jan 22 00:42:56.408000 audit[5373]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7fff41215d00 a2=0 a3=2 items=0 ppid=5241 pid=5373 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:56.408000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 22 00:42:56.446360 kubelet[3437]: E0122 00:42:56.446082 3437 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6bf9469c9d-jt79q" podUID="4921dab1-a0fc-4a4a-9ec2-f3f09160935a" Jan 22 00:42:56.448367 
kubelet[3437]: E0122 00:42:56.448227 3437 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-qvwlf" podUID="9a122a32-e7c8-4162-bccb-4b71d5c37d97" Jan 22 00:42:56.521189 containerd[1953]: time="2026-01-22T00:42:56.521142352Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 22 00:42:56.524116 containerd[1953]: time="2026-01-22T00:42:56.524057645Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 22 00:42:56.524248 containerd[1953]: time="2026-01-22T00:42:56.524185581Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 22 00:42:56.524493 kubelet[3437]: E0122 00:42:56.524393 3437 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 22 00:42:56.524895 kubelet[3437]: E0122 00:42:56.524497 3437 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 22 00:42:56.524895 kubelet[3437]: E0122 00:42:56.524679 3437 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hwx64,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-5bd666cb6-pggl9_calico-system(3c1a4692-58d4-49e9-ab29-e19d2d7ee1e4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 22 00:42:56.526486 kubelet[3437]: E0122 00:42:56.526448 3437 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5bd666cb6-pggl9" podUID="3c1a4692-58d4-49e9-ab29-e19d2d7ee1e4" Jan 22 00:42:56.527000 audit[5375]: NETFILTER_CFG table=filter:129 family=2 entries=14 op=nft_register_rule pid=5375 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 00:42:56.527000 audit[5375]: SYSCALL 
arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fff9c03c2a0 a2=0 a3=7fff9c03c28c items=0 ppid=3540 pid=5375 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:56.527000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 00:42:56.562000 audit[5375]: NETFILTER_CFG table=nat:130 family=2 entries=56 op=nft_register_chain pid=5375 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 00:42:56.562000 audit[5375]: SYSCALL arch=c000003e syscall=46 success=yes exit=19860 a0=3 a1=7fff9c03c2a0 a2=0 a3=7fff9c03c28c items=0 ppid=3540 pid=5375 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:56.562000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 00:42:56.690000 audit: BPF prog-id=259 op=LOAD Jan 22 00:42:56.690000 audit[5373]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff41215bc0 a2=94 a3=1 items=0 ppid=5241 pid=5373 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:56.690000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 22 00:42:56.690000 audit: BPF prog-id=259 op=UNLOAD Jan 22 00:42:56.690000 audit[5373]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7fff41215bc0 a2=94 a3=1 items=0 ppid=5241 pid=5373 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:56.690000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 22 00:42:56.704000 audit: BPF prog-id=260 op=LOAD Jan 22 00:42:56.704000 audit[5373]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fff41215bb0 a2=94 a3=4 items=0 ppid=5241 pid=5373 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:56.704000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 22 00:42:56.705000 audit: BPF prog-id=260 op=UNLOAD Jan 22 00:42:56.705000 audit[5373]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7fff41215bb0 a2=0 a3=4 items=0 ppid=5241 pid=5373 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:56.705000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 22 00:42:56.705000 audit: BPF prog-id=261 op=LOAD Jan 22 00:42:56.705000 audit[5373]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fff41215a10 a2=94 a3=5 items=0 ppid=5241 pid=5373 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:56.705000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 22 00:42:56.706000 audit: BPF prog-id=261 op=UNLOAD Jan 22 00:42:56.706000 audit[5373]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7fff41215a10 a2=0 a3=5 items=0 ppid=5241 pid=5373 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:56.706000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 22 00:42:56.706000 audit: BPF prog-id=262 op=LOAD Jan 22 00:42:56.706000 audit[5373]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fff41215c30 a2=94 a3=6 items=0 ppid=5241 pid=5373 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:56.706000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 22 00:42:56.706000 audit: BPF prog-id=262 op=UNLOAD Jan 22 00:42:56.706000 audit[5373]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7fff41215c30 a2=0 a3=6 items=0 ppid=5241 pid=5373 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:56.706000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 22 00:42:56.706000 audit: BPF prog-id=263 op=LOAD Jan 22 00:42:56.706000 audit[5373]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fff412153e0 a2=94 a3=88 items=0 ppid=5241 pid=5373 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:56.706000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 22 00:42:56.706000 audit: BPF prog-id=264 op=LOAD Jan 22 00:42:56.706000 audit[5373]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7fff41215260 a2=94 a3=2 items=0 ppid=5241 pid=5373 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:56.706000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 22 00:42:56.706000 audit: BPF prog-id=264 op=UNLOAD Jan 22 00:42:56.706000 audit[5373]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7fff41215290 a2=0 a3=7fff41215390 items=0 ppid=5241 pid=5373 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:56.706000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 22 00:42:56.707000 audit: BPF prog-id=263 op=UNLOAD Jan 22 00:42:56.707000 audit[5373]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=2d214d10 a2=0 a3=a7c19bb6b2c82dfb items=0 ppid=5241 pid=5373 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:56.707000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 22 00:42:56.710149 systemd-networkd[1860]: calia0e8c942a5e: Gained IPv6LL Jan 22 00:42:56.716000 audit: BPF prog-id=255 op=UNLOAD Jan 22 00:42:56.716000 audit[5241]: SYSCALL arch=c000003e syscall=263 success=yes exit=0 a0=ffffffffffffff9c a1=c00101e0c0 a2=0 a3=0 items=0 ppid=4518 pid=5241 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="calico-node" exe="/usr/bin/calico-node" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:56.716000 audit: PROCTITLE proctitle=63616C69636F2D6E6F6465002D66656C6978 Jan 22 00:42:56.817000 audit[5406]: NETFILTER_CFG table=mangle:131 family=2 entries=16 op=nft_register_chain pid=5406 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 22 00:42:56.817000 audit[5406]: SYSCALL arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7ffedec62100 a2=0 a3=7ffedec620ec items=0 ppid=5241 pid=5406 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:56.817000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 22 00:42:56.831000 audit[5404]: NETFILTER_CFG table=raw:132 family=2 entries=21 op=nft_register_chain pid=5404 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 22 00:42:56.831000 audit[5404]: SYSCALL arch=c000003e syscall=46 success=yes exit=8452 a0=3 a1=7ffc3880d3f0 a2=0 a3=7ffc3880d3dc items=0 ppid=5241 pid=5404 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:56.831000 audit: PROCTITLE 
proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 22 00:42:56.844000 audit[5410]: NETFILTER_CFG table=nat:133 family=2 entries=15 op=nft_register_chain pid=5410 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 22 00:42:56.844000 audit[5410]: SYSCALL arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7ffe5fff6a50 a2=0 a3=7ffe5fff6a3c items=0 ppid=5241 pid=5410 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:56.844000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 22 00:42:56.859000 audit[5409]: NETFILTER_CFG table=filter:134 family=2 entries=333 op=nft_register_chain pid=5409 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 22 00:42:56.859000 audit[5409]: SYSCALL arch=c000003e syscall=46 success=yes exit=196320 a0=3 a1=7ffe105f06b0 a2=0 a3=7ffe105f069c items=0 ppid=5241 pid=5409 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:42:56.859000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 22 00:42:57.031308 systemd-networkd[1860]: calid0298a9e94e: Gained IPv6LL Jan 22 00:42:57.445958 kubelet[3437]: E0122 00:42:57.445904 3437 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5bd666cb6-pggl9" podUID="3c1a4692-58d4-49e9-ab29-e19d2d7ee1e4" Jan 22 00:42:57.446207 kubelet[3437]: E0122 00:42:57.446130 3437 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6bf9469c9d-jt79q" podUID="4921dab1-a0fc-4a4a-9ec2-f3f09160935a" Jan 22 00:42:57.606964 systemd-networkd[1860]: vxlan.calico: Gained IPv6LL Jan 22 00:43:00.055855 ntpd[2222]: Listen normally on 6 vxlan.calico 192.168.74.0:123 Jan 22 00:43:00.056040 ntpd[2222]: Listen normally on 7 cali3cd2fcf1209 [fe80::ecee:eeff:feee:eeee%4]:123 Jan 22 00:43:00.069572 ntpd[2222]: 22 Jan 00:43:00 ntpd[2222]: Listen normally on 6 vxlan.calico 192.168.74.0:123 Jan 22 00:43:00.069572 ntpd[2222]: 22 Jan 00:43:00 ntpd[2222]: Listen normally on 7 cali3cd2fcf1209 [fe80::ecee:eeff:feee:eeee%4]:123 Jan 22 00:43:00.069572 ntpd[2222]: 22 Jan 00:43:00 ntpd[2222]: Listen normally on 8 calia72685caf17 [fe80::ecee:eeff:feee:eeee%5]:123 Jan 
22 00:43:00.069572 ntpd[2222]: 22 Jan 00:43:00 ntpd[2222]: Listen normally on 9 califf8e7b8b89e [fe80::ecee:eeff:feee:eeee%6]:123 Jan 22 00:43:00.069572 ntpd[2222]: 22 Jan 00:43:00 ntpd[2222]: Listen normally on 10 calid25edf507d9 [fe80::ecee:eeff:feee:eeee%7]:123 Jan 22 00:43:00.069572 ntpd[2222]: 22 Jan 00:43:00 ntpd[2222]: Listen normally on 11 calib049bf96d60 [fe80::ecee:eeff:feee:eeee%8]:123 Jan 22 00:43:00.069572 ntpd[2222]: 22 Jan 00:43:00 ntpd[2222]: Listen normally on 12 calia0e8c942a5e [fe80::ecee:eeff:feee:eeee%9]:123 Jan 22 00:43:00.069572 ntpd[2222]: 22 Jan 00:43:00 ntpd[2222]: Listen normally on 13 cali3dcd1f8ba77 [fe80::ecee:eeff:feee:eeee%10]:123 Jan 22 00:43:00.069572 ntpd[2222]: 22 Jan 00:43:00 ntpd[2222]: Listen normally on 14 calid0298a9e94e [fe80::ecee:eeff:feee:eeee%11]:123 Jan 22 00:43:00.069572 ntpd[2222]: 22 Jan 00:43:00 ntpd[2222]: Listen normally on 15 vxlan.calico [fe80::6458:5cff:fe30:c31d%12]:123 Jan 22 00:43:00.056074 ntpd[2222]: Listen normally on 8 calia72685caf17 [fe80::ecee:eeff:feee:eeee%5]:123 Jan 22 00:43:00.056108 ntpd[2222]: Listen normally on 9 califf8e7b8b89e [fe80::ecee:eeff:feee:eeee%6]:123 Jan 22 00:43:00.056636 ntpd[2222]: Listen normally on 10 calid25edf507d9 [fe80::ecee:eeff:feee:eeee%7]:123 Jan 22 00:43:00.056677 ntpd[2222]: Listen normally on 11 calib049bf96d60 [fe80::ecee:eeff:feee:eeee%8]:123 Jan 22 00:43:00.056705 ntpd[2222]: Listen normally on 12 calia0e8c942a5e [fe80::ecee:eeff:feee:eeee%9]:123 Jan 22 00:43:00.056733 ntpd[2222]: Listen normally on 13 cali3dcd1f8ba77 [fe80::ecee:eeff:feee:eeee%10]:123 Jan 22 00:43:00.057060 ntpd[2222]: Listen normally on 14 calid0298a9e94e [fe80::ecee:eeff:feee:eeee%11]:123 Jan 22 00:43:00.057103 ntpd[2222]: Listen normally on 15 vxlan.calico [fe80::6458:5cff:fe30:c31d%12]:123 Jan 22 00:43:05.028587 containerd[1953]: time="2026-01-22T00:43:05.028545264Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 22 00:43:05.289957 containerd[1953]: time="2026-01-22T00:43:05.289786365Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 22 00:43:05.292496 containerd[1953]: time="2026-01-22T00:43:05.292411862Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 22 00:43:05.293515 containerd[1953]: time="2026-01-22T00:43:05.292416474Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 22 00:43:05.293596 kubelet[3437]: E0122 00:43:05.292721 3437 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 22 00:43:05.293596 kubelet[3437]: E0122 00:43:05.292801 3437 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 22 00:43:05.293596 kubelet[3437]: E0122 00:43:05.293013 3437 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-frzdg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-4rjzm_calico-system(ffc8de62-f696-4cfe-ab23-12f34741b8d0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 22 00:43:05.294491 kubelet[3437]: E0122 00:43:05.294270 3437 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-4rjzm" podUID="ffc8de62-f696-4cfe-ab23-12f34741b8d0" Jan 22 00:43:06.034555 containerd[1953]: time="2026-01-22T00:43:06.034395888Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 
22 00:43:06.486059 containerd[1953]: time="2026-01-22T00:43:06.486012231Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 22 00:43:06.488156 containerd[1953]: time="2026-01-22T00:43:06.488109574Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 22 00:43:06.488595 containerd[1953]: time="2026-01-22T00:43:06.488134422Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 22 00:43:06.488651 kubelet[3437]: E0122 00:43:06.488345 3437 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 22 00:43:06.488651 kubelet[3437]: E0122 00:43:06.488389 3437 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 22 00:43:06.488651 kubelet[3437]: E0122 00:43:06.488510 3437 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ksv99,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start 
failed in pod calico-apiserver-6bf9469c9d-75c6s_calico-apiserver(0db24862-6144-45b1-8f39-39a11a3c80dd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 22 00:43:06.490502 kubelet[3437]: E0122 00:43:06.490339 3437 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6bf9469c9d-75c6s" podUID="0db24862-6144-45b1-8f39-39a11a3c80dd" Jan 22 00:43:07.028712 containerd[1953]: time="2026-01-22T00:43:07.028674362Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 22 00:43:07.376625 containerd[1953]: time="2026-01-22T00:43:07.376578535Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 22 00:43:07.378775 containerd[1953]: time="2026-01-22T00:43:07.378725321Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 22 00:43:07.379039 containerd[1953]: time="2026-01-22T00:43:07.378811277Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 22 00:43:07.379229 kubelet[3437]: E0122 00:43:07.379007 3437 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 22 00:43:07.379229 kubelet[3437]: E0122 00:43:07.379055 3437 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 22 00:43:07.379358 kubelet[3437]: E0122 00:43:07.379238 3437 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:1493b4b6e39948d398b0f1f06536a205,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-q7cwt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-65dd85ff-jgxjj_calico-system(96b1517d-4481-408c-9294-a20121dee9ff): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 22 00:43:07.382153 containerd[1953]: time="2026-01-22T00:43:07.382075440Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 22 00:43:07.654905 containerd[1953]: time="2026-01-22T00:43:07.654743977Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 22 00:43:07.657384 containerd[1953]: time="2026-01-22T00:43:07.657283295Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 22 00:43:07.657579 containerd[1953]: time="2026-01-22T00:43:07.657337553Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 22 00:43:07.657630 kubelet[3437]: E0122 00:43:07.657528 3437 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 22 00:43:07.658655 kubelet[3437]: E0122 00:43:07.657746 3437 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 22 00:43:07.663918 kubelet[3437]: E0122 00:43:07.663856 3437 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-q7cwt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-65dd85ff-jgxjj_calico-system(96b1517d-4481-408c-9294-a20121dee9ff): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 22 00:43:07.665192 kubelet[3437]: E0122 00:43:07.665143 3437 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-65dd85ff-jgxjj" podUID="96b1517d-4481-408c-9294-a20121dee9ff" Jan 22 00:43:09.671000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-172.31.26.54:22-68.220.241.50:34562 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:43:09.672107 systemd[1]: Started sshd@9-172.31.26.54:22-68.220.241.50:34562.service - OpenSSH per-connection server daemon (68.220.241.50:34562). 
Jan 22 00:43:09.673502 kernel: kauditd_printk_skb: 372 callbacks suppressed Jan 22 00:43:09.673561 kernel: audit: type=1130 audit(1769042589.671:736): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-172.31.26.54:22-68.220.241.50:34562 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:43:10.030432 containerd[1953]: time="2026-01-22T00:43:10.029982504Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 22 00:43:10.155997 sshd[5441]: Accepted publickey for core from 68.220.241.50 port 34562 ssh2: RSA SHA256:guZGX9gbNcoOyrr8VXliJQHZZzuPYZGvC0Dn+A+42nM Jan 22 00:43:10.154000 audit[5441]: USER_ACCT pid=5441 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:43:10.160081 sshd-session[5441]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 22 00:43:10.156000 audit[5441]: CRED_ACQ pid=5441 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:43:10.162426 kernel: audit: type=1101 audit(1769042590.154:737): pid=5441 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:43:10.162502 kernel: audit: type=1103 audit(1769042590.156:738): pid=5441 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:43:10.169538 kernel: audit: type=1006 audit(1769042590.156:739): pid=5441 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1 Jan 22 00:43:10.170921 kernel: audit: type=1300 audit(1769042590.156:739): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd84da3040 a2=3 a3=0 items=0 ppid=1 pid=5441 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:43:10.156000 audit[5441]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd84da3040 a2=3 a3=0 items=0 ppid=1 pid=5441 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:43:10.175447 kernel: audit: type=1327 audit(1769042590.156:739): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 22 00:43:10.156000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 22 00:43:10.183689 systemd-logind[1939]: New session 10 of user core. Jan 22 00:43:10.188133 systemd[1]: Started session-10.scope - Session 10 of User core. 
Jan 22 00:43:10.191000 audit[5441]: USER_START pid=5441 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:43:10.199053 kernel: audit: type=1105 audit(1769042590.191:740): pid=5441 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:43:10.199391 kernel: audit: type=1103 audit(1769042590.196:741): pid=5444 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:43:10.196000 audit[5444]: CRED_ACQ pid=5444 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:43:10.287598 containerd[1953]: time="2026-01-22T00:43:10.287443154Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 22 00:43:10.289687 containerd[1953]: time="2026-01-22T00:43:10.289624081Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 22 00:43:10.289827 containerd[1953]: time="2026-01-22T00:43:10.289712990Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 22 00:43:10.289961 kubelet[3437]: E0122 00:43:10.289921 3437 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 22 00:43:10.290243 kubelet[3437]: E0122 00:43:10.289983 3437 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 22 00:43:10.290369 containerd[1953]: time="2026-01-22T00:43:10.290349045Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 22 00:43:10.290692 kubelet[3437]: E0122 00:43:10.290633 3437 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hwx64,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-5bd666cb6-pggl9_calico-system(3c1a4692-58d4-49e9-ab29-e19d2d7ee1e4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 22 00:43:10.292023 kubelet[3437]: E0122 00:43:10.291990 3437 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5bd666cb6-pggl9" podUID="3c1a4692-58d4-49e9-ab29-e19d2d7ee1e4" Jan 22 00:43:10.573024 containerd[1953]: time="2026-01-22T00:43:10.572980593Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 22 00:43:10.575403 containerd[1953]: 
time="2026-01-22T00:43:10.575343183Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 22 00:43:10.575514 containerd[1953]: time="2026-01-22T00:43:10.575430827Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 22 00:43:10.575668 kubelet[3437]: E0122 00:43:10.575580 3437 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 22 00:43:10.575668 kubelet[3437]: E0122 00:43:10.575642 3437 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 22 00:43:10.575796 kubelet[3437]: E0122 00:43:10.575753 3437 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c5pzc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6bf9469c9d-jt79q_calico-apiserver(4921dab1-a0fc-4a4a-9ec2-f3f09160935a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 22 00:43:10.577848 kubelet[3437]: E0122 00:43:10.577801 3437 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6bf9469c9d-jt79q" podUID="4921dab1-a0fc-4a4a-9ec2-f3f09160935a" Jan 22 00:43:11.029768 containerd[1953]: time="2026-01-22T00:43:11.029642346Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 22 00:43:11.336274 containerd[1953]: time="2026-01-22T00:43:11.336180238Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 22 00:43:11.338414 containerd[1953]: time="2026-01-22T00:43:11.338353667Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 22 00:43:11.338535 containerd[1953]: time="2026-01-22T00:43:11.338438892Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 22 00:43:11.338701 kubelet[3437]: E0122 00:43:11.338606 3437 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 22 00:43:11.338701 kubelet[3437]: E0122 00:43:11.338682 3437 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 22 00:43:11.339284 kubelet[3437]: E0122 00:43:11.338951 3437 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ng7wz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-qvwlf_calico-system(9a122a32-e7c8-4162-bccb-4b71d5c37d97): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 22 00:43:11.342212 containerd[1953]: time="2026-01-22T00:43:11.341265202Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 22 00:43:11.347064 sshd[5444]: Connection closed by 68.220.241.50 port 34562 Jan 22 00:43:11.348349 sshd-session[5441]: pam_unix(sshd:session): session closed for user core Jan 22 00:43:11.356000 audit[5441]: USER_END pid=5441 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:43:11.365269 kernel: audit: type=1106 audit(1769042591.356:742): pid=5441 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:43:11.372302 systemd[1]: sshd@9-172.31.26.54:22-68.220.241.50:34562.service: Deactivated successfully. Jan 22 00:43:11.375549 systemd[1]: session-10.scope: Deactivated successfully. 
Jan 22 00:43:11.382352 kernel: audit: type=1104 audit(1769042591.364:743): pid=5441 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:43:11.364000 audit[5441]: CRED_DISP pid=5441 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:43:11.385562 systemd-logind[1939]: Session 10 logged out. Waiting for processes to exit. Jan 22 00:43:11.371000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-172.31.26.54:22-68.220.241.50:34562 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:43:11.395460 systemd-logind[1939]: Removed session 10. Jan 22 00:43:11.664395 containerd[1953]: time="2026-01-22T00:43:11.664243523Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 22 00:43:11.666625 containerd[1953]: time="2026-01-22T00:43:11.666566296Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 22 00:43:11.666625 containerd[1953]: time="2026-01-22T00:43:11.666583388Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 22 00:43:11.667670 kubelet[3437]: E0122 00:43:11.666832 3437 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 22 00:43:11.667670 kubelet[3437]: E0122 00:43:11.666902 3437 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 22 00:43:11.667670 kubelet[3437]: E0122 00:43:11.667021 3437 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ng7wz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-qvwlf_calico-system(9a122a32-e7c8-4162-bccb-4b71d5c37d97): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 22 00:43:11.668797 kubelet[3437]: E0122 00:43:11.668753 3437 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-qvwlf" podUID="9a122a32-e7c8-4162-bccb-4b71d5c37d97" Jan 22 00:43:16.437074 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 22 00:43:16.437174 kernel: audit: type=1130 audit(1769042596.433:745): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-172.31.26.54:22-68.220.241.50:50602 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:43:16.433000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-172.31.26.54:22-68.220.241.50:50602 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 22 00:43:16.434237 systemd[1]: Started sshd@10-172.31.26.54:22-68.220.241.50:50602.service - OpenSSH per-connection server daemon (68.220.241.50:50602). Jan 22 00:43:16.917672 kernel: audit: type=1101 audit(1769042596.905:746): pid=5459 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:43:16.905000 audit[5459]: USER_ACCT pid=5459 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:43:16.908762 sshd-session[5459]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 22 00:43:16.918316 sshd[5459]: Accepted publickey for core from 68.220.241.50 port 50602 ssh2: RSA SHA256:guZGX9gbNcoOyrr8VXliJQHZZzuPYZGvC0Dn+A+42nM Jan 22 00:43:16.907000 audit[5459]: CRED_ACQ pid=5459 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:43:16.930591 kernel: audit: type=1103 audit(1769042596.907:747): pid=5459 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:43:16.930701 kernel: audit: type=1006 audit(1769042596.907:748): pid=5459 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1 Jan 22 00:43:16.907000 audit[5459]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd48cfa260 a2=3 a3=0 items=0 ppid=1 pid=5459 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:43:16.907000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 22 00:43:16.939459 kernel: audit: type=1300 audit(1769042596.907:748): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd48cfa260 a2=3 a3=0 items=0 ppid=1 pid=5459 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:43:16.941018 kernel: audit: type=1327 audit(1769042596.907:748): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 22 00:43:16.943249 systemd-logind[1939]: New session 11 of user core. Jan 22 00:43:16.948145 systemd[1]: Started session-11.scope - Session 11 of User core. 
Jan 22 00:43:16.950000 audit[5459]: USER_START pid=5459 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:43:16.956000 audit[5470]: CRED_ACQ pid=5470 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:43:16.958694 kernel: audit: type=1105 audit(1769042596.950:749): pid=5459 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:43:16.958768 kernel: audit: type=1103 audit(1769042596.956:750): pid=5470 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:43:17.237273 sshd[5470]: Connection closed by 68.220.241.50 port 50602 Jan 22 00:43:17.238197 sshd-session[5459]: pam_unix(sshd:session): session closed for user core Jan 22 00:43:17.253342 kernel: audit: type=1106 audit(1769042597.240:751): pid=5459 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:43:17.240000 audit[5459]: USER_END pid=5459 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:43:17.246363 systemd[1]: sshd@10-172.31.26.54:22-68.220.241.50:50602.service: Deactivated successfully. Jan 22 00:43:17.249377 systemd[1]: session-11.scope: Deactivated successfully. Jan 22 00:43:17.254147 systemd-logind[1939]: Session 11 logged out. Waiting for processes to exit. Jan 22 00:43:17.240000 audit[5459]: CRED_DISP pid=5459 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:43:17.245000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-172.31.26.54:22-68.220.241.50:50602 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:43:17.261011 kernel: audit: type=1104 audit(1769042597.240:752): pid=5459 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:43:17.261502 systemd-logind[1939]: Removed session 11. 
Jan 22 00:43:18.029704 kubelet[3437]: E0122 00:43:18.029640 3437 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-4rjzm" podUID="ffc8de62-f696-4cfe-ab23-12f34741b8d0" Jan 22 00:43:19.028789 kubelet[3437]: E0122 00:43:19.028714 3437 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6bf9469c9d-75c6s" podUID="0db24862-6144-45b1-8f39-39a11a3c80dd" Jan 22 00:43:21.028918 kubelet[3437]: E0122 00:43:21.028663 3437 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5bd666cb6-pggl9" podUID="3c1a4692-58d4-49e9-ab29-e19d2d7ee1e4" Jan 22 00:43:22.338512 systemd[1]: Started sshd@11-172.31.26.54:22-68.220.241.50:50612.service - OpenSSH per-connection server daemon (68.220.241.50:50612). Jan 22 00:43:22.342150 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 22 00:43:22.342199 kernel: audit: type=1130 audit(1769042602.338:754): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-172.31.26.54:22-68.220.241.50:50612 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:43:22.338000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-172.31.26.54:22-68.220.241.50:50612 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 22 00:43:22.851000 audit[5511]: USER_ACCT pid=5511 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:43:22.853138 sshd[5511]: Accepted publickey for core from 68.220.241.50 port 50612 ssh2: RSA SHA256:guZGX9gbNcoOyrr8VXliJQHZZzuPYZGvC0Dn+A+42nM Jan 22 00:43:22.857697 sshd-session[5511]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 22 00:43:22.855000 audit[5511]: CRED_ACQ pid=5511 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:43:22.859512 kernel: audit: type=1101 audit(1769042602.851:755): pid=5511 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:43:22.859573 kernel: audit: type=1103 audit(1769042602.855:756): pid=5511 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:43:22.865920 kernel: audit: type=1006 audit(1769042602.855:757): pid=5511 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=12 res=1 Jan 22 00:43:22.855000 audit[5511]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff57a07cf0 a2=3 a3=0 items=0 ppid=1 pid=5511 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:43:22.867271 kernel: audit: type=1300 audit(1769042602.855:757): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff57a07cf0 a2=3 a3=0 items=0 ppid=1 pid=5511 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:43:22.870522 systemd-logind[1939]: New session 12 of user core. Jan 22 00:43:22.855000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 22 00:43:22.872144 kernel: audit: type=1327 audit(1769042602.855:757): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 22 00:43:22.875519 systemd[1]: Started session-12.scope - Session 12 of User core. 
Jan 22 00:43:22.877000 audit[5511]: USER_START pid=5511 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:43:22.883000 audit[5515]: CRED_ACQ pid=5515 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:43:22.885518 kernel: audit: type=1105 audit(1769042602.877:758): pid=5511 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:43:22.885596 kernel: audit: type=1103 audit(1769042602.883:759): pid=5515 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:43:23.031336 kubelet[3437]: E0122 00:43:23.031240 3437 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-65dd85ff-jgxjj" podUID="96b1517d-4481-408c-9294-a20121dee9ff" Jan 22 00:43:23.270175 sshd[5515]: Connection closed by 68.220.241.50 port 50612 Jan 22 00:43:23.271393 sshd-session[5511]: pam_unix(sshd:session): session closed for user core Jan 22 00:43:23.274000 audit[5511]: USER_END pid=5511 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:43:23.279356 systemd[1]: sshd@11-172.31.26.54:22-68.220.241.50:50612.service: Deactivated successfully. Jan 22 00:43:23.274000 audit[5511]: CRED_DISP pid=5511 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:43:23.282245 systemd[1]: session-12.scope: Deactivated successfully. 
Jan 22 00:43:23.283163 kernel: audit: type=1106 audit(1769042603.274:760): pid=5511 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:43:23.283226 kernel: audit: type=1104 audit(1769042603.274:761): pid=5511 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:43:23.285348 systemd-logind[1939]: Session 12 logged out. Waiting for processes to exit. Jan 22 00:43:23.276000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-172.31.26.54:22-68.220.241.50:50612 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:43:23.286659 systemd-logind[1939]: Removed session 12. Jan 22 00:43:23.374390 systemd[1]: Started sshd@12-172.31.26.54:22-68.220.241.50:34178.service - OpenSSH per-connection server daemon (68.220.241.50:34178). Jan 22 00:43:23.373000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-172.31.26.54:22-68.220.241.50:34178 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:43:23.873000 audit[5528]: USER_ACCT pid=5528 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:43:23.875075 sshd[5528]: Accepted publickey for core from 68.220.241.50 port 34178 ssh2: RSA SHA256:guZGX9gbNcoOyrr8VXliJQHZZzuPYZGvC0Dn+A+42nM Jan 22 00:43:23.875000 audit[5528]: CRED_ACQ pid=5528 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:43:23.875000 audit[5528]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff54c9be90 a2=3 a3=0 items=0 ppid=1 pid=5528 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:43:23.875000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 22 00:43:23.876761 sshd-session[5528]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 22 00:43:23.882642 systemd-logind[1939]: New session 13 of user core. Jan 22 00:43:23.895299 systemd[1]: Started session-13.scope - Session 13 of User core. 
Jan 22 00:43:23.897000 audit[5528]: USER_START pid=5528 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:43:23.900000 audit[5532]: CRED_ACQ pid=5532 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:43:24.031153 kubelet[3437]: E0122 00:43:24.031059 3437 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6bf9469c9d-jt79q" podUID="4921dab1-a0fc-4a4a-9ec2-f3f09160935a" Jan 22 00:43:24.033129 kubelet[3437]: E0122 00:43:24.033078 3437 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-qvwlf" podUID="9a122a32-e7c8-4162-bccb-4b71d5c37d97" Jan 22 00:43:24.336002 sshd[5532]: Connection closed by 68.220.241.50 port 34178 Jan 22 00:43:24.339507 sshd-session[5528]: pam_unix(sshd:session): session closed for user core Jan 22 00:43:24.340000 audit[5528]: USER_END pid=5528 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:43:24.340000 audit[5528]: CRED_DISP pid=5528 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:43:24.345552 systemd[1]: sshd@12-172.31.26.54:22-68.220.241.50:34178.service: Deactivated successfully. Jan 22 00:43:24.346230 systemd-logind[1939]: Session 13 logged out. Waiting for processes to exit. Jan 22 00:43:24.346000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-172.31.26.54:22-68.220.241.50:34178 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 22 00:43:24.350128 systemd[1]: session-13.scope: Deactivated successfully. Jan 22 00:43:24.352548 systemd-logind[1939]: Removed session 13. Jan 22 00:43:24.431902 systemd[1]: Started sshd@13-172.31.26.54:22-68.220.241.50:34186.service - OpenSSH per-connection server daemon (68.220.241.50:34186). Jan 22 00:43:24.431000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-172.31.26.54:22-68.220.241.50:34186 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:43:24.933000 audit[5542]: USER_ACCT pid=5542 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:43:24.935347 sshd[5542]: Accepted publickey for core from 68.220.241.50 port 34186 ssh2: RSA SHA256:guZGX9gbNcoOyrr8VXliJQHZZzuPYZGvC0Dn+A+42nM Jan 22 00:43:24.935000 audit[5542]: CRED_ACQ pid=5542 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:43:24.936000 audit[5542]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe9e433b20 a2=3 a3=0 items=0 ppid=1 pid=5542 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:43:24.936000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 22 00:43:24.937458 sshd-session[5542]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 22 00:43:24.943865 systemd-logind[1939]: New session 14 of user core. Jan 22 00:43:24.950258 systemd[1]: Started session-14.scope - Session 14 of User core. 
Jan 22 00:43:24.954000 audit[5542]: USER_START pid=5542 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:43:24.957000 audit[5545]: CRED_ACQ pid=5545 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:43:25.277770 sshd[5545]: Connection closed by 68.220.241.50 port 34186 Jan 22 00:43:25.279221 sshd-session[5542]: pam_unix(sshd:session): session closed for user core Jan 22 00:43:25.279000 audit[5542]: USER_END pid=5542 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:43:25.279000 audit[5542]: CRED_DISP pid=5542 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:43:25.283796 systemd[1]: sshd@13-172.31.26.54:22-68.220.241.50:34186.service: Deactivated successfully. Jan 22 00:43:25.283000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-172.31.26.54:22-68.220.241.50:34186 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:43:25.286213 systemd[1]: session-14.scope: Deactivated successfully. Jan 22 00:43:25.287842 systemd-logind[1939]: Session 14 logged out. Waiting for processes to exit. Jan 22 00:43:25.289169 systemd-logind[1939]: Removed session 14. Jan 22 00:43:30.372276 systemd[1]: Started sshd@14-172.31.26.54:22-68.220.241.50:34202.service - OpenSSH per-connection server daemon (68.220.241.50:34202). Jan 22 00:43:30.373735 kernel: kauditd_printk_skb: 23 callbacks suppressed Jan 22 00:43:30.373812 kernel: audit: type=1130 audit(1769042610.371:781): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-172.31.26.54:22-68.220.241.50:34202 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:43:30.371000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-172.31.26.54:22-68.220.241.50:34202 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 22 00:43:30.918000 audit[5563]: USER_ACCT pid=5563 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:43:30.921059 sshd[5563]: Accepted publickey for core from 68.220.241.50 port 34202 ssh2: RSA SHA256:guZGX9gbNcoOyrr8VXliJQHZZzuPYZGvC0Dn+A+42nM Jan 22 00:43:30.919000 audit[5563]: CRED_ACQ pid=5563 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:43:30.925164 sshd-session[5563]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 22 00:43:30.926025 kernel: audit: type=1101 audit(1769042610.918:782): pid=5563 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:43:30.926768 kernel: audit: type=1103 audit(1769042610.919:783): pid=5563 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:43:30.919000 audit[5563]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff490e6ff0 a2=3 a3=0 items=0 ppid=1 pid=5563 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:43:30.934719 kernel: audit: type=1006 audit(1769042610.919:784): pid=5563 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=15 res=1 Jan 22 00:43:30.934791 kernel: audit: type=1300 audit(1769042610.919:784): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff490e6ff0 a2=3 a3=0 items=0 ppid=1 pid=5563 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:43:30.937842 systemd-logind[1939]: New session 15 of user core. Jan 22 00:43:30.919000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 22 00:43:30.941920 kernel: audit: type=1327 audit(1769042610.919:784): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 22 00:43:30.947291 systemd[1]: Started session-15.scope - Session 15 of User core. 
Jan 22 00:43:30.950000 audit[5563]: USER_START pid=5563 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:43:30.958951 kernel: audit: type=1105 audit(1769042610.950:785): pid=5563 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:43:30.958000 audit[5566]: CRED_ACQ pid=5566 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:43:30.964947 kernel: audit: type=1103 audit(1769042610.958:786): pid=5566 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:43:31.405919 sshd[5566]: Connection closed by 68.220.241.50 port 34202 Jan 22 00:43:31.406204 sshd-session[5563]: pam_unix(sshd:session): session closed for user core Jan 22 00:43:31.408000 audit[5563]: USER_END pid=5563 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:43:31.416981 kernel: audit: type=1106 audit(1769042611.408:787): pid=5563 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:43:31.417316 systemd[1]: sshd@14-172.31.26.54:22-68.220.241.50:34202.service: Deactivated successfully. Jan 22 00:43:31.424470 kernel: audit: type=1104 audit(1769042611.409:788): pid=5563 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:43:31.409000 audit[5563]: CRED_DISP pid=5563 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:43:31.416000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-172.31.26.54:22-68.220.241.50:34202 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:43:31.420729 systemd[1]: session-15.scope: Deactivated successfully. Jan 22 00:43:31.425655 systemd-logind[1939]: Session 15 logged out. Waiting for processes to exit. Jan 22 00:43:31.426812 systemd-logind[1939]: Removed session 15. 
Jan 22 00:43:32.029665 containerd[1953]: time="2026-01-22T00:43:32.029358537Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 22 00:43:32.313416 containerd[1953]: time="2026-01-22T00:43:32.309503613Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 22 00:43:32.315746 containerd[1953]: time="2026-01-22T00:43:32.315621272Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 22 00:43:32.315866 containerd[1953]: time="2026-01-22T00:43:32.315765366Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 22 00:43:32.316001 kubelet[3437]: E0122 00:43:32.315960 3437 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 22 00:43:32.316989 kubelet[3437]: E0122 00:43:32.316007 3437 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 22 00:43:32.316989 kubelet[3437]: E0122 00:43:32.316134 3437 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-frzdg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-4rjzm_calico-system(ffc8de62-f696-4cfe-ab23-12f34741b8d0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 22 00:43:32.317674 kubelet[3437]: E0122 00:43:32.317625 3437 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-4rjzm" podUID="ffc8de62-f696-4cfe-ab23-12f34741b8d0" Jan 22 00:43:33.028929 containerd[1953]: time="2026-01-22T00:43:33.028768031Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 22 00:43:33.425895 containerd[1953]: time="2026-01-22T00:43:33.425831539Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 22 00:43:33.428895 containerd[1953]: time="2026-01-22T00:43:33.428314408Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 22 00:43:33.428895 containerd[1953]: time="2026-01-22T00:43:33.428340189Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 22 00:43:33.429276 kubelet[3437]: E0122 00:43:33.429245 3437 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 22 00:43:33.430269 kubelet[3437]: E0122 00:43:33.429911 3437 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 22 00:43:33.430269 kubelet[3437]: E0122 00:43:33.430076 3437 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ksv99,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6bf9469c9d-75c6s_calico-apiserver(0db24862-6144-45b1-8f39-39a11a3c80dd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 22 00:43:33.431466 kubelet[3437]: E0122 00:43:33.431342 3437 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6bf9469c9d-75c6s" podUID="0db24862-6144-45b1-8f39-39a11a3c80dd" Jan 22 00:43:36.033266 containerd[1953]: time="2026-01-22T00:43:36.033198174Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 22 00:43:36.291410 containerd[1953]: time="2026-01-22T00:43:36.291279902Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 22 00:43:36.293513 containerd[1953]: time="2026-01-22T00:43:36.293450314Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 22 00:43:36.293638 containerd[1953]: time="2026-01-22T00:43:36.293539624Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 22 00:43:36.293846 
kubelet[3437]: E0122 00:43:36.293802 3437 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 22 00:43:36.294192 kubelet[3437]: E0122 00:43:36.293850 3437 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 22 00:43:36.294192 kubelet[3437]: E0122 00:43:36.294069 3437 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c5pzc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6bf9469c9d-jt79q_calico-apiserver(4921dab1-a0fc-4a4a-9ec2-f3f09160935a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 22 00:43:36.294657 containerd[1953]: time="2026-01-22T00:43:36.294633656Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 22 00:43:36.297209 kubelet[3437]: E0122 00:43:36.296858 3437 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = 
failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6bf9469c9d-jt79q" podUID="4921dab1-a0fc-4a4a-9ec2-f3f09160935a" Jan 22 00:43:36.504629 systemd[1]: Started sshd@15-172.31.26.54:22-68.220.241.50:58980.service - OpenSSH per-connection server daemon (68.220.241.50:58980). Jan 22 00:43:36.503000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-172.31.26.54:22-68.220.241.50:58980 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:43:36.507743 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 22 00:43:36.510141 kernel: audit: type=1130 audit(1769042616.503:790): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-172.31.26.54:22-68.220.241.50:58980 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:43:36.544978 containerd[1953]: time="2026-01-22T00:43:36.544450901Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 22 00:43:36.547395 containerd[1953]: time="2026-01-22T00:43:36.547259179Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 22 00:43:36.547395 containerd[1953]: time="2026-01-22T00:43:36.547365348Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 22 00:43:36.547754 kubelet[3437]: E0122 00:43:36.547718 3437 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 22 00:43:36.547929 kubelet[3437]: E0122 00:43:36.547908 3437 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 22 00:43:36.549203 kubelet[3437]: E0122 00:43:36.549126 3437 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hwx64,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-5bd666cb6-pggl9_calico-system(3c1a4692-58d4-49e9-ab29-e19d2d7ee1e4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 22 00:43:36.551336 kubelet[3437]: E0122 00:43:36.551138 3437 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5bd666cb6-pggl9" podUID="3c1a4692-58d4-49e9-ab29-e19d2d7ee1e4" Jan 22 00:43:37.001401 sshd[5578]: Accepted publickey for core from 68.220.241.50 port 58980 ssh2: RSA SHA256:guZGX9gbNcoOyrr8VXliJQHZZzuPYZGvC0Dn+A+42nM Jan 22 00:43:37.000000 audit[5578]: USER_ACCT pid=5578 uid=0 
auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:43:37.007918 kernel: audit: type=1101 audit(1769042617.000:791): pid=5578 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:43:37.007000 audit[5578]: CRED_ACQ pid=5578 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:43:37.008762 sshd-session[5578]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 22 00:43:37.015033 kernel: audit: type=1103 audit(1769042617.007:792): pid=5578 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:43:37.019167 kernel: audit: type=1006 audit(1769042617.007:793): pid=5578 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Jan 22 00:43:37.019306 kernel: audit: type=1300 audit(1769042617.007:793): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd0011f780 a2=3 a3=0 items=0 ppid=1 pid=5578 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:43:37.007000 audit[5578]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd0011f780 a2=3 a3=0 items=0 ppid=1 pid=5578 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:43:37.007000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 22 00:43:37.029003 kernel: audit: type=1327 audit(1769042617.007:793): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 22 00:43:37.031757 containerd[1953]: time="2026-01-22T00:43:37.031710312Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 22 00:43:37.042539 systemd-logind[1939]: New session 16 of user core. Jan 22 00:43:37.050009 systemd[1]: Started session-16.scope - Session 16 of User core. 
Jan 22 00:43:37.054000 audit[5578]: USER_START pid=5578 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:43:37.062910 kernel: audit: type=1105 audit(1769042617.054:794): pid=5578 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:43:37.066000 audit[5589]: CRED_ACQ pid=5589 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:43:37.075900 kernel: audit: type=1103 audit(1769042617.066:795): pid=5589 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:43:37.342624 containerd[1953]: time="2026-01-22T00:43:37.342554273Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 22 00:43:37.345053 containerd[1953]: time="2026-01-22T00:43:37.344999847Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 22 00:43:37.345332 containerd[1953]: time="2026-01-22T00:43:37.345094941Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 22 00:43:37.345402 kubelet[3437]: E0122 00:43:37.345255 3437 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 22 00:43:37.345402 kubelet[3437]: E0122 00:43:37.345297 3437 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 22 00:43:37.346703 kubelet[3437]: E0122 00:43:37.345538 3437 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ng7wz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-qvwlf_calico-system(9a122a32-e7c8-4162-bccb-4b71d5c37d97): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 22 00:43:37.349267 containerd[1953]: time="2026-01-22T00:43:37.349223193Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 22 00:43:37.362860 sshd[5589]: Connection closed by 68.220.241.50 port 58980 Jan 22 00:43:37.363953 sshd-session[5578]: pam_unix(sshd:session): session closed for user core Jan 22 00:43:37.364000 audit[5578]: USER_END pid=5578 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:43:37.370237 systemd-logind[1939]: Session 16 logged out. Waiting for processes to exit. 
Jan 22 00:43:37.374929 kernel: audit: type=1106 audit(1769042617.364:796): pid=5578 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:43:37.380483 kernel: audit: type=1104 audit(1769042617.364:797): pid=5578 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:43:37.364000 audit[5578]: CRED_DISP pid=5578 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:43:37.371000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-172.31.26.54:22-68.220.241.50:58980 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:43:37.371646 systemd[1]: sshd@15-172.31.26.54:22-68.220.241.50:58980.service: Deactivated successfully. Jan 22 00:43:37.374536 systemd[1]: session-16.scope: Deactivated successfully. Jan 22 00:43:37.383401 systemd-logind[1939]: Removed session 16. Jan 22 00:43:37.855171 containerd[1953]: time="2026-01-22T00:43:37.855099850Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 22 00:43:37.857290 containerd[1953]: time="2026-01-22T00:43:37.857220468Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 22 00:43:37.857444 containerd[1953]: time="2026-01-22T00:43:37.857308000Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 22 00:43:37.857480 kubelet[3437]: E0122 00:43:37.857439 3437 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 22 00:43:37.857522 kubelet[3437]: E0122 00:43:37.857487 3437 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 22 00:43:37.857618 kubelet[3437]: E0122 00:43:37.857586 3437 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ng7wz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-qvwlf_calico-system(9a122a32-e7c8-4162-bccb-4b71d5c37d97): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 22 00:43:37.858864 kubelet[3437]: E0122 00:43:37.858802 3437 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-qvwlf" podUID="9a122a32-e7c8-4162-bccb-4b71d5c37d97" Jan 22 00:43:38.034695 containerd[1953]: time="2026-01-22T00:43:38.034496779Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 22 00:43:38.351406 containerd[1953]: time="2026-01-22T00:43:38.351336978Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 22 00:43:38.354012 containerd[1953]: time="2026-01-22T00:43:38.353945648Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 22 00:43:38.354149 containerd[1953]: time="2026-01-22T00:43:38.354064005Z" 
level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 22 00:43:38.354384 kubelet[3437]: E0122 00:43:38.354329 3437 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 22 00:43:38.355141 kubelet[3437]: E0122 00:43:38.354397 3437 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 22 00:43:38.355141 kubelet[3437]: E0122 00:43:38.354554 3437 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:1493b4b6e39948d398b0f1f06536a205,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-q7cwt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-65dd85ff-jgxjj_calico-system(96b1517d-4481-408c-9294-a20121dee9ff): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 22 00:43:38.357906 containerd[1953]: time="2026-01-22T00:43:38.357846989Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 22 00:43:38.618069 containerd[1953]: time="2026-01-22T00:43:38.617834418Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 22 00:43:38.619899 containerd[1953]: time="2026-01-22T00:43:38.619821024Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 22 00:43:38.620037 containerd[1953]: time="2026-01-22T00:43:38.619935181Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: 
active requests=0, bytes read=0" Jan 22 00:43:38.620260 kubelet[3437]: E0122 00:43:38.620215 3437 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 22 00:43:38.620461 kubelet[3437]: E0122 00:43:38.620264 3437 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 22 00:43:38.620461 kubelet[3437]: E0122 00:43:38.620373 3437 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-q7cwt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-65dd85ff-jgxjj_calico-system(96b1517d-4481-408c-9294-a20121dee9ff): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 22 00:43:38.621806 kubelet[3437]: E0122 00:43:38.621746 3437 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull 
and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-65dd85ff-jgxjj" podUID="96b1517d-4481-408c-9294-a20121dee9ff" Jan 22 00:43:42.457899 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 22 00:43:42.458038 kernel: audit: type=1130 audit(1769042622.449:799): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-172.31.26.54:22-68.220.241.50:58994 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:43:42.449000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-172.31.26.54:22-68.220.241.50:58994 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:43:42.449376 systemd[1]: Started sshd@16-172.31.26.54:22-68.220.241.50:58994.service - OpenSSH per-connection server daemon (68.220.241.50:58994). Jan 22 00:43:42.944000 audit[5604]: USER_ACCT pid=5604 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:43:42.948068 sshd[5604]: Accepted publickey for core from 68.220.241.50 port 58994 ssh2: RSA SHA256:guZGX9gbNcoOyrr8VXliJQHZZzuPYZGvC0Dn+A+42nM Jan 22 00:43:42.952904 kernel: audit: type=1101 audit(1769042622.944:800): pid=5604 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:43:42.953010 kernel: audit: type=1103 audit(1769042622.949:801): pid=5604 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:43:42.949000 audit[5604]: CRED_ACQ pid=5604 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:43:42.951325 sshd-session[5604]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 22 00:43:42.958520 kernel: audit: type=1006 audit(1769042622.949:802): pid=5604 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=17 res=1 Jan 22 00:43:42.961911 kernel: audit: type=1300 audit(1769042622.949:802): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe83f95010 a2=3 a3=0 items=0 ppid=1 pid=5604 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:43:42.949000 audit[5604]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe83f95010 a2=3 a3=0 items=0 ppid=1 pid=5604 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:43:42.959921 systemd-logind[1939]: New session 17 of 
user core. Jan 22 00:43:42.949000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 22 00:43:42.967654 kernel: audit: type=1327 audit(1769042622.949:802): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 22 00:43:42.967151 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 22 00:43:42.969000 audit[5604]: USER_START pid=5604 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:43:42.971000 audit[5607]: CRED_ACQ pid=5607 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:43:42.977193 kernel: audit: type=1105 audit(1769042622.969:803): pid=5604 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:43:42.977257 kernel: audit: type=1103 audit(1769042622.971:804): pid=5607 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:43:43.419853 sshd[5607]: Connection closed by 68.220.241.50 port 58994 Jan 22 00:43:43.422201 sshd-session[5604]: pam_unix(sshd:session): session closed for user core Jan 22 00:43:43.428000 audit[5604]: USER_END pid=5604 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:43:43.433499 systemd-logind[1939]: Session 17 logged out. Waiting for processes to exit. Jan 22 00:43:43.430000 audit[5604]: CRED_DISP pid=5604 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:43:43.435717 systemd[1]: sshd@16-172.31.26.54:22-68.220.241.50:58994.service: Deactivated successfully. Jan 22 00:43:43.436362 kernel: audit: type=1106 audit(1769042623.428:805): pid=5604 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:43:43.436409 kernel: audit: type=1104 audit(1769042623.430:806): pid=5604 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:43:43.438862 systemd[1]: session-17.scope: Deactivated successfully. 
Jan 22 00:43:43.436000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-172.31.26.54:22-68.220.241.50:58994 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:43:43.441696 systemd-logind[1939]: Removed session 17. Jan 22 00:43:46.029212 kubelet[3437]: E0122 00:43:46.029155 3437 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-4rjzm" podUID="ffc8de62-f696-4cfe-ab23-12f34741b8d0" Jan 22 00:43:47.029927 kubelet[3437]: E0122 00:43:47.029526 3437 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6bf9469c9d-jt79q" podUID="4921dab1-a0fc-4a4a-9ec2-f3f09160935a" Jan 22 00:43:47.029927 kubelet[3437]: E0122 00:43:47.029587 3437 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6bf9469c9d-75c6s" podUID="0db24862-6144-45b1-8f39-39a11a3c80dd" Jan 22 00:43:48.513698 systemd[1]: Started sshd@17-172.31.26.54:22-68.220.241.50:48446.service - OpenSSH per-connection server daemon (68.220.241.50:48446). Jan 22 00:43:48.512000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-172.31.26.54:22-68.220.241.50:48446 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:43:48.514438 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 22 00:43:48.515791 kernel: audit: type=1130 audit(1769042628.512:808): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-172.31.26.54:22-68.220.241.50:48446 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 22 00:43:48.976000 audit[5620]: USER_ACCT pid=5620 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:43:48.983987 kernel: audit: type=1101 audit(1769042628.976:809): pid=5620 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:43:48.984036 sshd[5620]: Accepted publickey for core from 68.220.241.50 port 48446 ssh2: RSA SHA256:guZGX9gbNcoOyrr8VXliJQHZZzuPYZGvC0Dn+A+42nM Jan 22 00:43:48.984041 sshd-session[5620]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 22 00:43:48.978000 audit[5620]: CRED_ACQ pid=5620 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:43:48.994492 kernel: audit: type=1103 audit(1769042628.978:810): pid=5620 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:43:48.994614 kernel: audit: type=1006 audit(1769042628.978:811): pid=5620 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=18 res=1 Jan 22 00:43:49.002181 kernel: audit: type=1300 audit(1769042628.978:811): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffff59e1c50 a2=3 a3=0 items=0 ppid=1 pid=5620 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:43:48.978000 audit[5620]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffff59e1c50 a2=3 a3=0 items=0 ppid=1 pid=5620 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:43:49.005329 kernel: audit: type=1327 audit(1769042628.978:811): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 22 00:43:48.978000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 22 00:43:49.007232 systemd-logind[1939]: New session 18 of user core. Jan 22 00:43:49.017165 systemd[1]: Started session-18.scope - Session 18 of User core. 
Jan 22 00:43:49.020000 audit[5620]: USER_START pid=5620 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:43:49.025000 audit[5623]: CRED_ACQ pid=5623 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:43:49.027922 kernel: audit: type=1105 audit(1769042629.020:812): pid=5620 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:43:49.027998 kernel: audit: type=1103 audit(1769042629.025:813): pid=5623 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:43:49.031802 kubelet[3437]: E0122 00:43:49.031762 3437 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5bd666cb6-pggl9" podUID="3c1a4692-58d4-49e9-ab29-e19d2d7ee1e4" Jan 22 00:43:49.324489 sshd[5623]: Connection closed by 68.220.241.50 port 48446 Jan 22 00:43:49.328732 sshd-session[5620]: pam_unix(sshd:session): session closed for user core Jan 22 00:43:49.330000 audit[5620]: USER_END pid=5620 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:43:49.340376 kernel: audit: type=1106 audit(1769042629.330:814): pid=5620 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:43:49.340509 kernel: audit: type=1104 audit(1769042629.330:815): pid=5620 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:43:49.330000 audit[5620]: CRED_DISP pid=5620 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:43:49.347240 systemd-logind[1939]: Session 18 logged out. 
Waiting for processes to exit. Jan 22 00:43:49.349515 systemd[1]: sshd@17-172.31.26.54:22-68.220.241.50:48446.service: Deactivated successfully. Jan 22 00:43:49.349000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-172.31.26.54:22-68.220.241.50:48446 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:43:49.354199 systemd[1]: session-18.scope: Deactivated successfully. Jan 22 00:43:49.360045 systemd-logind[1939]: Removed session 18. Jan 22 00:43:49.413048 systemd[1]: Started sshd@18-172.31.26.54:22-68.220.241.50:48458.service - OpenSSH per-connection server daemon (68.220.241.50:48458). Jan 22 00:43:49.412000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-172.31.26.54:22-68.220.241.50:48458 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:43:49.880000 audit[5657]: USER_ACCT pid=5657 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:43:49.882181 sshd[5657]: Accepted publickey for core from 68.220.241.50 port 48458 ssh2: RSA SHA256:guZGX9gbNcoOyrr8VXliJQHZZzuPYZGvC0Dn+A+42nM Jan 22 00:43:49.882000 audit[5657]: CRED_ACQ pid=5657 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:43:49.882000 audit[5657]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffceae2f40 a2=3 a3=0 items=0 ppid=1 pid=5657 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:43:49.882000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 22 00:43:49.885469 sshd-session[5657]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 22 00:43:49.892944 systemd-logind[1939]: New session 19 of user core. Jan 22 00:43:49.898098 systemd[1]: Started session-19.scope - Session 19 of User core. 
Jan 22 00:43:49.900000 audit[5657]: USER_START pid=5657 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:43:49.902000 audit[5660]: CRED_ACQ pid=5660 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:43:50.035988 kubelet[3437]: E0122 00:43:50.035762 3437 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-qvwlf" podUID="9a122a32-e7c8-4162-bccb-4b71d5c37d97" Jan 22 00:43:50.694298 sshd[5660]: Connection closed by 68.220.241.50 port 48458 Jan 22 00:43:50.701675 sshd-session[5657]: pam_unix(sshd:session): session closed for user core Jan 22 00:43:50.708000 audit[5657]: USER_END pid=5657 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:43:50.710000 audit[5657]: CRED_DISP pid=5657 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:43:50.714163 systemd-logind[1939]: Session 19 logged out. Waiting for processes to exit. Jan 22 00:43:50.714561 systemd[1]: sshd@18-172.31.26.54:22-68.220.241.50:48458.service: Deactivated successfully. Jan 22 00:43:50.713000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-172.31.26.54:22-68.220.241.50:48458 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:43:50.717434 systemd[1]: session-19.scope: Deactivated successfully. Jan 22 00:43:50.719711 systemd-logind[1939]: Removed session 19. Jan 22 00:43:50.798176 systemd[1]: Started sshd@19-172.31.26.54:22-68.220.241.50:48460.service - OpenSSH per-connection server daemon (68.220.241.50:48460). Jan 22 00:43:50.797000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-172.31.26.54:22-68.220.241.50:48460 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 22 00:43:51.274000 audit[5672]: USER_ACCT pid=5672 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:43:51.276071 sshd[5672]: Accepted publickey for core from 68.220.241.50 port 48460 ssh2: RSA SHA256:guZGX9gbNcoOyrr8VXliJQHZZzuPYZGvC0Dn+A+42nM Jan 22 00:43:51.276000 audit[5672]: CRED_ACQ pid=5672 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:43:51.276000 audit[5672]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffca3a10bb0 a2=3 a3=0 items=0 ppid=1 pid=5672 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:43:51.276000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 22 00:43:51.277541 sshd-session[5672]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 22 00:43:51.283772 systemd-logind[1939]: New session 20 of user core. Jan 22 00:43:51.291294 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 22 00:43:51.294000 audit[5672]: USER_START pid=5672 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:43:51.296000 audit[5675]: CRED_ACQ pid=5675 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:43:52.230000 audit[5685]: NETFILTER_CFG table=filter:135 family=2 entries=26 op=nft_register_rule pid=5685 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 00:43:52.230000 audit[5685]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7fff8208b480 a2=0 a3=7fff8208b46c items=0 ppid=3540 pid=5685 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:43:52.230000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 00:43:52.239000 audit[5685]: NETFILTER_CFG table=nat:136 family=2 entries=20 op=nft_register_rule pid=5685 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 00:43:52.239000 audit[5685]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7fff8208b480 a2=0 a3=0 items=0 ppid=3540 pid=5685 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:43:52.239000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 00:43:52.257000 audit[5687]: NETFILTER_CFG 
table=filter:137 family=2 entries=38 op=nft_register_rule pid=5687 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 00:43:52.257000 audit[5687]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffc78c6dc30 a2=0 a3=7ffc78c6dc1c items=0 ppid=3540 pid=5687 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:43:52.257000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 00:43:52.263000 audit[5687]: NETFILTER_CFG table=nat:138 family=2 entries=20 op=nft_register_rule pid=5687 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 00:43:52.263000 audit[5687]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffc78c6dc30 a2=0 a3=0 items=0 ppid=3540 pid=5687 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:43:52.263000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 00:43:52.310528 sshd[5675]: Connection closed by 68.220.241.50 port 48460 Jan 22 00:43:52.312996 sshd-session[5672]: pam_unix(sshd:session): session closed for user core Jan 22 00:43:52.313000 audit[5672]: USER_END pid=5672 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:43:52.314000 audit[5672]: CRED_DISP pid=5672 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:43:52.320699 systemd[1]: sshd@19-172.31.26.54:22-68.220.241.50:48460.service: Deactivated successfully. Jan 22 00:43:52.320000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-172.31.26.54:22-68.220.241.50:48460 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:43:52.323679 systemd[1]: session-20.scope: Deactivated successfully. Jan 22 00:43:52.325157 systemd-logind[1939]: Session 20 logged out. Waiting for processes to exit. Jan 22 00:43:52.327620 systemd-logind[1939]: Removed session 20. Jan 22 00:43:52.404481 systemd[1]: Started sshd@20-172.31.26.54:22-68.220.241.50:48462.service - OpenSSH per-connection server daemon (68.220.241.50:48462). Jan 22 00:43:52.403000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-172.31.26.54:22-68.220.241.50:48462 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 22 00:43:52.875000 audit[5692]: USER_ACCT pid=5692 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:43:52.877548 sshd[5692]: Accepted publickey for core from 68.220.241.50 port 48462 ssh2: RSA SHA256:guZGX9gbNcoOyrr8VXliJQHZZzuPYZGvC0Dn+A+42nM Jan 22 00:43:52.878000 audit[5692]: CRED_ACQ pid=5692 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:43:52.878000 audit[5692]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd615cc960 a2=3 a3=0 items=0 ppid=1 pid=5692 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:43:52.878000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 22 00:43:52.879628 sshd-session[5692]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 22 00:43:52.886944 systemd-logind[1939]: New session 21 of user core. Jan 22 00:43:52.892101 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 22 00:43:52.895000 audit[5692]: USER_START pid=5692 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:43:52.898000 audit[5695]: CRED_ACQ pid=5695 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:43:53.703962 sshd[5695]: Connection closed by 68.220.241.50 port 48462 Jan 22 00:43:53.705378 sshd-session[5692]: pam_unix(sshd:session): session closed for user core Jan 22 00:43:53.709000 audit[5692]: USER_END pid=5692 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:43:53.712043 kernel: kauditd_printk_skb: 43 callbacks suppressed Jan 22 00:43:53.712102 kernel: audit: type=1106 audit(1769042633.709:845): pid=5692 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:43:53.713537 systemd-logind[1939]: Session 21 logged out. Waiting for processes to exit. Jan 22 00:43:53.714056 systemd[1]: sshd@20-172.31.26.54:22-68.220.241.50:48462.service: Deactivated successfully. 
Jan 22 00:43:53.723145 kernel: audit: type=1104 audit(1769042633.709:846): pid=5692 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:43:53.709000 audit[5692]: CRED_DISP pid=5692 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:43:53.718218 systemd[1]: session-21.scope: Deactivated successfully. Jan 22 00:43:53.721752 systemd-logind[1939]: Removed session 21. Jan 22 00:43:53.711000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-172.31.26.54:22-68.220.241.50:48462 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:43:53.728999 kernel: audit: type=1131 audit(1769042633.711:847): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-172.31.26.54:22-68.220.241.50:48462 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:43:53.790635 systemd[1]: Started sshd@21-172.31.26.54:22-68.220.241.50:39774.service - OpenSSH per-connection server daemon (68.220.241.50:39774). Jan 22 00:43:53.796014 kernel: audit: type=1130 audit(1769042633.790:848): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-172.31.26.54:22-68.220.241.50:39774 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:43:53.790000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-172.31.26.54:22-68.220.241.50:39774 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 22 00:43:54.035629 kubelet[3437]: E0122 00:43:54.035180 3437 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-65dd85ff-jgxjj" podUID="96b1517d-4481-408c-9294-a20121dee9ff" Jan 22 00:43:54.242000 audit[5707]: USER_ACCT pid=5707 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:43:54.250092 kernel: audit: type=1101 audit(1769042634.242:849): pid=5707 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:43:54.250140 sshd[5707]: Accepted publickey for core from 68.220.241.50 port 39774 ssh2: RSA SHA256:guZGX9gbNcoOyrr8VXliJQHZZzuPYZGvC0Dn+A+42nM Jan 22 00:43:54.250122 sshd-session[5707]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 22 00:43:54.244000 audit[5707]: CRED_ACQ pid=5707 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:43:54.260173 kernel: audit: type=1103 audit(1769042634.244:850): pid=5707 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:43:54.244000 audit[5707]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc943d76c0 a2=3 a3=0 items=0 ppid=1 pid=5707 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:43:54.270299 kernel: audit: type=1006 audit(1769042634.244:851): pid=5707 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=22 res=1 Jan 22 00:43:54.270357 kernel: audit: type=1300 audit(1769042634.244:851): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc943d76c0 a2=3 a3=0 items=0 ppid=1 pid=5707 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:43:54.270386 kernel: audit: type=1327 audit(1769042634.244:851): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 22 
00:43:54.244000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 22 00:43:54.277375 systemd-logind[1939]: New session 22 of user core. Jan 22 00:43:54.283692 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 22 00:43:54.295756 kernel: audit: type=1105 audit(1769042634.287:852): pid=5707 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:43:54.287000 audit[5707]: USER_START pid=5707 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:43:54.295000 audit[5710]: CRED_ACQ pid=5710 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:43:54.579336 sshd[5710]: Connection closed by 68.220.241.50 port 39774 Jan 22 00:43:54.580065 sshd-session[5707]: pam_unix(sshd:session): session closed for user core Jan 22 00:43:54.581000 audit[5707]: USER_END pid=5707 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:43:54.582000 audit[5707]: CRED_DISP pid=5707 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:43:54.585717 systemd-logind[1939]: Session 22 logged out. Waiting for processes to exit. Jan 22 00:43:54.586398 systemd[1]: sshd@21-172.31.26.54:22-68.220.241.50:39774.service: Deactivated successfully. Jan 22 00:43:54.585000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-172.31.26.54:22-68.220.241.50:39774 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:43:54.588667 systemd[1]: session-22.scope: Deactivated successfully. Jan 22 00:43:54.590645 systemd-logind[1939]: Removed session 22. 
Jan 22 00:43:58.584000 audit[5723]: NETFILTER_CFG table=filter:139 family=2 entries=26 op=nft_register_rule pid=5723 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 00:43:58.584000 audit[5723]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fffb5c44740 a2=0 a3=7fffb5c4472c items=0 ppid=3540 pid=5723 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:43:58.584000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 00:43:58.593000 audit[5723]: NETFILTER_CFG table=nat:140 family=2 entries=104 op=nft_register_chain pid=5723 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 00:43:58.593000 audit[5723]: SYSCALL arch=c000003e syscall=46 success=yes exit=48684 a0=3 a1=7fffb5c44740 a2=0 a3=7fffb5c4472c items=0 ppid=3540 pid=5723 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:43:58.593000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 00:43:59.682521 kernel: kauditd_printk_skb: 10 callbacks suppressed Jan 22 00:43:59.682645 kernel: audit: type=1130 audit(1769042639.674:859): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-172.31.26.54:22-68.220.241.50:39784 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:43:59.674000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-172.31.26.54:22-68.220.241.50:39784 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:43:59.675688 systemd[1]: Started sshd@22-172.31.26.54:22-68.220.241.50:39784.service - OpenSSH per-connection server daemon (68.220.241.50:39784). 
Jan 22 00:44:00.032192 kubelet[3437]: E0122 00:44:00.031915 3437 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6bf9469c9d-jt79q" podUID="4921dab1-a0fc-4a4a-9ec2-f3f09160935a" Jan 22 00:44:00.154297 kernel: audit: type=1101 audit(1769042640.145:860): pid=5725 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:44:00.145000 audit[5725]: USER_ACCT pid=5725 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:44:00.155943 sshd[5725]: Accepted publickey for core from 68.220.241.50 port 39784 ssh2: RSA SHA256:guZGX9gbNcoOyrr8VXliJQHZZzuPYZGvC0Dn+A+42nM Jan 22 00:44:00.165950 kernel: audit: type=1103 audit(1769042640.156:861): pid=5725 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:44:00.156000 audit[5725]: CRED_ACQ pid=5725 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:44:00.158031 sshd-session[5725]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 22 00:44:00.175124 kernel: audit: type=1006 audit(1769042640.156:862): pid=5725 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1 Jan 22 00:44:00.184944 kernel: audit: type=1300 audit(1769042640.156:862): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff20fc89c0 a2=3 a3=0 items=0 ppid=1 pid=5725 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:00.156000 audit[5725]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff20fc89c0 a2=3 a3=0 items=0 ppid=1 pid=5725 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:00.156000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 22 00:44:00.189924 kernel: audit: type=1327 audit(1769042640.156:862): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 22 00:44:00.200468 systemd-logind[1939]: New session 23 of user core. 
Jan 22 00:44:00.208374 kubelet[3437]: E0122 00:44:00.207955 3437 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6bf9469c9d-75c6s" podUID="0db24862-6144-45b1-8f39-39a11a3c80dd" Jan 22 00:44:00.210244 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 22 00:44:00.220000 audit[5725]: USER_START pid=5725 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:44:00.231192 kernel: audit: type=1105 audit(1769042640.220:863): pid=5725 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:44:00.230000 audit[5728]: CRED_ACQ pid=5728 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:44:00.237964 kernel: audit: type=1103 audit(1769042640.230:864): pid=5728 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:44:00.584826 sshd[5728]: Connection closed by 68.220.241.50 port 39784 Jan 22 00:44:00.587145 sshd-session[5725]: pam_unix(sshd:session): session closed for user core Jan 22 00:44:00.598005 kernel: audit: type=1106 audit(1769042640.588:865): pid=5725 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:44:00.588000 audit[5725]: USER_END pid=5725 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:44:00.603575 systemd[1]: sshd@22-172.31.26.54:22-68.220.241.50:39784.service: Deactivated successfully. 
Jan 22 00:44:00.612962 kernel: audit: type=1104 audit(1769042640.596:866): pid=5725 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:44:00.596000 audit[5725]: CRED_DISP pid=5725 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:44:00.615621 systemd[1]: session-23.scope: Deactivated successfully. Jan 22 00:44:00.618993 systemd-logind[1939]: Session 23 logged out. Waiting for processes to exit. Jan 22 00:44:00.603000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-172.31.26.54:22-68.220.241.50:39784 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:44:00.624731 systemd-logind[1939]: Removed session 23. Jan 22 00:44:01.029803 kubelet[3437]: E0122 00:44:01.029628 3437 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-4rjzm" podUID="ffc8de62-f696-4cfe-ab23-12f34741b8d0" Jan 22 00:44:03.030253 kubelet[3437]: E0122 00:44:03.029484 3437 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5bd666cb6-pggl9" podUID="3c1a4692-58d4-49e9-ab29-e19d2d7ee1e4" Jan 22 00:44:03.031519 kubelet[3437]: E0122 00:44:03.031384 3437 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-qvwlf" podUID="9a122a32-e7c8-4162-bccb-4b71d5c37d97" Jan 22 00:44:05.681566 systemd[1]: Started sshd@23-172.31.26.54:22-68.220.241.50:55852.service - OpenSSH per-connection server daemon (68.220.241.50:55852). 
Jan 22 00:44:05.689693 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 22 00:44:05.693298 kernel: audit: type=1130 audit(1769042645.681:868): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-172.31.26.54:22-68.220.241.50:55852 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:44:05.681000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-172.31.26.54:22-68.220.241.50:55852 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:44:06.173000 audit[5740]: USER_ACCT pid=5740 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:44:06.175113 sshd[5740]: Accepted publickey for core from 68.220.241.50 port 55852 ssh2: RSA SHA256:guZGX9gbNcoOyrr8VXliJQHZZzuPYZGvC0Dn+A+42nM Jan 22 00:44:06.180928 kernel: audit: type=1101 audit(1769042646.173:869): pid=5740 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:44:06.184200 sshd-session[5740]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 22 00:44:06.180000 audit[5740]: CRED_ACQ pid=5740 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:44:06.191941 kernel: audit: type=1103 audit(1769042646.180:870): pid=5740 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:44:06.197092 kernel: audit: type=1006 audit(1769042646.180:871): pid=5740 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1 Jan 22 00:44:06.180000 audit[5740]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe8618deb0 a2=3 a3=0 items=0 ppid=1 pid=5740 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:06.207090 kernel: audit: type=1300 audit(1769042646.180:871): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe8618deb0 a2=3 a3=0 items=0 ppid=1 pid=5740 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:06.209224 systemd-logind[1939]: New session 24 of user core. Jan 22 00:44:06.180000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 22 00:44:06.215958 kernel: audit: type=1327 audit(1769042646.180:871): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 22 00:44:06.218171 systemd[1]: Started session-24.scope - Session 24 of User core. 
Jan 22 00:44:06.223000 audit[5740]: USER_START pid=5740 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:44:06.232934 kernel: audit: type=1105 audit(1769042646.223:872): pid=5740 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:44:06.232000 audit[5745]: CRED_ACQ pid=5745 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:44:06.238934 kernel: audit: type=1103 audit(1769042646.232:873): pid=5745 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:44:06.554988 sshd[5745]: Connection closed by 68.220.241.50 port 55852 Jan 22 00:44:06.556137 sshd-session[5740]: pam_unix(sshd:session): session closed for user core Jan 22 00:44:06.559000 audit[5740]: USER_END pid=5740 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:44:06.575964 kernel: audit: type=1106 audit(1769042646.559:874): pid=5740 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:44:06.576119 kernel: audit: type=1104 audit(1769042646.559:875): pid=5740 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:44:06.559000 audit[5740]: CRED_DISP pid=5740 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:44:06.568000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-172.31.26.54:22-68.220.241.50:55852 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:44:06.569533 systemd[1]: sshd@23-172.31.26.54:22-68.220.241.50:55852.service: Deactivated successfully. Jan 22 00:44:06.579439 systemd[1]: session-24.scope: Deactivated successfully. Jan 22 00:44:06.589638 systemd-logind[1939]: Session 24 logged out. Waiting for processes to exit. Jan 22 00:44:06.593485 systemd-logind[1939]: Removed session 24. 
Jan 22 00:44:08.033899 kubelet[3437]: E0122 00:44:08.033325 3437 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-65dd85ff-jgxjj" podUID="96b1517d-4481-408c-9294-a20121dee9ff" Jan 22 00:44:11.644035 systemd[1]: Started sshd@24-172.31.26.54:22-68.220.241.50:55864.service - OpenSSH per-connection server daemon (68.220.241.50:55864). Jan 22 00:44:11.645801 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 22 00:44:11.645941 kernel: audit: type=1130 audit(1769042651.643:877): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-172.31.26.54:22-68.220.241.50:55864 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:44:11.643000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-172.31.26.54:22-68.220.241.50:55864 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:44:12.107430 sshd[5757]: Accepted publickey for core from 68.220.241.50 port 55864 ssh2: RSA SHA256:guZGX9gbNcoOyrr8VXliJQHZZzuPYZGvC0Dn+A+42nM Jan 22 00:44:12.106000 audit[5757]: USER_ACCT pid=5757 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:44:12.115350 kernel: audit: type=1101 audit(1769042652.106:878): pid=5757 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:44:12.116278 sshd-session[5757]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 22 00:44:12.125635 kernel: audit: type=1103 audit(1769042652.114:879): pid=5757 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:44:12.114000 audit[5757]: CRED_ACQ pid=5757 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:44:12.136044 kernel: audit: type=1006 audit(1769042652.114:880): pid=5757 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=25 res=1 Jan 22 00:44:12.136170 kernel: audit: type=1300 
audit(1769042652.114:880): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffdce972b0 a2=3 a3=0 items=0 ppid=1 pid=5757 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:12.136209 kernel: audit: type=1327 audit(1769042652.114:880): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 22 00:44:12.114000 audit[5757]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffdce972b0 a2=3 a3=0 items=0 ppid=1 pid=5757 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:12.114000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 22 00:44:12.143984 systemd-logind[1939]: New session 25 of user core. Jan 22 00:44:12.150156 systemd[1]: Started session-25.scope - Session 25 of User core. Jan 22 00:44:12.155000 audit[5757]: USER_START pid=5757 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:44:12.164536 kernel: audit: type=1105 audit(1769042652.155:881): pid=5757 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:44:12.163000 audit[5760]: CRED_ACQ pid=5760 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:44:12.171938 kernel: audit: type=1103 audit(1769042652.163:882): pid=5760 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:44:12.477052 sshd[5760]: Connection closed by 68.220.241.50 port 55864 Jan 22 00:44:12.477231 sshd-session[5757]: pam_unix(sshd:session): session closed for user core Jan 22 00:44:12.479000 audit[5757]: USER_END pid=5757 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:44:12.494009 kernel: audit: type=1106 audit(1769042652.479:883): pid=5757 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:44:12.494126 kernel: audit: type=1104 audit(1769042652.479:884): pid=5757 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" 
hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:44:12.479000 audit[5757]: CRED_DISP pid=5757 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:44:12.495713 systemd[1]: sshd@24-172.31.26.54:22-68.220.241.50:55864.service: Deactivated successfully. Jan 22 00:44:12.495000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-172.31.26.54:22-68.220.241.50:55864 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:44:12.499934 systemd[1]: session-25.scope: Deactivated successfully. Jan 22 00:44:12.503696 systemd-logind[1939]: Session 25 logged out. Waiting for processes to exit. Jan 22 00:44:12.506586 systemd-logind[1939]: Removed session 25. Jan 22 00:44:13.029456 containerd[1953]: time="2026-01-22T00:44:13.029340790Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 22 00:44:13.031088 kubelet[3437]: E0122 00:44:13.029717 3437 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6bf9469c9d-jt79q" podUID="4921dab1-a0fc-4a4a-9ec2-f3f09160935a" Jan 22 00:44:13.340910 containerd[1953]: time="2026-01-22T00:44:13.340737726Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 22 00:44:13.411737 containerd[1953]: time="2026-01-22T00:44:13.411578577Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 22 00:44:13.411737 containerd[1953]: time="2026-01-22T00:44:13.411691570Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 22 00:44:13.411965 kubelet[3437]: E0122 00:44:13.411900 3437 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 22 00:44:13.411965 kubelet[3437]: E0122 00:44:13.411953 3437 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 22 00:44:13.413123 kubelet[3437]: E0122 00:44:13.413051 3437 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-frzdg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-4rjzm_calico-system(ffc8de62-f696-4cfe-ab23-12f34741b8d0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 22 00:44:13.414691 kubelet[3437]: E0122 00:44:13.414640 3437 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-4rjzm" podUID="ffc8de62-f696-4cfe-ab23-12f34741b8d0" Jan 22 00:44:14.032579 containerd[1953]: time="2026-01-22T00:44:14.032022701Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 
22 00:44:14.355280 containerd[1953]: time="2026-01-22T00:44:14.355227127Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 22 00:44:14.357380 containerd[1953]: time="2026-01-22T00:44:14.357325674Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 22 00:44:14.357620 containerd[1953]: time="2026-01-22T00:44:14.357358144Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 22 00:44:14.357683 kubelet[3437]: E0122 00:44:14.357591 3437 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 22 00:44:14.357683 kubelet[3437]: E0122 00:44:14.357643 3437 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 22 00:44:14.358615 kubelet[3437]: E0122 00:44:14.357793 3437 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ksv99,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start 
failed in pod calico-apiserver-6bf9469c9d-75c6s_calico-apiserver(0db24862-6144-45b1-8f39-39a11a3c80dd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 22 00:44:14.359262 kubelet[3437]: E0122 00:44:14.359221 3437 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6bf9469c9d-75c6s" podUID="0db24862-6144-45b1-8f39-39a11a3c80dd" Jan 22 00:44:17.030323 kubelet[3437]: E0122 00:44:17.030199 3437 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-qvwlf" podUID="9a122a32-e7c8-4162-bccb-4b71d5c37d97" Jan 22 00:44:17.568079 systemd[1]: Started sshd@25-172.31.26.54:22-68.220.241.50:40202.service - OpenSSH per-connection server daemon (68.220.241.50:40202). Jan 22 00:44:17.567000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-172.31.26.54:22-68.220.241.50:40202 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:44:17.568955 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 22 00:44:17.568997 kernel: audit: type=1130 audit(1769042657.567:886): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-172.31.26.54:22-68.220.241.50:40202 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 22 00:44:18.015919 sshd[5781]: Accepted publickey for core from 68.220.241.50 port 40202 ssh2: RSA SHA256:guZGX9gbNcoOyrr8VXliJQHZZzuPYZGvC0Dn+A+42nM Jan 22 00:44:18.014000 audit[5781]: USER_ACCT pid=5781 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:44:18.017953 sshd-session[5781]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 22 00:44:18.028856 kernel: audit: type=1101 audit(1769042658.014:887): pid=5781 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:44:18.029023 kernel: audit: type=1103 audit(1769042658.016:888): pid=5781 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:44:18.016000 audit[5781]: CRED_ACQ pid=5781 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:44:18.034436 kernel: audit: type=1006 audit(1769042658.016:889): pid=5781 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=26 res=1 Jan 22 00:44:18.016000 audit[5781]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffdcb9ac00 a2=3 a3=0 items=0 ppid=1 pid=5781 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:18.043016 kernel: audit: type=1300 audit(1769042658.016:889): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffdcb9ac00 a2=3 a3=0 items=0 ppid=1 pid=5781 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:18.043065 containerd[1953]: time="2026-01-22T00:44:18.037141923Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 22 00:44:18.016000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 22 00:44:18.049914 kernel: audit: type=1327 audit(1769042658.016:889): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 22 00:44:18.054439 systemd-logind[1939]: New session 26 of user core. Jan 22 00:44:18.061160 systemd[1]: Started session-26.scope - Session 26 of User core. 
Jan 22 00:44:18.078279 kernel: audit: type=1105 audit(1769042658.068:890): pid=5781 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:44:18.068000 audit[5781]: USER_START pid=5781 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:44:18.082000 audit[5784]: CRED_ACQ pid=5784 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:44:18.090941 kernel: audit: type=1103 audit(1769042658.082:891): pid=5784 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:44:18.440311 containerd[1953]: time="2026-01-22T00:44:18.440256886Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 22 00:44:18.441649 sshd[5784]: Connection closed by 68.220.241.50 port 40202 Jan 22 00:44:18.443142 sshd-session[5781]: pam_unix(sshd:session): session closed for user core Jan 22 00:44:18.445059 containerd[1953]: time="2026-01-22T00:44:18.444997097Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 22 00:44:18.445569 containerd[1953]: time="2026-01-22T00:44:18.445016094Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 22 00:44:18.445630 kubelet[3437]: E0122 00:44:18.445250 3437 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 22 00:44:18.445630 kubelet[3437]: E0122 00:44:18.445301 3437 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 22 00:44:18.445630 kubelet[3437]: E0122 00:44:18.445443 3437 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hwx64,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-5bd666cb6-pggl9_calico-system(3c1a4692-58d4-49e9-ab29-e19d2d7ee1e4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 22 00:44:18.456121 kernel: audit: type=1106 audit(1769042658.445:892): pid=5781 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:44:18.445000 audit[5781]: USER_END pid=5781 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 
00:44:18.456294 kubelet[3437]: E0122 00:44:18.447018 3437 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5bd666cb6-pggl9" podUID="3c1a4692-58d4-49e9-ab29-e19d2d7ee1e4" Jan 22 00:44:18.459892 systemd[1]: sshd@25-172.31.26.54:22-68.220.241.50:40202.service: Deactivated successfully. Jan 22 00:44:18.445000 audit[5781]: CRED_DISP pid=5781 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:44:18.467957 kernel: audit: type=1104 audit(1769042658.445:893): pid=5781 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:44:18.471864 systemd[1]: session-26.scope: Deactivated successfully. Jan 22 00:44:18.459000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-172.31.26.54:22-68.220.241.50:40202 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:44:18.474063 systemd-logind[1939]: Session 26 logged out. Waiting for processes to exit. Jan 22 00:44:18.477038 systemd-logind[1939]: Removed session 26. Jan 22 00:44:19.029574 containerd[1953]: time="2026-01-22T00:44:19.029388866Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 22 00:44:19.311390 containerd[1953]: time="2026-01-22T00:44:19.311222753Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 22 00:44:19.314679 containerd[1953]: time="2026-01-22T00:44:19.314555102Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 22 00:44:19.314679 containerd[1953]: time="2026-01-22T00:44:19.314642429Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 22 00:44:19.315010 kubelet[3437]: E0122 00:44:19.314861 3437 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 22 00:44:19.315010 kubelet[3437]: E0122 00:44:19.314935 3437 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 22 00:44:19.315400 kubelet[3437]: E0122 00:44:19.315059 3437 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:1493b4b6e39948d398b0f1f06536a205,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-q7cwt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-65dd85ff-jgxjj_calico-system(96b1517d-4481-408c-9294-a20121dee9ff): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 22 00:44:19.317415 containerd[1953]: time="2026-01-22T00:44:19.317383869Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 22 00:44:19.588895 containerd[1953]: time="2026-01-22T00:44:19.588814214Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 22 00:44:19.591477 containerd[1953]: time="2026-01-22T00:44:19.591422585Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 22 00:44:19.591645 containerd[1953]: time="2026-01-22T00:44:19.591461033Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 22 00:44:19.591808 kubelet[3437]: E0122 00:44:19.591766 3437 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 22 00:44:19.592253 kubelet[3437]: E0122 00:44:19.591838 3437 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 22 00:44:19.592253 kubelet[3437]: E0122 00:44:19.592020 3437 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-q7cwt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-65dd85ff-jgxjj_calico-system(96b1517d-4481-408c-9294-a20121dee9ff): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 22 00:44:19.594186 kubelet[3437]: E0122 00:44:19.593588 3437 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-65dd85ff-jgxjj" podUID="96b1517d-4481-408c-9294-a20121dee9ff" Jan 22 00:44:23.538045 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 22 00:44:23.538214 kernel: audit: type=1130 audit(1769042663.535:895): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-172.31.26.54:22-68.220.241.50:48902 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:44:23.535000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-172.31.26.54:22-68.220.241.50:48902 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 22 00:44:23.536367 systemd[1]: Started sshd@26-172.31.26.54:22-68.220.241.50:48902.service - OpenSSH per-connection server daemon (68.220.241.50:48902). Jan 22 00:44:24.060911 sshd[5819]: Accepted publickey for core from 68.220.241.50 port 48902 ssh2: RSA SHA256:guZGX9gbNcoOyrr8VXliJQHZZzuPYZGvC0Dn+A+42nM Jan 22 00:44:24.059000 audit[5819]: USER_ACCT pid=5819 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:44:24.068939 kernel: audit: type=1101 audit(1769042664.059:896): pid=5819 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:44:24.070937 sshd-session[5819]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 22 00:44:24.067000 audit[5819]: CRED_ACQ pid=5819 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:44:24.078903 kernel: audit: type=1103 audit(1769042664.067:897): pid=5819 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:44:24.087021 kernel: audit: type=1006 audit(1769042664.067:898): pid=5819 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=27 res=1 Jan 22 00:44:24.067000 audit[5819]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcffc04ea0 a2=3 a3=0 items=0 ppid=1 pid=5819 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:24.097391 kernel: audit: type=1300 audit(1769042664.067:898): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcffc04ea0 a2=3 a3=0 items=0 ppid=1 pid=5819 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:24.067000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 22 00:44:24.101057 kernel: audit: type=1327 audit(1769042664.067:898): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 22 00:44:24.100932 systemd-logind[1939]: New session 27 of user core. Jan 22 00:44:24.104225 systemd[1]: Started session-27.scope - Session 27 of User core. 
Jan 22 00:44:24.110000 audit[5819]: USER_START pid=5819 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:44:24.118060 kernel: audit: type=1105 audit(1769042664.110:899): pid=5819 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:44:24.118000 audit[5822]: CRED_ACQ pid=5822 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:44:24.126970 kernel: audit: type=1103 audit(1769042664.118:900): pid=5822 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:44:24.588558 sshd[5822]: Connection closed by 68.220.241.50 port 48902 Jan 22 00:44:24.590066 sshd-session[5819]: pam_unix(sshd:session): session closed for user core Jan 22 00:44:24.590000 audit[5819]: USER_END pid=5819 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:44:24.602310 kernel: audit: type=1106 audit(1769042664.590:901): pid=5819 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:44:24.602418 kernel: audit: type=1104 audit(1769042664.591:902): pid=5819 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:44:24.591000 audit[5819]: CRED_DISP pid=5819 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:44:24.600106 systemd[1]: sshd@26-172.31.26.54:22-68.220.241.50:48902.service: Deactivated successfully. Jan 22 00:44:24.599000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-172.31.26.54:22-68.220.241.50:48902 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:44:24.602564 systemd[1]: session-27.scope: Deactivated successfully. Jan 22 00:44:24.604703 systemd-logind[1939]: Session 27 logged out. Waiting for processes to exit. Jan 22 00:44:24.606213 systemd-logind[1939]: Removed session 27. 
Jan 22 00:44:25.029512 kubelet[3437]: E0122 00:44:25.029109 3437 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-4rjzm" podUID="ffc8de62-f696-4cfe-ab23-12f34741b8d0" Jan 22 00:44:26.032242 containerd[1953]: time="2026-01-22T00:44:26.032203146Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 22 00:44:26.299623 containerd[1953]: time="2026-01-22T00:44:26.299156780Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 22 00:44:26.308312 containerd[1953]: time="2026-01-22T00:44:26.308197781Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 22 00:44:26.308828 containerd[1953]: time="2026-01-22T00:44:26.308240980Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 22 00:44:26.309296 kubelet[3437]: E0122 00:44:26.309163 3437 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 22 00:44:26.309296 kubelet[3437]: E0122 00:44:26.309251 3437 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 22 00:44:26.311036 kubelet[3437]: E0122 00:44:26.310914 3437 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c5pzc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6bf9469c9d-jt79q_calico-apiserver(4921dab1-a0fc-4a4a-9ec2-f3f09160935a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 22 00:44:26.312194 kubelet[3437]: E0122 00:44:26.312142 3437 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6bf9469c9d-jt79q" podUID="4921dab1-a0fc-4a4a-9ec2-f3f09160935a" Jan 22 00:44:27.031194 kubelet[3437]: E0122 00:44:27.031143 3437 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6bf9469c9d-75c6s" podUID="0db24862-6144-45b1-8f39-39a11a3c80dd" Jan 22 00:44:28.029913 containerd[1953]: time="2026-01-22T00:44:28.029656921Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 22 00:44:28.318970 containerd[1953]: time="2026-01-22T00:44:28.318026151Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 22 00:44:28.320214 containerd[1953]: time="2026-01-22T00:44:28.320117341Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 22 00:44:28.320357 containerd[1953]: time="2026-01-22T00:44:28.320205645Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 22 00:44:28.320812 kubelet[3437]: E0122 00:44:28.320729 3437 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 22 00:44:28.321605 kubelet[3437]: E0122 00:44:28.321193 3437 kuberuntime_image.go:55] "Failed to pull image" err="rpc 
error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 22 00:44:28.321907 kubelet[3437]: E0122 00:44:28.321455 3437 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ng7wz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-qvwlf_calico-system(9a122a32-e7c8-4162-bccb-4b71d5c37d97): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 22 00:44:28.324691 containerd[1953]: time="2026-01-22T00:44:28.324539266Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 22 00:44:28.627910 containerd[1953]: time="2026-01-22T00:44:28.627709109Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 22 00:44:28.630139 containerd[1953]: time="2026-01-22T00:44:28.629929955Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 22 00:44:28.633120 kubelet[3437]: E0122 00:44:28.633068 3437 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 22 00:44:28.633358 kubelet[3437]: E0122 00:44:28.633124 3437 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 22 00:44:28.633358 kubelet[3437]: E0122 00:44:28.633273 3437 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ng7wz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-qvwlf_calico-system(9a122a32-e7c8-4162-bccb-4b71d5c37d97): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 22 00:44:28.634496 kubelet[3437]: E0122 00:44:28.634451 3437 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-qvwlf" podUID="9a122a32-e7c8-4162-bccb-4b71d5c37d97" Jan 22 
00:44:28.658860 containerd[1953]: time="2026-01-22T00:44:28.630048225Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 22 00:44:29.682788 systemd[1]: Started sshd@27-172.31.26.54:22-68.220.241.50:48904.service - OpenSSH per-connection server daemon (68.220.241.50:48904). Jan 22 00:44:29.690432 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 22 00:44:29.690539 kernel: audit: type=1130 audit(1769042669.682:904): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-172.31.26.54:22-68.220.241.50:48904 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:44:29.682000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-172.31.26.54:22-68.220.241.50:48904 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:44:30.149000 audit[5836]: USER_ACCT pid=5836 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:44:30.157731 kernel: audit: type=1101 audit(1769042670.149:905): pid=5836 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:44:30.158481 sshd[5836]: Accepted publickey for core from 68.220.241.50 port 48904 ssh2: RSA SHA256:guZGX9gbNcoOyrr8VXliJQHZZzuPYZGvC0Dn+A+42nM Jan 22 00:44:30.159000 audit[5836]: CRED_ACQ pid=5836 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:44:30.168991 kernel: audit: type=1103 audit(1769042670.159:906): pid=5836 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:44:30.169113 sshd-session[5836]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 22 00:44:30.176985 kernel: audit: type=1006 audit(1769042670.159:907): pid=5836 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=28 res=1 Jan 22 00:44:30.159000 audit[5836]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff767873d0 a2=3 a3=0 items=0 ppid=1 pid=5836 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=28 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:30.185090 kernel: audit: type=1300 audit(1769042670.159:907): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff767873d0 a2=3 a3=0 items=0 ppid=1 pid=5836 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=28 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:30.159000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 22 
00:44:30.193897 kernel: audit: type=1327 audit(1769042670.159:907): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 22 00:44:30.196200 systemd-logind[1939]: New session 28 of user core. Jan 22 00:44:30.199117 systemd[1]: Started session-28.scope - Session 28 of User core. Jan 22 00:44:30.204000 audit[5836]: USER_START pid=5836 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:44:30.213928 kernel: audit: type=1105 audit(1769042670.204:908): pid=5836 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:44:30.214000 audit[5839]: CRED_ACQ pid=5839 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:44:30.221902 kernel: audit: type=1103 audit(1769042670.214:909): pid=5839 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:44:30.539902 sshd[5839]: Connection closed by 68.220.241.50 port 48904 Jan 22 00:44:30.540753 sshd-session[5836]: pam_unix(sshd:session): session closed for user core Jan 22 00:44:30.543000 audit[5836]: USER_END pid=5836 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:44:30.552971 kernel: audit: type=1106 audit(1769042670.543:910): pid=5836 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:44:30.555747 systemd-logind[1939]: Session 28 logged out. Waiting for processes to exit. Jan 22 00:44:30.557195 systemd[1]: sshd@27-172.31.26.54:22-68.220.241.50:48904.service: Deactivated successfully. Jan 22 00:44:30.560521 systemd[1]: session-28.scope: Deactivated successfully. Jan 22 00:44:30.550000 audit[5836]: CRED_DISP pid=5836 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:44:30.567894 kernel: audit: type=1104 audit(1769042670.550:911): pid=5836 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 22 00:44:30.572347 systemd-logind[1939]: Removed session 28. 
Jan 22 00:44:30.556000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-172.31.26.54:22-68.220.241.50:48904 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:44:32.029892 kubelet[3437]: E0122 00:44:32.028499 3437 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5bd666cb6-pggl9" podUID="3c1a4692-58d4-49e9-ab29-e19d2d7ee1e4" Jan 22 00:44:33.030142 kubelet[3437]: E0122 00:44:33.030092 3437 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-65dd85ff-jgxjj" podUID="96b1517d-4481-408c-9294-a20121dee9ff" Jan 22 00:44:38.030484 kubelet[3437]: E0122 00:44:38.029416 3437 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-4rjzm" podUID="ffc8de62-f696-4cfe-ab23-12f34741b8d0" Jan 22 00:44:38.031312 kubelet[3437]: E0122 00:44:38.031210 3437 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6bf9469c9d-75c6s" podUID="0db24862-6144-45b1-8f39-39a11a3c80dd" Jan 22 00:44:39.027817 kubelet[3437]: E0122 00:44:39.027755 3437 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6bf9469c9d-jt79q" 
podUID="4921dab1-a0fc-4a4a-9ec2-f3f09160935a" Jan 22 00:44:41.029818 kubelet[3437]: E0122 00:44:41.029753 3437 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-qvwlf" podUID="9a122a32-e7c8-4162-bccb-4b71d5c37d97" Jan 22 00:44:44.207849 systemd[1]: cri-containerd-2d0ab1dfa6df226894fbb261c87c740be6f5600bb057130f3bc8bbc6a5a5ca8c.scope: Deactivated successfully. Jan 22 00:44:44.208864 systemd[1]: cri-containerd-2d0ab1dfa6df226894fbb261c87c740be6f5600bb057130f3bc8bbc6a5a5ca8c.scope: Consumed 4.775s CPU time, 97.9M memory peak, 86.5M read from disk. Jan 22 00:44:44.210000 audit: BPF prog-id=265 op=LOAD Jan 22 00:44:44.212783 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 22 00:44:44.212848 kernel: audit: type=1334 audit(1769042684.210:913): prog-id=265 op=LOAD Jan 22 00:44:44.218210 kernel: audit: type=1334 audit(1769042684.210:914): prog-id=92 op=UNLOAD Jan 22 00:44:44.218268 kernel: audit: type=1334 audit(1769042684.212:915): prog-id=117 op=UNLOAD Jan 22 00:44:44.210000 audit: BPF prog-id=92 op=UNLOAD Jan 22 00:44:44.212000 audit: BPF prog-id=117 op=UNLOAD Jan 22 00:44:44.212000 audit: BPF prog-id=121 op=UNLOAD Jan 22 00:44:44.221129 kernel: audit: type=1334 audit(1769042684.212:916): prog-id=121 op=UNLOAD Jan 22 00:44:44.308464 containerd[1953]: time="2026-01-22T00:44:44.308407843Z" level=info msg="received container exit event container_id:\"2d0ab1dfa6df226894fbb261c87c740be6f5600bb057130f3bc8bbc6a5a5ca8c\" id:\"2d0ab1dfa6df226894fbb261c87c740be6f5600bb057130f3bc8bbc6a5a5ca8c\" pid:3181 exit_status:1 exited_at:{seconds:1769042684 nanos:247673488}" Jan 22 00:44:44.429932 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2d0ab1dfa6df226894fbb261c87c740be6f5600bb057130f3bc8bbc6a5a5ca8c-rootfs.mount: Deactivated successfully. Jan 22 00:44:44.884930 kubelet[3437]: I0122 00:44:44.884602 3437 scope.go:117] "RemoveContainer" containerID="2d0ab1dfa6df226894fbb261c87c740be6f5600bb057130f3bc8bbc6a5a5ca8c" Jan 22 00:44:44.891114 containerd[1953]: time="2026-01-22T00:44:44.890965748Z" level=info msg="CreateContainer within sandbox \"6d86e4498593e10f84ddcbdbb8ef7cce9b32e1e90eede8d683f1ac45da2c3455\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Jan 22 00:44:44.959295 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2870221514.mount: Deactivated successfully. 
Jan 22 00:44:44.964811 containerd[1953]: time="2026-01-22T00:44:44.964761545Z" level=info msg="Container e6d50338ffab1b6e86121eb8ae4681cad26c07683bbb069f60533684199727b2: CDI devices from CRI Config.CDIDevices: []" Jan 22 00:44:44.999741 containerd[1953]: time="2026-01-22T00:44:44.999677587Z" level=info msg="CreateContainer within sandbox \"6d86e4498593e10f84ddcbdbb8ef7cce9b32e1e90eede8d683f1ac45da2c3455\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"e6d50338ffab1b6e86121eb8ae4681cad26c07683bbb069f60533684199727b2\"" Jan 22 00:44:45.000458 containerd[1953]: time="2026-01-22T00:44:45.000414746Z" level=info msg="StartContainer for \"e6d50338ffab1b6e86121eb8ae4681cad26c07683bbb069f60533684199727b2\"" Jan 22 00:44:45.001804 containerd[1953]: time="2026-01-22T00:44:45.001746517Z" level=info msg="connecting to shim e6d50338ffab1b6e86121eb8ae4681cad26c07683bbb069f60533684199727b2" address="unix:///run/containerd/s/300790fb961c72be63c16253019c0f8406f2fc37f111ee4a3f78d5246f0ac7d4" protocol=ttrpc version=3 Jan 22 00:44:45.029388 kubelet[3437]: E0122 00:44:45.029232 3437 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-65dd85ff-jgxjj" podUID="96b1517d-4481-408c-9294-a20121dee9ff" Jan 22 00:44:45.038269 systemd[1]: Started cri-containerd-e6d50338ffab1b6e86121eb8ae4681cad26c07683bbb069f60533684199727b2.scope - libcontainer container e6d50338ffab1b6e86121eb8ae4681cad26c07683bbb069f60533684199727b2. 
Jan 22 00:44:45.079914 kernel: audit: type=1334 audit(1769042685.077:917): prog-id=266 op=LOAD Jan 22 00:44:45.077000 audit: BPF prog-id=266 op=LOAD Jan 22 00:44:45.080000 audit: BPF prog-id=267 op=LOAD Jan 22 00:44:45.080000 audit[5884]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0000fa238 a2=98 a3=0 items=0 ppid=3018 pid=5884 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:45.085396 kernel: audit: type=1334 audit(1769042685.080:918): prog-id=267 op=LOAD Jan 22 00:44:45.089769 kernel: audit: type=1300 audit(1769042685.080:918): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0000fa238 a2=98 a3=0 items=0 ppid=3018 pid=5884 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:45.089812 kernel: audit: type=1327 audit(1769042685.080:918): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6536643530333338666661623162366538363132316562386165343638 Jan 22 00:44:45.080000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6536643530333338666661623162366538363132316562386165343638 Jan 22 00:44:45.097841 kernel: audit: type=1334 audit(1769042685.082:919): prog-id=267 op=UNLOAD Jan 22 00:44:45.098048 kernel: audit: type=1300 audit(1769042685.082:919): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3018 pid=5884 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:45.082000 audit: BPF prog-id=267 op=UNLOAD Jan 22 00:44:45.082000 audit[5884]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3018 pid=5884 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:45.082000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6536643530333338666661623162366538363132316562386165343638 Jan 22 00:44:45.112000 audit: BPF prog-id=268 op=LOAD Jan 22 00:44:45.112000 audit[5884]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0000fa488 a2=98 a3=0 items=0 ppid=3018 pid=5884 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:45.112000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6536643530333338666661623162366538363132316562386165343638 Jan 22 00:44:45.112000 audit: BPF prog-id=269 op=LOAD Jan 22 00:44:45.112000 audit[5884]: SYSCALL arch=c000003e syscall=321 
success=yes exit=23 a0=5 a1=c0000fa218 a2=98 a3=0 items=0 ppid=3018 pid=5884 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:45.112000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6536643530333338666661623162366538363132316562386165343638 Jan 22 00:44:45.112000 audit: BPF prog-id=269 op=UNLOAD Jan 22 00:44:45.112000 audit[5884]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3018 pid=5884 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:45.112000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6536643530333338666661623162366538363132316562386165343638 Jan 22 00:44:45.112000 audit: BPF prog-id=268 op=UNLOAD Jan 22 00:44:45.112000 audit[5884]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3018 pid=5884 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:45.112000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6536643530333338666661623162366538363132316562386165343638 Jan 22 00:44:45.112000 audit: BPF prog-id=270 op=LOAD Jan 22 00:44:45.112000 audit[5884]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0000fa6e8 a2=98 a3=0 items=0 ppid=3018 pid=5884 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:45.112000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6536643530333338666661623162366538363132316562386165343638 Jan 22 00:44:45.164164 containerd[1953]: time="2026-01-22T00:44:45.164017990Z" level=info msg="StartContainer for \"e6d50338ffab1b6e86121eb8ae4681cad26c07683bbb069f60533684199727b2\" returns successfully" Jan 22 00:44:45.268562 systemd[1]: cri-containerd-35863310c3d1b3280aa7d0f21f7ae2d6c2332def2f6ab31ee42d65f0c5a4d1d9.scope: Deactivated successfully. Jan 22 00:44:45.269240 systemd[1]: cri-containerd-35863310c3d1b3280aa7d0f21f7ae2d6c2332def2f6ab31ee42d65f0c5a4d1d9.scope: Consumed 14.004s CPU time, 109.9M memory peak, 46.1M read from disk. 
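The PROCTITLE values in these audit records are hex-encoded, NUL-separated command lines, so the long 72756E63... blobs above are just the runc invocation (runc --root /run/containerd/runc/k8s.io --log ...), truncated by the kernel. A small sketch for decoding them; both sample strings are copied from records in this journal (the second is only the leading portion of one runc field):

    def decode_proctitle(hexstr: str) -> list[str]:
        """Audit PROCTITLE fields are hex-encoded argv joined by NUL bytes."""
        return bytes.fromhex(hexstr).decode(errors="replace").split("\x00")

    # From the sshd session records earlier in the journal.
    print(decode_proctitle("737368642D73657373696F6E3A20636F7265205B707269765D"))
    # -> ['sshd-session: core [priv]']

    # Leading portion of one of the runc PROCTITLE fields above.
    print(decode_proctitle(
        "72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F"))
    # -> ['runc', '--root', '/run/containerd/runc/k8s.io']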
Jan 22 00:44:45.272427 containerd[1953]: time="2026-01-22T00:44:45.272387306Z" level=info msg="received container exit event container_id:\"35863310c3d1b3280aa7d0f21f7ae2d6c2332def2f6ab31ee42d65f0c5a4d1d9\" id:\"35863310c3d1b3280aa7d0f21f7ae2d6c2332def2f6ab31ee42d65f0c5a4d1d9\" pid:3754 exit_status:1 exited_at:{seconds:1769042685 nanos:271643570}" Jan 22 00:44:45.273000 audit: BPF prog-id=155 op=UNLOAD Jan 22 00:44:45.273000 audit: BPF prog-id=159 op=UNLOAD Jan 22 00:44:45.304619 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-35863310c3d1b3280aa7d0f21f7ae2d6c2332def2f6ab31ee42d65f0c5a4d1d9-rootfs.mount: Deactivated successfully. Jan 22 00:44:45.896364 kubelet[3437]: I0122 00:44:45.896324 3437 scope.go:117] "RemoveContainer" containerID="35863310c3d1b3280aa7d0f21f7ae2d6c2332def2f6ab31ee42d65f0c5a4d1d9" Jan 22 00:44:45.912922 containerd[1953]: time="2026-01-22T00:44:45.912886289Z" level=info msg="CreateContainer within sandbox \"a32d2d682a86a48e0db1be1542c3fc6e8de6bd80b7aed3196f563597a3015698\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Jan 22 00:44:45.931741 containerd[1953]: time="2026-01-22T00:44:45.930311151Z" level=info msg="Container 3e0e15b82f6d9608901e144627b7a3f24a70cf0457e87d2a94e1cedeffe84f94: CDI devices from CRI Config.CDIDevices: []" Jan 22 00:44:45.943954 containerd[1953]: time="2026-01-22T00:44:45.943910859Z" level=info msg="CreateContainer within sandbox \"a32d2d682a86a48e0db1be1542c3fc6e8de6bd80b7aed3196f563597a3015698\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"3e0e15b82f6d9608901e144627b7a3f24a70cf0457e87d2a94e1cedeffe84f94\"" Jan 22 00:44:45.944465 containerd[1953]: time="2026-01-22T00:44:45.944435313Z" level=info msg="StartContainer for \"3e0e15b82f6d9608901e144627b7a3f24a70cf0457e87d2a94e1cedeffe84f94\"" Jan 22 00:44:45.945922 containerd[1953]: time="2026-01-22T00:44:45.945885885Z" level=info msg="connecting to shim 3e0e15b82f6d9608901e144627b7a3f24a70cf0457e87d2a94e1cedeffe84f94" address="unix:///run/containerd/s/8809c807fe746b4205c03dbb7af082b0006adf3a13be052c5fec38691461e7ab" protocol=ttrpc version=3 Jan 22 00:44:45.981149 systemd[1]: Started cri-containerd-3e0e15b82f6d9608901e144627b7a3f24a70cf0457e87d2a94e1cedeffe84f94.scope - libcontainer container 3e0e15b82f6d9608901e144627b7a3f24a70cf0457e87d2a94e1cedeffe84f94. 
Jan 22 00:44:45.999000 audit: BPF prog-id=271 op=LOAD Jan 22 00:44:46.000000 audit: BPF prog-id=272 op=LOAD Jan 22 00:44:46.000000 audit[5927]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8238 a2=98 a3=0 items=0 ppid=3563 pid=5927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:46.000000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3365306531356238326636643936303839303165313434363237623761 Jan 22 00:44:46.000000 audit: BPF prog-id=272 op=UNLOAD Jan 22 00:44:46.000000 audit[5927]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3563 pid=5927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:46.000000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3365306531356238326636643936303839303165313434363237623761 Jan 22 00:44:46.000000 audit: BPF prog-id=273 op=LOAD Jan 22 00:44:46.000000 audit[5927]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=3563 pid=5927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:46.000000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3365306531356238326636643936303839303165313434363237623761 Jan 22 00:44:46.000000 audit: BPF prog-id=274 op=LOAD Jan 22 00:44:46.000000 audit[5927]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=3563 pid=5927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:46.000000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3365306531356238326636643936303839303165313434363237623761 Jan 22 00:44:46.000000 audit: BPF prog-id=274 op=UNLOAD Jan 22 00:44:46.000000 audit[5927]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3563 pid=5927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:46.000000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3365306531356238326636643936303839303165313434363237623761 Jan 22 00:44:46.000000 audit: BPF prog-id=273 op=UNLOAD Jan 22 00:44:46.000000 audit[5927]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3563 pid=5927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:46.000000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3365306531356238326636643936303839303165313434363237623761 Jan 22 00:44:46.000000 audit: BPF prog-id=275 op=LOAD Jan 22 00:44:46.000000 audit[5927]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 ppid=3563 pid=5927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:46.000000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3365306531356238326636643936303839303165313434363237623761 Jan 22 00:44:46.032750 containerd[1953]: time="2026-01-22T00:44:46.032669946Z" level=info msg="StartContainer for \"3e0e15b82f6d9608901e144627b7a3f24a70cf0457e87d2a94e1cedeffe84f94\" returns successfully" Jan 22 00:44:47.039167 kubelet[3437]: E0122 00:44:47.039129 3437 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5bd666cb6-pggl9" podUID="3c1a4692-58d4-49e9-ab29-e19d2d7ee1e4" Jan 22 00:44:49.027953 kubelet[3437]: E0122 00:44:49.027908 3437 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6bf9469c9d-75c6s" podUID="0db24862-6144-45b1-8f39-39a11a3c80dd" Jan 22 00:44:49.581672 kubelet[3437]: E0122 00:44:49.568086 3437 controller.go:195] "Failed to update lease" err="Put \"https://172.31.26.54:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-26-54?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 22 00:44:50.466358 systemd[1]: cri-containerd-3128adf35cf4fd85b64326afed60464878fa2cc26a27ddcd0b5d8c830045ff13.scope: Deactivated successfully. Jan 22 00:44:50.466971 systemd[1]: cri-containerd-3128adf35cf4fd85b64326afed60464878fa2cc26a27ddcd0b5d8c830045ff13.scope: Consumed 3.411s CPU time, 37.7M memory peak, 40.3M read from disk. 
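Each time one of these container scopes is deactivated, systemd logs its resource accounting ("Consumed ... CPU time, ... memory peak, ... read from disk"), which makes it easy to compare the crashed control-plane containers. A rough sketch for pulling those figures out of the journal text; it only handles the unit forms that actually appear here (s/ms for CPU, M for memory and disk):

    import re
    import sys

    # Matches summaries like
    # "Consumed 3.411s CPU time, 37.7M memory peak, 40.3M read from disk."
    ACCT = re.compile(
        r"Consumed (?P<cpu>[\d.]+)(?P<cpu_unit>ms|s) CPU time, "
        r"(?P<mem>[\d.]+)M memory peak, (?P<disk>[\d.]+)M read from disk"
    )

    for m in ACCT.finditer(sys.stdin.read()):
        cpu_s = float(m["cpu"]) / (1000 if m["cpu_unit"] == "ms" else 1)
        print(f"cpu={cpu_s:.3f}s  mem_peak={m['mem']}M  read={m['disk']}M")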
Jan 22 00:44:50.470862 kernel: kauditd_printk_skb: 40 callbacks suppressed Jan 22 00:44:50.472849 kernel: audit: type=1334 audit(1769042690.466:935): prog-id=276 op=LOAD Jan 22 00:44:50.472921 kernel: audit: type=1334 audit(1769042690.466:936): prog-id=94 op=UNLOAD Jan 22 00:44:50.466000 audit: BPF prog-id=276 op=LOAD Jan 22 00:44:50.466000 audit: BPF prog-id=94 op=UNLOAD Jan 22 00:44:50.473000 audit: BPF prog-id=112 op=UNLOAD Jan 22 00:44:50.478008 kernel: audit: type=1334 audit(1769042690.473:937): prog-id=112 op=UNLOAD Jan 22 00:44:50.478092 kernel: audit: type=1334 audit(1769042690.473:938): prog-id=116 op=UNLOAD Jan 22 00:44:50.473000 audit: BPF prog-id=116 op=UNLOAD Jan 22 00:44:50.482286 containerd[1953]: time="2026-01-22T00:44:50.482181331Z" level=info msg="received container exit event container_id:\"3128adf35cf4fd85b64326afed60464878fa2cc26a27ddcd0b5d8c830045ff13\" id:\"3128adf35cf4fd85b64326afed60464878fa2cc26a27ddcd0b5d8c830045ff13\" pid:3179 exit_status:1 exited_at:{seconds:1769042690 nanos:481493466}" Jan 22 00:44:50.519667 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3128adf35cf4fd85b64326afed60464878fa2cc26a27ddcd0b5d8c830045ff13-rootfs.mount: Deactivated successfully. Jan 22 00:44:50.911381 kubelet[3437]: I0122 00:44:50.911350 3437 scope.go:117] "RemoveContainer" containerID="3128adf35cf4fd85b64326afed60464878fa2cc26a27ddcd0b5d8c830045ff13" Jan 22 00:44:50.913884 containerd[1953]: time="2026-01-22T00:44:50.913838832Z" level=info msg="CreateContainer within sandbox \"790f2487a769b434dd9e7314fb2eb6a095757f58aece830d9773173a535e2813\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Jan 22 00:44:50.936842 containerd[1953]: time="2026-01-22T00:44:50.936126866Z" level=info msg="Container e4058f69acf6c7771060537fa2ea8627858770ebe0fb496d40f04f7fc01f517a: CDI devices from CRI Config.CDIDevices: []" Jan 22 00:44:50.940449 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1234017270.mount: Deactivated successfully. Jan 22 00:44:50.953818 containerd[1953]: time="2026-01-22T00:44:50.953757562Z" level=info msg="CreateContainer within sandbox \"790f2487a769b434dd9e7314fb2eb6a095757f58aece830d9773173a535e2813\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"e4058f69acf6c7771060537fa2ea8627858770ebe0fb496d40f04f7fc01f517a\"" Jan 22 00:44:50.954354 containerd[1953]: time="2026-01-22T00:44:50.954250020Z" level=info msg="StartContainer for \"e4058f69acf6c7771060537fa2ea8627858770ebe0fb496d40f04f7fc01f517a\"" Jan 22 00:44:50.955495 containerd[1953]: time="2026-01-22T00:44:50.955460011Z" level=info msg="connecting to shim e4058f69acf6c7771060537fa2ea8627858770ebe0fb496d40f04f7fc01f517a" address="unix:///run/containerd/s/435453e7b9038a1cb913e2e4b7b0467660d52d854bf3f38a19360b9fdfa6e72f" protocol=ttrpc version=3 Jan 22 00:44:50.986164 systemd[1]: Started cri-containerd-e4058f69acf6c7771060537fa2ea8627858770ebe0fb496d40f04f7fc01f517a.scope - libcontainer container e4058f69acf6c7771060537fa2ea8627858770ebe0fb496d40f04f7fc01f517a. 
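containerd reports each crash as a "received container exit event" record with its payload in protobuf text form (container_id, pid, exit_status, exited_at). A small sketch that turns one such payload into a readable line; the sample values are the kube-scheduler exit just above, with the journal's quoting stripped:

    import re
    from datetime import datetime, timezone

    # Payload of the exit event above, with the journal's escaping removed.
    event = ('container_id:"3128adf35cf4fd85b64326afed60464878fa2cc26a27ddcd0b5d8c830045ff13" '
             'pid:3179 exit_status:1 exited_at:{seconds:1769042690 nanos:481493466}')

    cid = re.search(r'container_id:"([0-9a-f]+)"', event).group(1)
    pid = int(re.search(r"pid:(\d+)", event).group(1))
    status = int(re.search(r"exit_status:(\d+)", event).group(1))
    secs = int(re.search(r"seconds:(\d+)", event).group(1))
    nanos = int(re.search(r"nanos:(\d+)", event).group(1))

    exited = datetime.fromtimestamp(secs, tz=timezone.utc).replace(microsecond=nanos // 1000)
    print(f"{cid[:12]}  pid={pid}  exit_status={status}  exited_at={exited.isoformat()}")
    # -> 3128adf35cf4  pid=3179  exit_status=1  exited_at=2026-01-22T00:44:50.481493+00:00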
Jan 22 00:44:51.001000 audit: BPF prog-id=277 op=LOAD Jan 22 00:44:51.003949 kernel: audit: type=1334 audit(1769042691.001:939): prog-id=277 op=LOAD Jan 22 00:44:51.004515 kernel: audit: type=1334 audit(1769042691.003:940): prog-id=278 op=LOAD Jan 22 00:44:51.003000 audit: BPF prog-id=278 op=LOAD Jan 22 00:44:51.003000 audit[5996]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017e238 a2=98 a3=0 items=0 ppid=3034 pid=5996 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:51.012146 kernel: audit: type=1300 audit(1769042691.003:940): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017e238 a2=98 a3=0 items=0 ppid=3034 pid=5996 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:51.003000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6534303538663639616366366337373731303630353337666132656138 Jan 22 00:44:51.020277 kernel: audit: type=1327 audit(1769042691.003:940): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6534303538663639616366366337373731303630353337666132656138 Jan 22 00:44:51.020377 kernel: audit: type=1334 audit(1769042691.003:941): prog-id=278 op=UNLOAD Jan 22 00:44:51.003000 audit: BPF prog-id=278 op=UNLOAD Jan 22 00:44:51.026326 kernel: audit: type=1300 audit(1769042691.003:941): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3034 pid=5996 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:51.003000 audit[5996]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3034 pid=5996 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:51.003000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6534303538663639616366366337373731303630353337666132656138 Jan 22 00:44:51.003000 audit: BPF prog-id=279 op=LOAD Jan 22 00:44:51.003000 audit[5996]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017e488 a2=98 a3=0 items=0 ppid=3034 pid=5996 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:51.003000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6534303538663639616366366337373731303630353337666132656138 Jan 22 00:44:51.003000 audit: BPF prog-id=280 op=LOAD Jan 22 00:44:51.003000 audit[5996]: SYSCALL arch=c000003e syscall=321 
success=yes exit=23 a0=5 a1=c00017e218 a2=98 a3=0 items=0 ppid=3034 pid=5996 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:51.003000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6534303538663639616366366337373731303630353337666132656138 Jan 22 00:44:51.003000 audit: BPF prog-id=280 op=UNLOAD Jan 22 00:44:51.003000 audit[5996]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3034 pid=5996 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:51.003000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6534303538663639616366366337373731303630353337666132656138 Jan 22 00:44:51.005000 audit: BPF prog-id=279 op=UNLOAD Jan 22 00:44:51.005000 audit[5996]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3034 pid=5996 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:51.005000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6534303538663639616366366337373731303630353337666132656138 Jan 22 00:44:51.005000 audit: BPF prog-id=281 op=LOAD Jan 22 00:44:51.005000 audit[5996]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017e6e8 a2=98 a3=0 items=0 ppid=3034 pid=5996 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:51.005000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6534303538663639616366366337373731303630353337666132656138 Jan 22 00:44:51.072320 containerd[1953]: time="2026-01-22T00:44:51.072276554Z" level=info msg="StartContainer for \"e4058f69acf6c7771060537fa2ea8627858770ebe0fb496d40f04f7fc01f517a\" returns successfully" Jan 22 00:44:52.028529 kubelet[3437]: E0122 00:44:52.028477 3437 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-qvwlf" podUID="9a122a32-e7c8-4162-bccb-4b71d5c37d97" Jan 22 00:44:53.027806 kubelet[3437]: E0122 00:44:53.027761 3437 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-4rjzm" podUID="ffc8de62-f696-4cfe-ab23-12f34741b8d0" Jan 22 00:44:53.028009 kubelet[3437]: E0122 00:44:53.027840 3437 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6bf9469c9d-jt79q" podUID="4921dab1-a0fc-4a4a-9ec2-f3f09160935a" Jan 22 00:44:57.587990 systemd[1]: cri-containerd-3e0e15b82f6d9608901e144627b7a3f24a70cf0457e87d2a94e1cedeffe84f94.scope: Deactivated successfully. Jan 22 00:44:57.588446 systemd[1]: cri-containerd-3e0e15b82f6d9608901e144627b7a3f24a70cf0457e87d2a94e1cedeffe84f94.scope: Consumed 303ms CPU time, 65M memory peak, 29.7M read from disk. Jan 22 00:44:57.589000 audit: BPF prog-id=271 op=UNLOAD Jan 22 00:44:57.591634 kernel: kauditd_printk_skb: 16 callbacks suppressed Jan 22 00:44:57.591681 kernel: audit: type=1334 audit(1769042697.589:947): prog-id=271 op=UNLOAD Jan 22 00:44:57.591714 containerd[1953]: time="2026-01-22T00:44:57.590609402Z" level=info msg="received container exit event container_id:\"3e0e15b82f6d9608901e144627b7a3f24a70cf0457e87d2a94e1cedeffe84f94\" id:\"3e0e15b82f6d9608901e144627b7a3f24a70cf0457e87d2a94e1cedeffe84f94\" pid:5939 exit_status:1 exited_at:{seconds:1769042697 nanos:588198974}" Jan 22 00:44:57.589000 audit: BPF prog-id=275 op=UNLOAD Jan 22 00:44:57.594893 kernel: audit: type=1334 audit(1769042697.589:948): prog-id=275 op=UNLOAD Jan 22 00:44:57.629488 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3e0e15b82f6d9608901e144627b7a3f24a70cf0457e87d2a94e1cedeffe84f94-rootfs.mount: Deactivated successfully. 
Jan 22 00:44:57.939828 kubelet[3437]: I0122 00:44:57.939688 3437 scope.go:117] "RemoveContainer" containerID="35863310c3d1b3280aa7d0f21f7ae2d6c2332def2f6ab31ee42d65f0c5a4d1d9" Jan 22 00:44:57.940502 kubelet[3437]: I0122 00:44:57.940446 3437 scope.go:117] "RemoveContainer" containerID="3e0e15b82f6d9608901e144627b7a3f24a70cf0457e87d2a94e1cedeffe84f94" Jan 22 00:44:57.940678 kubelet[3437]: E0122 00:44:57.940652 3437 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=tigera-operator pod=tigera-operator-7dcd859c48-hqbxz_tigera-operator(f80908c2-7ff5-4ad9-99fe-716478a0dee4)\"" pod="tigera-operator/tigera-operator-7dcd859c48-hqbxz" podUID="f80908c2-7ff5-4ad9-99fe-716478a0dee4" Jan 22 00:44:57.993948 containerd[1953]: time="2026-01-22T00:44:57.993859258Z" level=info msg="RemoveContainer for \"35863310c3d1b3280aa7d0f21f7ae2d6c2332def2f6ab31ee42d65f0c5a4d1d9\"" Jan 22 00:44:58.014488 containerd[1953]: time="2026-01-22T00:44:58.014434654Z" level=info msg="RemoveContainer for \"35863310c3d1b3280aa7d0f21f7ae2d6c2332def2f6ab31ee42d65f0c5a4d1d9\" returns successfully" Jan 22 00:44:59.589442 kubelet[3437]: E0122 00:44:59.589385 3437 controller.go:195] "Failed to update lease" err="Put \"https://172.31.26.54:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-26-54?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 22 00:45:00.032599 kubelet[3437]: E0122 00:45:00.029371 3437 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-65dd85ff-jgxjj" podUID="96b1517d-4481-408c-9294-a20121dee9ff" Jan 22 00:45:01.028851 kubelet[3437]: E0122 00:45:01.028801 3437 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5bd666cb6-pggl9" podUID="3c1a4692-58d4-49e9-ab29-e19d2d7ee1e4" Jan 22 00:45:03.028333 kubelet[3437]: E0122 00:45:03.028287 3437 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" 
pod="calico-apiserver/calico-apiserver-6bf9469c9d-75c6s" podUID="0db24862-6144-45b1-8f39-39a11a3c80dd" Jan 22 00:45:04.029211 kubelet[3437]: E0122 00:45:04.029016 3437 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-4rjzm" podUID="ffc8de62-f696-4cfe-ab23-12f34741b8d0" Jan 22 00:45:05.029161 kubelet[3437]: E0122 00:45:05.029097 3437 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-qvwlf" podUID="9a122a32-e7c8-4162-bccb-4b71d5c37d97"