Jan 14 01:23:59.045296 kernel: Linux version 6.12.65-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Tue Jan 13 22:26:24 -00 2026 Jan 14 01:23:59.045860 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=ef461ed71f713584f576c99df12ffb04dd99b33cd2d16edeb307d0cf2f5b4260 Jan 14 01:23:59.045886 kernel: BIOS-provided physical RAM map: Jan 14 01:23:59.045899 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Jan 14 01:23:59.045912 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000786cdfff] usable Jan 14 01:23:59.045924 kernel: BIOS-e820: [mem 0x00000000786ce000-0x000000007894dfff] reserved Jan 14 01:23:59.045940 kernel: BIOS-e820: [mem 0x000000007894e000-0x000000007895dfff] ACPI data Jan 14 01:23:59.045954 kernel: BIOS-e820: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS Jan 14 01:23:59.045967 kernel: BIOS-e820: [mem 0x00000000789de000-0x000000007c97bfff] usable Jan 14 01:23:59.045981 kernel: BIOS-e820: [mem 0x000000007c97c000-0x000000007c9fffff] reserved Jan 14 01:23:59.045997 kernel: NX (Execute Disable) protection: active Jan 14 01:23:59.046010 kernel: APIC: Static calls initialized Jan 14 01:23:59.046023 kernel: e820: update [mem 0x768c0018-0x768c8e57] usable ==> usable Jan 14 01:23:59.046037 kernel: extended physical RAM map: Jan 14 01:23:59.046054 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable Jan 14 01:23:59.046071 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000768c0017] usable Jan 14 01:23:59.046086 kernel: reserve setup_data: [mem 0x00000000768c0018-0x00000000768c8e57] usable Jan 14 01:23:59.046101 kernel: reserve setup_data: [mem 0x00000000768c8e58-0x00000000786cdfff] usable Jan 14 01:23:59.046116 kernel: reserve setup_data: [mem 0x00000000786ce000-0x000000007894dfff] reserved Jan 14 01:23:59.046203 kernel: reserve setup_data: [mem 0x000000007894e000-0x000000007895dfff] ACPI data Jan 14 01:23:59.046218 kernel: reserve setup_data: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS Jan 14 01:23:59.046233 kernel: reserve setup_data: [mem 0x00000000789de000-0x000000007c97bfff] usable Jan 14 01:23:59.046248 kernel: reserve setup_data: [mem 0x000000007c97c000-0x000000007c9fffff] reserved Jan 14 01:23:59.046263 kernel: efi: EFI v2.7 by EDK II Jan 14 01:23:59.046278 kernel: efi: SMBIOS=0x7886a000 ACPI=0x7895d000 ACPI 2.0=0x7895d014 MEMATTR=0x77015518 Jan 14 01:23:59.046297 kernel: secureboot: Secure boot disabled Jan 14 01:23:59.046312 kernel: SMBIOS 2.7 present. 
Jan 14 01:23:59.046327 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017 Jan 14 01:23:59.046342 kernel: DMI: Memory slots populated: 1/1 Jan 14 01:23:59.046355 kernel: Hypervisor detected: KVM Jan 14 01:23:59.046370 kernel: last_pfn = 0x7c97c max_arch_pfn = 0x400000000 Jan 14 01:23:59.046385 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jan 14 01:23:59.046400 kernel: kvm-clock: using sched offset of 6684170775 cycles Jan 14 01:23:59.046417 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jan 14 01:23:59.046433 kernel: tsc: Detected 2499.996 MHz processor Jan 14 01:23:59.046452 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 14 01:23:59.046468 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 14 01:23:59.046484 kernel: last_pfn = 0x7c97c max_arch_pfn = 0x400000000 Jan 14 01:23:59.046500 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Jan 14 01:23:59.046516 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 14 01:23:59.046538 kernel: Using GB pages for direct mapping Jan 14 01:23:59.046555 kernel: ACPI: Early table checksum verification disabled Jan 14 01:23:59.046571 kernel: ACPI: RSDP 0x000000007895D014 000024 (v02 AMAZON) Jan 14 01:23:59.046588 kernel: ACPI: XSDT 0x000000007895C0E8 00006C (v01 AMAZON AMZNFACP 00000001 01000013) Jan 14 01:23:59.046605 kernel: ACPI: FACP 0x0000000078955000 000114 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001) Jan 14 01:23:59.046622 kernel: ACPI: DSDT 0x0000000078956000 00115A (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001) Jan 14 01:23:59.046639 kernel: ACPI: FACS 0x00000000789D0000 000040 Jan 14 01:23:59.046658 kernel: ACPI: WAET 0x000000007895B000 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001) Jan 14 01:23:59.046672 kernel: ACPI: SLIT 0x000000007895A000 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001) Jan 14 01:23:59.046688 kernel: ACPI: APIC 0x0000000078959000 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001) Jan 14 01:23:59.046705 kernel: ACPI: SRAT 0x0000000078958000 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001) Jan 14 01:23:59.046722 kernel: ACPI: HPET 0x0000000078954000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001) Jan 14 01:23:59.046738 kernel: ACPI: SSDT 0x0000000078953000 000759 (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001) Jan 14 01:23:59.046755 kernel: ACPI: SSDT 0x0000000078952000 00007F (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001) Jan 14 01:23:59.046778 kernel: ACPI: BGRT 0x0000000078951000 000038 (v01 AMAZON AMAZON 00000002 01000013) Jan 14 01:23:59.046791 kernel: ACPI: Reserving FACP table memory at [mem 0x78955000-0x78955113] Jan 14 01:23:59.046806 kernel: ACPI: Reserving DSDT table memory at [mem 0x78956000-0x78957159] Jan 14 01:23:59.046821 kernel: ACPI: Reserving FACS table memory at [mem 0x789d0000-0x789d003f] Jan 14 01:23:59.046833 kernel: ACPI: Reserving WAET table memory at [mem 0x7895b000-0x7895b027] Jan 14 01:23:59.046846 kernel: ACPI: Reserving SLIT table memory at [mem 0x7895a000-0x7895a06b] Jan 14 01:23:59.048219 kernel: ACPI: Reserving APIC table memory at [mem 0x78959000-0x78959075] Jan 14 01:23:59.048244 kernel: ACPI: Reserving SRAT table memory at [mem 0x78958000-0x7895809f] Jan 14 01:23:59.048260 kernel: ACPI: Reserving HPET table memory at [mem 0x78954000-0x78954037] Jan 14 01:23:59.048274 kernel: ACPI: Reserving SSDT table memory at [mem 0x78953000-0x78953758] Jan 14 01:23:59.048290 kernel: ACPI: Reserving SSDT table memory at [mem 
0x78952000-0x7895207e] Jan 14 01:23:59.048305 kernel: ACPI: Reserving BGRT table memory at [mem 0x78951000-0x78951037] Jan 14 01:23:59.048321 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff] Jan 14 01:23:59.048336 kernel: NUMA: Initialized distance table, cnt=1 Jan 14 01:23:59.048354 kernel: NODE_DATA(0) allocated [mem 0x7a8eedc0-0x7a8f5fff] Jan 14 01:23:59.048369 kernel: Zone ranges: Jan 14 01:23:59.048383 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 14 01:23:59.048505 kernel: DMA32 [mem 0x0000000001000000-0x000000007c97bfff] Jan 14 01:23:59.048691 kernel: Normal empty Jan 14 01:23:59.048708 kernel: Device empty Jan 14 01:23:59.048723 kernel: Movable zone start for each node Jan 14 01:23:59.048739 kernel: Early memory node ranges Jan 14 01:23:59.048760 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Jan 14 01:23:59.048774 kernel: node 0: [mem 0x0000000000100000-0x00000000786cdfff] Jan 14 01:23:59.048789 kernel: node 0: [mem 0x00000000789de000-0x000000007c97bfff] Jan 14 01:23:59.048804 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007c97bfff] Jan 14 01:23:59.048819 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 14 01:23:59.048834 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Jan 14 01:23:59.048850 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges Jan 14 01:23:59.048869 kernel: On node 0, zone DMA32: 13956 pages in unavailable ranges Jan 14 01:23:59.048885 kernel: ACPI: PM-Timer IO Port: 0xb008 Jan 14 01:23:59.048900 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jan 14 01:23:59.048915 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23 Jan 14 01:23:59.048931 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jan 14 01:23:59.048946 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 14 01:23:59.048961 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jan 14 01:23:59.048976 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jan 14 01:23:59.048994 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 14 01:23:59.049009 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jan 14 01:23:59.049024 kernel: TSC deadline timer available Jan 14 01:23:59.049039 kernel: CPU topo: Max. logical packages: 1 Jan 14 01:23:59.049054 kernel: CPU topo: Max. logical dies: 1 Jan 14 01:23:59.049069 kernel: CPU topo: Max. dies per package: 1 Jan 14 01:23:59.049084 kernel: CPU topo: Max. threads per core: 2 Jan 14 01:23:59.049107 kernel: CPU topo: Num. cores per package: 1 Jan 14 01:23:59.049121 kernel: CPU topo: Num. 
threads per package: 2 Jan 14 01:23:59.049137 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs Jan 14 01:23:59.049151 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jan 14 01:23:59.051215 kernel: [mem 0x7ca00000-0xffffffff] available for PCI devices Jan 14 01:23:59.051235 kernel: Booting paravirtualized kernel on KVM Jan 14 01:23:59.051251 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 14 01:23:59.051266 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Jan 14 01:23:59.051421 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576 Jan 14 01:23:59.051440 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152 Jan 14 01:23:59.051455 kernel: pcpu-alloc: [0] 0 1 Jan 14 01:23:59.051470 kernel: kvm-guest: PV spinlocks enabled Jan 14 01:23:59.051484 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jan 14 01:23:59.051502 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=ef461ed71f713584f576c99df12ffb04dd99b33cd2d16edeb307d0cf2f5b4260 Jan 14 01:23:59.051521 kernel: random: crng init done Jan 14 01:23:59.051536 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 14 01:23:59.051550 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jan 14 01:23:59.051565 kernel: Fallback order for Node 0: 0 Jan 14 01:23:59.051580 kernel: Built 1 zonelists, mobility grouping on. Total pages: 509451 Jan 14 01:23:59.051595 kernel: Policy zone: DMA32 Jan 14 01:23:59.051625 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 14 01:23:59.051640 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 14 01:23:59.051655 kernel: Kernel/User page tables isolation: enabled Jan 14 01:23:59.051674 kernel: ftrace: allocating 40128 entries in 157 pages Jan 14 01:23:59.051690 kernel: ftrace: allocated 157 pages with 5 groups Jan 14 01:23:59.051705 kernel: Dynamic Preempt: voluntary Jan 14 01:23:59.051720 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 14 01:23:59.051736 kernel: rcu: RCU event tracing is enabled. Jan 14 01:23:59.051751 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 14 01:23:59.051766 kernel: Trampoline variant of Tasks RCU enabled. Jan 14 01:23:59.051784 kernel: Rude variant of Tasks RCU enabled. Jan 14 01:23:59.051799 kernel: Tracing variant of Tasks RCU enabled. Jan 14 01:23:59.051813 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 14 01:23:59.051828 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 14 01:23:59.051843 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 14 01:23:59.051862 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 14 01:23:59.051878 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. 
Jan 14 01:23:59.051893 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Jan 14 01:23:59.051909 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 14 01:23:59.051924 kernel: Console: colour dummy device 80x25 Jan 14 01:23:59.051939 kernel: printk: legacy console [tty0] enabled Jan 14 01:23:59.051954 kernel: printk: legacy console [ttyS0] enabled Jan 14 01:23:59.051972 kernel: ACPI: Core revision 20240827 Jan 14 01:23:59.051987 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns Jan 14 01:23:59.052003 kernel: APIC: Switch to symmetric I/O mode setup Jan 14 01:23:59.052018 kernel: x2apic enabled Jan 14 01:23:59.052034 kernel: APIC: Switched APIC routing to: physical x2apic Jan 14 01:23:59.052049 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns Jan 14 01:23:59.052065 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499996) Jan 14 01:23:59.052084 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Jan 14 01:23:59.052099 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4 Jan 14 01:23:59.052113 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 14 01:23:59.052127 kernel: Spectre V2 : Mitigation: Retpolines Jan 14 01:23:59.052142 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Jan 14 01:23:59.052156 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! Jan 14 01:23:59.052182 kernel: RETBleed: Vulnerable Jan 14 01:23:59.052196 kernel: Speculative Store Bypass: Vulnerable Jan 14 01:23:59.052211 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode Jan 14 01:23:59.052225 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Jan 14 01:23:59.052302 kernel: GDS: Unknown: Dependent on hypervisor status Jan 14 01:23:59.052320 kernel: active return thunk: its_return_thunk Jan 14 01:23:59.052335 kernel: ITS: Mitigation: Aligned branch/return thunks Jan 14 01:23:59.052349 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 14 01:23:59.052364 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 14 01:23:59.052379 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 14 01:23:59.052394 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers' Jan 14 01:23:59.052408 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR' Jan 14 01:23:59.052423 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Jan 14 01:23:59.052437 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Jan 14 01:23:59.052456 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Jan 14 01:23:59.052470 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers' Jan 14 01:23:59.052485 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 14 01:23:59.052499 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64 Jan 14 01:23:59.052514 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64 Jan 14 01:23:59.052528 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64 Jan 14 01:23:59.052543 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512 Jan 14 01:23:59.052557 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024 Jan 14 01:23:59.052571 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8 Jan 14 
01:23:59.052586 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format. Jan 14 01:23:59.052600 kernel: Freeing SMP alternatives memory: 32K Jan 14 01:23:59.052617 kernel: pid_max: default: 32768 minimum: 301 Jan 14 01:23:59.052632 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Jan 14 01:23:59.052646 kernel: landlock: Up and running. Jan 14 01:23:59.052660 kernel: SELinux: Initializing. Jan 14 01:23:59.052675 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jan 14 01:23:59.052690 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jan 14 01:23:59.052704 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7) Jan 14 01:23:59.052720 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. Jan 14 01:23:59.052735 kernel: signal: max sigframe size: 3632 Jan 14 01:23:59.052750 kernel: rcu: Hierarchical SRCU implementation. Jan 14 01:23:59.052768 kernel: rcu: Max phase no-delay instances is 400. Jan 14 01:23:59.052783 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Jan 14 01:23:59.052798 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jan 14 01:23:59.052813 kernel: smp: Bringing up secondary CPUs ... Jan 14 01:23:59.052828 kernel: smpboot: x86: Booting SMP configuration: Jan 14 01:23:59.052843 kernel: .... node #0, CPUs: #1 Jan 14 01:23:59.052860 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. Jan 14 01:23:59.052879 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Jan 14 01:23:59.052894 kernel: smp: Brought up 1 node, 2 CPUs Jan 14 01:23:59.052910 kernel: smpboot: Total of 2 processors activated (9999.98 BogoMIPS) Jan 14 01:23:59.052926 kernel: Memory: 1924436K/2037804K available (14336K kernel code, 2445K rwdata, 31644K rodata, 15536K init, 2500K bss, 108804K reserved, 0K cma-reserved) Jan 14 01:23:59.052941 kernel: devtmpfs: initialized Jan 14 01:23:59.052956 kernel: x86/mm: Memory block size: 128MB Jan 14 01:23:59.052974 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7895e000-0x789ddfff] (524288 bytes) Jan 14 01:23:59.052989 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 14 01:23:59.053004 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 14 01:23:59.053019 kernel: pinctrl core: initialized pinctrl subsystem Jan 14 01:23:59.053034 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 14 01:23:59.053049 kernel: audit: initializing netlink subsys (disabled) Jan 14 01:23:59.053064 kernel: audit: type=2000 audit(1768353835.000:1): state=initialized audit_enabled=0 res=1 Jan 14 01:23:59.053082 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 14 01:23:59.053098 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 14 01:23:59.053113 kernel: cpuidle: using governor menu Jan 14 01:23:59.053128 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 14 01:23:59.053143 kernel: dca service started, version 1.12.1 Jan 14 01:23:59.060810 kernel: PCI: Using configuration type 1 for base access Jan 14 01:23:59.060856 kernel: kprobes: kprobe jump-optimization is enabled. 
All kprobes are optimized if possible. Jan 14 01:23:59.060881 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 14 01:23:59.060898 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 14 01:23:59.060913 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 14 01:23:59.060927 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 14 01:23:59.060944 kernel: ACPI: Added _OSI(Module Device) Jan 14 01:23:59.060962 kernel: ACPI: Added _OSI(Processor Device) Jan 14 01:23:59.060977 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 14 01:23:59.060999 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded Jan 14 01:23:59.061016 kernel: ACPI: Interpreter enabled Jan 14 01:23:59.061033 kernel: ACPI: PM: (supports S0 S5) Jan 14 01:23:59.061050 kernel: ACPI: Using IOAPIC for interrupt routing Jan 14 01:23:59.061068 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 14 01:23:59.061086 kernel: PCI: Using E820 reservations for host bridge windows Jan 14 01:23:59.061104 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Jan 14 01:23:59.061124 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 14 01:23:59.061478 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Jan 14 01:23:59.061708 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Jan 14 01:23:59.061917 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Jan 14 01:23:59.061940 kernel: acpiphp: Slot [3] registered Jan 14 01:23:59.061958 kernel: acpiphp: Slot [4] registered Jan 14 01:23:59.061982 kernel: acpiphp: Slot [5] registered Jan 14 01:23:59.062000 kernel: acpiphp: Slot [6] registered Jan 14 01:23:59.062017 kernel: acpiphp: Slot [7] registered Jan 14 01:23:59.062034 kernel: acpiphp: Slot [8] registered Jan 14 01:23:59.062050 kernel: acpiphp: Slot [9] registered Jan 14 01:23:59.062063 kernel: acpiphp: Slot [10] registered Jan 14 01:23:59.062081 kernel: acpiphp: Slot [11] registered Jan 14 01:23:59.062097 kernel: acpiphp: Slot [12] registered Jan 14 01:23:59.062118 kernel: acpiphp: Slot [13] registered Jan 14 01:23:59.062347 kernel: acpiphp: Slot [14] registered Jan 14 01:23:59.062366 kernel: acpiphp: Slot [15] registered Jan 14 01:23:59.062383 kernel: acpiphp: Slot [16] registered Jan 14 01:23:59.062400 kernel: acpiphp: Slot [17] registered Jan 14 01:23:59.062417 kernel: acpiphp: Slot [18] registered Jan 14 01:23:59.062434 kernel: acpiphp: Slot [19] registered Jan 14 01:23:59.062456 kernel: acpiphp: Slot [20] registered Jan 14 01:23:59.062470 kernel: acpiphp: Slot [21] registered Jan 14 01:23:59.062485 kernel: acpiphp: Slot [22] registered Jan 14 01:23:59.062500 kernel: acpiphp: Slot [23] registered Jan 14 01:23:59.062514 kernel: acpiphp: Slot [24] registered Jan 14 01:23:59.062530 kernel: acpiphp: Slot [25] registered Jan 14 01:23:59.062545 kernel: acpiphp: Slot [26] registered Jan 14 01:23:59.062560 kernel: acpiphp: Slot [27] registered Jan 14 01:23:59.062581 kernel: acpiphp: Slot [28] registered Jan 14 01:23:59.062597 kernel: acpiphp: Slot [29] registered Jan 14 01:23:59.062613 kernel: acpiphp: Slot [30] registered Jan 14 01:23:59.062628 kernel: acpiphp: Slot [31] registered Jan 14 01:23:59.062644 kernel: PCI host bridge to bus 0000:00 Jan 14 01:23:59.063046 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 14 
01:23:59.063285 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jan 14 01:23:59.063484 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 14 01:23:59.063658 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window] Jan 14 01:23:59.063841 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x2000ffffffff window] Jan 14 01:23:59.064247 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 14 01:23:59.064466 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint Jan 14 01:23:59.064665 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint Jan 14 01:23:59.064857 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000 conventional PCI endpoint Jan 14 01:23:59.065045 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Jan 14 01:23:59.065242 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff Jan 14 01:23:59.065426 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff Jan 14 01:23:59.065621 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff Jan 14 01:23:59.067511 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff Jan 14 01:23:59.067737 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff Jan 14 01:23:59.067997 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff Jan 14 01:23:59.068214 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000 conventional PCI endpoint Jan 14 01:23:59.068401 kernel: pci 0000:00:03.0: BAR 0 [mem 0x80000000-0x803fffff pref] Jan 14 01:23:59.068622 kernel: pci 0000:00:03.0: ROM [mem 0xffff0000-0xffffffff pref] Jan 14 01:23:59.068817 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jan 14 01:23:59.069008 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 PCIe Endpoint Jan 14 01:23:59.069410 kernel: pci 0000:00:04.0: BAR 0 [mem 0x80404000-0x80407fff] Jan 14 01:23:59.069619 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 PCIe Endpoint Jan 14 01:23:59.069958 kernel: pci 0000:00:05.0: BAR 0 [mem 0x80400000-0x80403fff] Jan 14 01:23:59.069985 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jan 14 01:23:59.070002 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jan 14 01:23:59.070019 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jan 14 01:23:59.070035 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jan 14 01:23:59.070051 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Jan 14 01:23:59.070066 kernel: iommu: Default domain type: Translated Jan 14 01:23:59.070086 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 14 01:23:59.070102 kernel: efivars: Registered efivars operations Jan 14 01:23:59.070119 kernel: PCI: Using ACPI for IRQ routing Jan 14 01:23:59.070135 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 14 01:23:59.070149 kernel: e820: reserve RAM buffer [mem 0x768c0018-0x77ffffff] Jan 14 01:23:59.071397 kernel: e820: reserve RAM buffer [mem 0x786ce000-0x7bffffff] Jan 14 01:23:59.071417 kernel: e820: reserve RAM buffer [mem 0x7c97c000-0x7fffffff] Jan 14 01:23:59.071640 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device Jan 14 01:23:59.071825 kernel: pci 0000:00:03.0: vgaarb: bridge control possible Jan 14 01:23:59.072007 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jan 14 01:23:59.072027 kernel: vgaarb: loaded Jan 14 01:23:59.072044 kernel: hpet0: at 
MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0 Jan 14 01:23:59.072061 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter Jan 14 01:23:59.072079 kernel: clocksource: Switched to clocksource kvm-clock Jan 14 01:23:59.072100 kernel: VFS: Disk quotas dquot_6.6.0 Jan 14 01:23:59.072116 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 14 01:23:59.072133 kernel: pnp: PnP ACPI init Jan 14 01:23:59.072150 kernel: pnp: PnP ACPI: found 5 devices Jan 14 01:23:59.072189 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 14 01:23:59.072206 kernel: NET: Registered PF_INET protocol family Jan 14 01:23:59.072224 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 14 01:23:59.072246 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Jan 14 01:23:59.072262 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 14 01:23:59.072280 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Jan 14 01:23:59.072297 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Jan 14 01:23:59.072314 kernel: TCP: Hash tables configured (established 16384 bind 16384) Jan 14 01:23:59.072332 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Jan 14 01:23:59.072349 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Jan 14 01:23:59.072369 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 14 01:23:59.072386 kernel: NET: Registered PF_XDP protocol family Jan 14 01:23:59.072587 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 14 01:23:59.072745 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 14 01:23:59.072901 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 14 01:23:59.073057 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] Jan 14 01:23:59.073231 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x2000ffffffff window] Jan 14 01:23:59.073415 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Jan 14 01:23:59.073436 kernel: PCI: CLS 0 bytes, default 64 Jan 14 01:23:59.073452 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jan 14 01:23:59.073469 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns Jan 14 01:23:59.073486 kernel: clocksource: Switched to clocksource tsc Jan 14 01:23:59.073502 kernel: Initialise system trusted keyrings Jan 14 01:23:59.073521 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Jan 14 01:23:59.073537 kernel: Key type asymmetric registered Jan 14 01:23:59.073553 kernel: Asymmetric key parser 'x509' registered Jan 14 01:23:59.073569 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jan 14 01:23:59.073585 kernel: io scheduler mq-deadline registered Jan 14 01:23:59.073602 kernel: io scheduler kyber registered Jan 14 01:23:59.073618 kernel: io scheduler bfq registered Jan 14 01:23:59.073634 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 14 01:23:59.073652 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 14 01:23:59.073668 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 14 01:23:59.073685 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 14 01:23:59.073775 kernel: i8042: Warning: Keylock active Jan 14 
01:23:59.073796 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 14 01:23:59.073812 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 14 01:23:59.074091 kernel: rtc_cmos 00:00: RTC can wake from S4 Jan 14 01:23:59.074280 kernel: rtc_cmos 00:00: registered as rtc0 Jan 14 01:23:59.074442 kernel: rtc_cmos 00:00: setting system clock to 2026-01-14T01:23:55 UTC (1768353835) Jan 14 01:23:59.074604 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram Jan 14 01:23:59.074646 kernel: intel_pstate: CPU model not supported Jan 14 01:23:59.074666 kernel: efifb: probing for efifb Jan 14 01:23:59.074682 kernel: efifb: framebuffer at 0x80000000, using 1876k, total 1875k Jan 14 01:23:59.074702 kernel: efifb: mode is 800x600x32, linelength=3200, pages=1 Jan 14 01:23:59.074718 kernel: efifb: scrolling: redraw Jan 14 01:23:59.074736 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Jan 14 01:23:59.074753 kernel: Console: switching to colour frame buffer device 100x37 Jan 14 01:23:59.074940 kernel: fb0: EFI VGA frame buffer device Jan 14 01:23:59.074957 kernel: pstore: Using crash dump compression: deflate Jan 14 01:23:59.074974 kernel: pstore: Registered efi_pstore as persistent store backend Jan 14 01:23:59.074994 kernel: NET: Registered PF_INET6 protocol family Jan 14 01:23:59.075011 kernel: Segment Routing with IPv6 Jan 14 01:23:59.075027 kernel: In-situ OAM (IOAM) with IPv6 Jan 14 01:23:59.075043 kernel: NET: Registered PF_PACKET protocol family Jan 14 01:23:59.075060 kernel: Key type dns_resolver registered Jan 14 01:23:59.075076 kernel: IPI shorthand broadcast: enabled Jan 14 01:23:59.075093 kernel: sched_clock: Marking stable (1393001932, 147055273)->(1611065330, -71008125) Jan 14 01:23:59.075112 kernel: registered taskstats version 1 Jan 14 01:23:59.075129 kernel: Loading compiled-in X.509 certificates Jan 14 01:23:59.075145 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.65-flatcar: e43fcdb17feb86efe6ca4b76910b93467fb95f4f' Jan 14 01:23:59.075177 kernel: Demotion targets for Node 0: null Jan 14 01:23:59.075193 kernel: Key type .fscrypt registered Jan 14 01:23:59.075209 kernel: Key type fscrypt-provisioning registered Jan 14 01:23:59.075224 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 14 01:23:59.075256 kernel: ima: Allocated hash algorithm: sha1 Jan 14 01:23:59.075272 kernel: ima: No architecture policies found Jan 14 01:23:59.075288 kernel: clk: Disabling unused clocks Jan 14 01:23:59.075305 kernel: Freeing unused kernel image (initmem) memory: 15536K Jan 14 01:23:59.075321 kernel: Write protecting the kernel read-only data: 47104k Jan 14 01:23:59.075344 kernel: Freeing unused kernel image (rodata/data gap) memory: 1124K Jan 14 01:23:59.075360 kernel: Run /init as init process Jan 14 01:23:59.075377 kernel: with arguments: Jan 14 01:23:59.075393 kernel: /init Jan 14 01:23:59.075409 kernel: with environment: Jan 14 01:23:59.075425 kernel: HOME=/ Jan 14 01:23:59.075441 kernel: TERM=linux Jan 14 01:23:59.075604 kernel: nvme nvme0: pci function 0000:00:04.0 Jan 14 01:23:59.075630 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Jan 14 01:23:59.075749 kernel: nvme nvme0: 2/0/0 default/read/poll queues Jan 14 01:23:59.075771 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 14 01:23:59.075788 kernel: GPT:25804799 != 33554431 Jan 14 01:23:59.075805 kernel: GPT:Alternate GPT header not at the end of the disk. 
Jan 14 01:23:59.075824 kernel: GPT:25804799 != 33554431 Jan 14 01:23:59.075840 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 14 01:23:59.075856 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jan 14 01:23:59.075873 kernel: SCSI subsystem initialized Jan 14 01:23:59.075890 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 14 01:23:59.075907 kernel: device-mapper: uevent: version 1.0.3 Jan 14 01:23:59.075924 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jan 14 01:23:59.075944 kernel: device-mapper: verity: sha256 using shash "sha256-generic" Jan 14 01:23:59.075960 kernel: raid6: avx512x4 gen() 17816 MB/s Jan 14 01:23:59.075977 kernel: raid6: avx512x2 gen() 17800 MB/s Jan 14 01:23:59.075994 kernel: raid6: avx512x1 gen() 17777 MB/s Jan 14 01:23:59.076011 kernel: raid6: avx2x4 gen() 17544 MB/s Jan 14 01:23:59.076025 kernel: raid6: avx2x2 gen() 17362 MB/s Jan 14 01:23:59.076041 kernel: raid6: avx2x1 gen() 13586 MB/s Jan 14 01:23:59.076060 kernel: raid6: using algorithm avx512x4 gen() 17816 MB/s Jan 14 01:23:59.076077 kernel: raid6: .... xor() 7226 MB/s, rmw enabled Jan 14 01:23:59.076094 kernel: raid6: using avx512x2 recovery algorithm Jan 14 01:23:59.076110 kernel: xor: automatically using best checksumming function avx Jan 14 01:23:59.076127 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 14 01:23:59.076144 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 14 01:23:59.076178 kernel: BTRFS: device fsid cd6116b6-e1b6-44f4-b1e2-5e7c5565b295 devid 1 transid 36 /dev/mapper/usr (254:0) scanned by mount (152) Jan 14 01:23:59.076199 kernel: BTRFS info (device dm-0): first mount of filesystem cd6116b6-e1b6-44f4-b1e2-5e7c5565b295 Jan 14 01:23:59.076216 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 14 01:23:59.076232 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jan 14 01:23:59.076259 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 14 01:23:59.076276 kernel: BTRFS info (device dm-0): enabling free space tree Jan 14 01:23:59.076294 kernel: loop: module loaded Jan 14 01:23:59.076311 kernel: loop0: detected capacity change from 0 to 100544 Jan 14 01:23:59.076330 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 14 01:23:59.076348 systemd[1]: Successfully made /usr/ read-only. Jan 14 01:23:59.076368 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 14 01:23:59.076386 systemd[1]: Detected virtualization amazon. Jan 14 01:23:59.076401 systemd[1]: Detected architecture x86-64. Jan 14 01:23:59.076418 systemd[1]: Running in initrd. Jan 14 01:23:59.076437 systemd[1]: No hostname configured, using default hostname. Jan 14 01:23:59.076454 systemd[1]: Hostname set to <localhost>. Jan 14 01:23:59.076473 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Jan 14 01:23:59.076503 systemd[1]: Queued start job for default target initrd.target. Jan 14 01:23:59.076526 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. 
Jan 14 01:23:59.076547 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 14 01:23:59.076584 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 14 01:23:59.076602 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 14 01:23:59.076619 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 14 01:23:59.076640 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 14 01:23:59.076660 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 14 01:23:59.076679 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 14 01:23:59.076701 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 14 01:23:59.076721 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jan 14 01:23:59.076740 systemd[1]: Reached target paths.target - Path Units. Jan 14 01:23:59.076760 systemd[1]: Reached target slices.target - Slice Units. Jan 14 01:23:59.076778 systemd[1]: Reached target swap.target - Swaps. Jan 14 01:23:59.076798 systemd[1]: Reached target timers.target - Timer Units. Jan 14 01:23:59.076816 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 14 01:23:59.076838 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 14 01:23:59.076857 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jan 14 01:23:59.076876 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 14 01:23:59.076895 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jan 14 01:23:59.076915 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 14 01:23:59.076933 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 14 01:23:59.076956 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 14 01:23:59.076974 systemd[1]: Reached target sockets.target - Socket Units. Jan 14 01:23:59.076994 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 14 01:23:59.077014 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 14 01:23:59.077033 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 14 01:23:59.077052 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 14 01:23:59.077072 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jan 14 01:23:59.077095 systemd[1]: Starting systemd-fsck-usr.service... Jan 14 01:23:59.077114 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 14 01:23:59.077132 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 14 01:23:59.077152 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 14 01:23:59.077194 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 14 01:23:59.077214 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 14 01:23:59.077233 systemd[1]: Finished systemd-fsck-usr.service. 
Jan 14 01:23:59.077253 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 14 01:23:59.077309 systemd-journald[287]: Collecting audit messages is enabled. Jan 14 01:23:59.077354 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 14 01:23:59.077375 kernel: audit: type=1130 audit(1768353839.046:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:23:59.077395 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 14 01:23:59.077415 systemd-journald[287]: Journal started Jan 14 01:23:59.077454 systemd-journald[287]: Runtime Journal (/run/log/journal/ec227a1800477e619dbc0b4a8584ccd0) is 4.7M, max 38M, 33.2M free. Jan 14 01:23:59.046000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:23:59.082183 systemd[1]: Started systemd-journald.service - Journal Service. Jan 14 01:23:59.091474 kernel: audit: type=1130 audit(1768353839.083:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:23:59.083000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:23:59.097187 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 14 01:23:59.099873 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 14 01:23:59.104198 kernel: Bridge firewalling registered Jan 14 01:23:59.105691 systemd-modules-load[291]: Inserted module 'br_netfilter' Jan 14 01:23:59.113131 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 14 01:23:59.120000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:23:59.137994 kernel: audit: type=1130 audit(1768353839.120:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:23:59.138071 kernel: audit: type=1130 audit(1768353839.134:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:23:59.134000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:23:59.135099 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 14 01:23:59.146415 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 14 01:23:59.151293 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Jan 14 01:23:59.153608 systemd-tmpfiles[305]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jan 14 01:23:59.165097 kernel: audit: type=1130 audit(1768353839.154:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:23:59.154000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:23:59.159759 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 14 01:23:59.176554 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 14 01:23:59.177000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:23:59.205662 kernel: audit: type=1130 audit(1768353839.177:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:23:59.213991 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 14 01:23:59.214000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:23:59.233315 kernel: audit: type=1130 audit(1768353839.214:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:23:59.232000 audit: BPF prog-id=6 op=LOAD Jan 14 01:23:59.236200 kernel: audit: type=1334 audit(1768353839.232:9): prog-id=6 op=LOAD Jan 14 01:23:59.234393 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 14 01:23:59.245771 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 14 01:23:59.246000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:23:59.261502 kernel: audit: type=1130 audit(1768353839.246:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:23:59.263607 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 14 01:23:59.296119 dracut-cmdline[327]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=ef461ed71f713584f576c99df12ffb04dd99b33cd2d16edeb307d0cf2f5b4260 Jan 14 01:23:59.380939 systemd-resolved[324]: Positive Trust Anchors: Jan 14 01:23:59.380958 systemd-resolved[324]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 14 01:23:59.380964 systemd-resolved[324]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Jan 14 01:23:59.381028 systemd-resolved[324]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 14 01:23:59.453429 systemd-resolved[324]: Defaulting to hostname 'linux'. Jan 14 01:23:59.455800 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 14 01:23:59.455000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:23:59.457092 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 14 01:23:59.463832 kernel: audit: type=1130 audit(1768353839.455:11): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:23:59.560203 kernel: Loading iSCSI transport class v2.0-870. Jan 14 01:23:59.651188 kernel: iscsi: registered transport (tcp) Jan 14 01:23:59.700227 kernel: iscsi: registered transport (qla4xxx) Jan 14 01:23:59.700326 kernel: QLogic iSCSI HBA Driver Jan 14 01:23:59.728187 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 14 01:23:59.748550 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 14 01:23:59.756505 kernel: audit: type=1130 audit(1768353839.748:12): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:23:59.748000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:23:59.751538 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 14 01:23:59.799708 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 14 01:23:59.799000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:23:59.804344 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 14 01:23:59.807425 kernel: audit: type=1130 audit(1768353839.799:13): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:23:59.807791 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 14 01:23:59.856296 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. 
Jan 14 01:23:59.856000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:23:59.861356 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 14 01:23:59.872324 kernel: audit: type=1130 audit(1768353839.856:14): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:23:59.872364 kernel: audit: type=1334 audit(1768353839.857:15): prog-id=7 op=LOAD Jan 14 01:23:59.872399 kernel: audit: type=1334 audit(1768353839.858:16): prog-id=8 op=LOAD Jan 14 01:23:59.857000 audit: BPF prog-id=7 op=LOAD Jan 14 01:23:59.858000 audit: BPF prog-id=8 op=LOAD Jan 14 01:23:59.908484 systemd-udevd[572]: Using default interface naming scheme 'v257'. Jan 14 01:23:59.927276 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 14 01:23:59.929000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:23:59.932095 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 14 01:23:59.952720 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 14 01:23:59.953000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:23:59.955000 audit: BPF prog-id=9 op=LOAD Jan 14 01:23:59.958079 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 14 01:23:59.970194 dracut-pre-trigger[657]: rd.md=0: removing MD RAID activation Jan 14 01:24:00.011209 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 14 01:24:00.011000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:24:00.015362 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 14 01:24:00.025063 systemd-networkd[668]: lo: Link UP Jan 14 01:24:00.025073 systemd-networkd[668]: lo: Gained carrier Jan 14 01:24:00.025000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:24:00.025742 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 14 01:24:00.026712 systemd[1]: Reached target network.target - Network. Jan 14 01:24:00.098464 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 14 01:24:00.099000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:24:00.102290 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... 
Jan 14 01:24:00.306434 kernel: ena 0000:00:05.0: ENA device version: 0.10 Jan 14 01:24:00.309894 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Jan 14 01:24:00.317179 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy. Jan 14 01:24:00.350661 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80400000, mac addr 06:62:5c:fb:d0:8b Jan 14 01:24:00.353180 kernel: cryptd: max_cpu_qlen set to 1000 Jan 14 01:24:00.367094 (udev-worker)[717]: Network interface NamePolicy= disabled on kernel command line. Jan 14 01:24:00.379783 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 14 01:24:00.380254 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 14 01:24:00.379000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:24:00.380986 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 14 01:24:00.383235 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 14 01:24:00.460697 systemd-networkd[668]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 14 01:24:00.460789 systemd-networkd[668]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 14 01:24:00.493223 systemd-networkd[668]: eth0: Link UP Jan 14 01:24:00.494338 systemd-networkd[668]: eth0: Gained carrier Jan 14 01:24:00.494364 systemd-networkd[668]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 14 01:24:00.558444 systemd-networkd[668]: eth0: DHCPv4 address 172.31.18.46/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jan 14 01:24:00.568617 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 14 01:24:00.687000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:24:00.892000 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input2 Jan 14 01:24:00.988194 kernel: AES CTR mode by8 optimization enabled Jan 14 01:24:01.102279 kernel: nvme nvme0: using unchecked data buffer Jan 14 01:24:01.675793 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. Jan 14 01:24:01.706871 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. Jan 14 01:24:01.707824 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 14 01:24:01.707000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:24:01.721966 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. Jan 14 01:24:01.738684 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jan 14 01:24:01.740182 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 14 01:24:01.741431 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. 
Jan 14 01:24:01.743243 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 14 01:24:01.745322 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 14 01:24:01.747553 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 14 01:24:01.810212 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 14 01:24:01.811000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:24:01.823077 disk-uuid[897]: Primary Header is updated. Jan 14 01:24:01.823077 disk-uuid[897]: Secondary Entries is updated. Jan 14 01:24:01.823077 disk-uuid[897]: Secondary Header is updated. Jan 14 01:24:02.377402 systemd-networkd[668]: eth0: Gained IPv6LL Jan 14 01:24:03.012708 disk-uuid[898]: Warning: The kernel is still using the old partition table. Jan 14 01:24:03.012708 disk-uuid[898]: The new table will be used at the next reboot or after you Jan 14 01:24:03.012708 disk-uuid[898]: run partprobe(8) or kpartx(8) Jan 14 01:24:03.012708 disk-uuid[898]: The operation has completed successfully. Jan 14 01:24:03.025404 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 14 01:24:03.024000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:24:03.025000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:24:03.025548 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 14 01:24:03.036369 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 14 01:24:03.092190 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1074) Jan 14 01:24:03.096399 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 37f804f9-71c0-44d1-975c-4a397de322e7 Jan 14 01:24:03.096470 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jan 14 01:24:03.147254 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jan 14 01:24:03.147338 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Jan 14 01:24:03.157190 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 37f804f9-71c0-44d1-975c-4a397de322e7 Jan 14 01:24:03.157810 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 14 01:24:03.157000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:24:03.159915 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 14 01:24:04.395499 ignition[1093]: Ignition 2.24.0 Jan 14 01:24:04.395518 ignition[1093]: Stage: fetch-offline Jan 14 01:24:04.395632 ignition[1093]: no configs at "/usr/lib/ignition/base.d" Jan 14 01:24:04.397641 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
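[Editor's note] The disk-uuid warning above says the kernel still holds the old partition table and points at partprobe(8) or kpartx(8). Below is a minimal sketch of triggering that re-read from user space; the device path /dev/nvme0n1 is an assumption inferred from the nvme0n1p* partitions named later in this log, and partprobe must be on PATH with root privileges.

    # Sketch: ask the kernel to re-read the updated GPT, as the disk-uuid hint suggests.
    # /dev/nvme0n1 is assumed (parent of nvme0n1p6/nvme0n1p9 seen later in the log).
    import subprocess

    def reread_partition_table(device: str = "/dev/nvme0n1") -> None:
        # partprobe informs the kernel about partition table changes without a reboot.
        subprocess.run(["partprobe", device], check=True)

    if __name__ == "__main__":
        reread_partition_table()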
Jan 14 01:24:04.405800 kernel: kauditd_printk_skb: 13 callbacks suppressed Jan 14 01:24:04.405838 kernel: audit: type=1130 audit(1768353844.397:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:24:04.397000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:24:04.395646 ignition[1093]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 14 01:24:04.401371 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jan 14 01:24:04.395960 ignition[1093]: Ignition finished successfully Jan 14 01:24:04.428478 ignition[1099]: Ignition 2.24.0 Jan 14 01:24:04.428493 ignition[1099]: Stage: fetch Jan 14 01:24:04.428763 ignition[1099]: no configs at "/usr/lib/ignition/base.d" Jan 14 01:24:04.428779 ignition[1099]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 14 01:24:04.428884 ignition[1099]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 14 01:24:04.470067 ignition[1099]: PUT result: OK Jan 14 01:24:04.474034 ignition[1099]: parsed url from cmdline: "" Jan 14 01:24:04.474045 ignition[1099]: no config URL provided Jan 14 01:24:04.474055 ignition[1099]: reading system config file "/usr/lib/ignition/user.ign" Jan 14 01:24:04.474076 ignition[1099]: no config at "/usr/lib/ignition/user.ign" Jan 14 01:24:04.474119 ignition[1099]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 14 01:24:04.475475 ignition[1099]: PUT result: OK Jan 14 01:24:04.475537 ignition[1099]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Jan 14 01:24:04.477244 ignition[1099]: GET result: OK Jan 14 01:24:04.477389 ignition[1099]: parsing config with SHA512: 62ba59d5c7f0c5afedf8b9d5e3bedcaea199d55ad716f99e2943ca616e2dc8347fce21ed102e970d3947fb0ceb3749549652b5617241fbd92e413c0c7c8afbc0 Jan 14 01:24:04.484404 unknown[1099]: fetched base config from "system" Jan 14 01:24:04.484999 unknown[1099]: fetched base config from "system" Jan 14 01:24:04.485014 unknown[1099]: fetched user config from "aws" Jan 14 01:24:04.485565 ignition[1099]: fetch: fetch complete Jan 14 01:24:04.485572 ignition[1099]: fetch: fetch passed Jan 14 01:24:04.485635 ignition[1099]: Ignition finished successfully Jan 14 01:24:04.488378 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 14 01:24:04.495390 kernel: audit: type=1130 audit(1768353844.487:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:24:04.487000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:24:04.492348 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
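[Editor's note] The fetch stage above first PUTs to the IMDSv2 token endpoint, then GETs 2019-10-01/user-data with that token, and finally reports a SHA-512 of the parsed config. A rough Python sketch of that sequence follows, purely for illustration: Ignition itself is written in Go, and the X-aws-ec2-metadata-token header names are the standard IMDSv2 ones, assumed here rather than taken from this log.

    # Sketch of the metadata flow logged above: obtain an IMDSv2 session token,
    # fetch user-data with it, and compute the SHA-512 digest Ignition prints.
    import hashlib
    import urllib.request

    IMDS = "http://169.254.169.254"

    def fetch_user_data() -> bytes:
        token_req = urllib.request.Request(
            f"{IMDS}/latest/api/token",
            method="PUT",
            headers={"X-aws-ec2-metadata-token-ttl-seconds": "21600"},
        )
        with urllib.request.urlopen(token_req, timeout=5) as resp:
            token = resp.read().decode()

        data_req = urllib.request.Request(
            f"{IMDS}/2019-10-01/user-data",
            headers={"X-aws-ec2-metadata-token": token},
        )
        with urllib.request.urlopen(data_req, timeout=5) as resp:
            return resp.read()

    if __name__ == "__main__":
        config = fetch_user_data()
        print("config SHA512:", hashlib.sha512(config).hexdigest())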
Jan 14 01:24:04.523257 ignition[1107]: Ignition 2.24.0 Jan 14 01:24:04.523273 ignition[1107]: Stage: kargs Jan 14 01:24:04.523581 ignition[1107]: no configs at "/usr/lib/ignition/base.d" Jan 14 01:24:04.523593 ignition[1107]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 14 01:24:04.523703 ignition[1107]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 14 01:24:04.524714 ignition[1107]: PUT result: OK Jan 14 01:24:04.528767 ignition[1107]: kargs: kargs passed Jan 14 01:24:04.528850 ignition[1107]: Ignition finished successfully Jan 14 01:24:04.530909 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 14 01:24:04.536845 kernel: audit: type=1130 audit(1768353844.530:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:24:04.530000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:24:04.534351 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 14 01:24:04.561429 ignition[1113]: Ignition 2.24.0 Jan 14 01:24:04.561443 ignition[1113]: Stage: disks Jan 14 01:24:04.561715 ignition[1113]: no configs at "/usr/lib/ignition/base.d" Jan 14 01:24:04.561728 ignition[1113]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 14 01:24:04.561840 ignition[1113]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 14 01:24:04.565318 ignition[1113]: PUT result: OK Jan 14 01:24:04.570043 ignition[1113]: disks: disks passed Jan 14 01:24:04.570137 ignition[1113]: Ignition finished successfully Jan 14 01:24:04.572255 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 14 01:24:04.577111 kernel: audit: type=1130 audit(1768353844.571:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:24:04.571000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:24:04.573302 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 14 01:24:04.577583 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 14 01:24:04.578243 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 14 01:24:04.579048 systemd[1]: Reached target sysinit.target - System Initialization. Jan 14 01:24:04.579657 systemd[1]: Reached target basic.target - Basic System. Jan 14 01:24:04.581433 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 14 01:24:04.660214 systemd-fsck[1121]: ROOT: clean, 15/1631200 files, 112378/1617920 blocks Jan 14 01:24:04.663849 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 14 01:24:04.671947 kernel: audit: type=1130 audit(1768353844.663:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:24:04.663000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:24:04.666640 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 14 01:24:04.900205 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 9c98b0a3-27fc-41c4-a169-349b38bd9ceb r/w with ordered data mode. Quota mode: none. Jan 14 01:24:04.900550 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 14 01:24:04.901507 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 14 01:24:04.904791 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 14 01:24:04.907820 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 14 01:24:04.910047 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 14 01:24:04.910100 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 14 01:24:04.910128 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 14 01:24:04.924801 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 14 01:24:04.926864 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 14 01:24:04.933190 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1140) Jan 14 01:24:04.936510 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 37f804f9-71c0-44d1-975c-4a397de322e7 Jan 14 01:24:04.936580 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jan 14 01:24:04.944455 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jan 14 01:24:04.944562 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Jan 14 01:24:04.947357 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 14 01:24:06.578443 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 14 01:24:06.585886 kernel: audit: type=1130 audit(1768353846.577:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:24:06.577000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:24:06.580425 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 14 01:24:06.598362 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 14 01:24:06.609757 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 14 01:24:06.614215 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 37f804f9-71c0-44d1-975c-4a397de322e7 Jan 14 01:24:06.640233 ignition[1237]: INFO : Ignition 2.24.0 Jan 14 01:24:06.640233 ignition[1237]: INFO : Stage: mount Jan 14 01:24:06.641854 ignition[1237]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 14 01:24:06.641854 ignition[1237]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 14 01:24:06.641854 ignition[1237]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 14 01:24:06.645765 ignition[1237]: INFO : PUT result: OK Jan 14 01:24:06.649093 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 14 01:24:06.654251 kernel: audit: type=1130 audit(1768353846.648:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:24:06.648000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:24:06.654331 ignition[1237]: INFO : mount: mount passed Jan 14 01:24:06.654331 ignition[1237]: INFO : Ignition finished successfully Jan 14 01:24:06.659538 kernel: audit: type=1130 audit(1768353846.653:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:24:06.653000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:24:06.653866 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 14 01:24:06.656035 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 14 01:24:06.674121 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 14 01:24:06.702194 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1249) Jan 14 01:24:06.705319 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 37f804f9-71c0-44d1-975c-4a397de322e7 Jan 14 01:24:06.705393 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jan 14 01:24:06.713336 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jan 14 01:24:06.713405 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Jan 14 01:24:06.716218 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 14 01:24:06.745426 ignition[1266]: INFO : Ignition 2.24.0 Jan 14 01:24:06.745426 ignition[1266]: INFO : Stage: files Jan 14 01:24:06.746583 ignition[1266]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 14 01:24:06.746583 ignition[1266]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 14 01:24:06.746583 ignition[1266]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 14 01:24:06.747735 ignition[1266]: INFO : PUT result: OK Jan 14 01:24:06.750488 ignition[1266]: DEBUG : files: compiled without relabeling support, skipping Jan 14 01:24:06.753284 ignition[1266]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 14 01:24:06.753284 ignition[1266]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 14 01:24:06.759127 ignition[1266]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 14 01:24:06.759954 ignition[1266]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 14 01:24:06.759954 ignition[1266]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 14 01:24:06.759554 unknown[1266]: wrote ssh authorized keys file for user: core Jan 14 01:24:06.761752 ignition[1266]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jan 14 01:24:06.762576 ignition[1266]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Jan 14 01:24:06.816200 ignition[1266]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 14 01:24:07.337417 ignition[1266]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jan 14 
01:24:07.337417 ignition[1266]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 14 01:24:07.339208 ignition[1266]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 14 01:24:07.339208 ignition[1266]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 14 01:24:07.339208 ignition[1266]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 14 01:24:07.339208 ignition[1266]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 14 01:24:07.339208 ignition[1266]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 14 01:24:07.339208 ignition[1266]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 14 01:24:07.339208 ignition[1266]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 14 01:24:07.344941 ignition[1266]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 14 01:24:07.344941 ignition[1266]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 14 01:24:07.344941 ignition[1266]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jan 14 01:24:07.347670 ignition[1266]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jan 14 01:24:07.347670 ignition[1266]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jan 14 01:24:07.347670 ignition[1266]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 Jan 14 01:24:07.769136 ignition[1266]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 14 01:24:08.204578 ignition[1266]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jan 14 01:24:08.204578 ignition[1266]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 14 01:24:08.268832 ignition[1266]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 14 01:24:08.273873 ignition[1266]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 14 01:24:08.273873 ignition[1266]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 14 01:24:08.273873 ignition[1266]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jan 14 01:24:08.281484 kernel: audit: type=1130 audit(1768353848.275:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Jan 14 01:24:08.275000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:24:08.281585 ignition[1266]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jan 14 01:24:08.281585 ignition[1266]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 14 01:24:08.281585 ignition[1266]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 14 01:24:08.281585 ignition[1266]: INFO : files: files passed Jan 14 01:24:08.281585 ignition[1266]: INFO : Ignition finished successfully Jan 14 01:24:08.275937 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 14 01:24:08.278353 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 14 01:24:08.284315 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 14 01:24:08.291999 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 14 01:24:08.292131 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 14 01:24:08.293000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:24:08.293000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:24:08.300196 kernel: audit: type=1130 audit(1768353848.293:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:24:08.338679 initrd-setup-root-after-ignition[1298]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 14 01:24:08.338679 initrd-setup-root-after-ignition[1298]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 14 01:24:08.341976 initrd-setup-root-after-ignition[1302]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 14 01:24:08.344330 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 14 01:24:08.343000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:24:08.345046 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 14 01:24:08.347377 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 14 01:24:08.401044 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 14 01:24:08.400000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:24:08.400000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:24:08.401169 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 14 01:24:08.401936 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 14 01:24:08.403484 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 14 01:24:08.405271 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 14 01:24:08.406589 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 14 01:24:08.436398 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 14 01:24:08.435000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:24:08.438632 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 14 01:24:08.460876 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Jan 14 01:24:08.461262 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 14 01:24:08.462541 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 14 01:24:08.463770 systemd[1]: Stopped target timers.target - Timer Units. Jan 14 01:24:08.464682 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 14 01:24:08.464000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:24:08.464941 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 14 01:24:08.466049 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 14 01:24:08.467151 systemd[1]: Stopped target basic.target - Basic System. Jan 14 01:24:08.467882 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 14 01:24:08.468718 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 14 01:24:08.469505 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 14 01:24:08.470301 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jan 14 01:24:08.471181 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 14 01:24:08.472036 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 14 01:24:08.472879 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 14 01:24:08.474000 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 14 01:24:08.474963 systemd[1]: Stopped target swap.target - Swaps. Jan 14 01:24:08.475651 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 14 01:24:08.475000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:24:08.475848 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 14 01:24:08.476994 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 14 01:24:08.477877 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 14 01:24:08.478571 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. 
Jan 14 01:24:08.478986 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 14 01:24:08.479000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:24:08.479538 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 14 01:24:08.479777 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 14 01:24:08.480000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:24:08.481091 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 14 01:24:08.481000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:24:08.481345 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 14 01:24:08.482131 systemd[1]: ignition-files.service: Deactivated successfully. Jan 14 01:24:08.482355 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 14 01:24:08.485254 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 14 01:24:08.486120 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 14 01:24:08.486000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:24:08.486369 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 14 01:24:08.490455 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 14 01:24:08.492350 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 14 01:24:08.492573 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 14 01:24:08.494000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:24:08.495438 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 14 01:24:08.495635 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 14 01:24:08.496000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:24:08.498115 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 14 01:24:08.498966 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 14 01:24:08.499000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:24:08.507178 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 14 01:24:08.508141 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. 
Jan 14 01:24:08.508000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:24:08.508000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:24:08.522992 ignition[1322]: INFO : Ignition 2.24.0 Jan 14 01:24:08.524678 ignition[1322]: INFO : Stage: umount Jan 14 01:24:08.524678 ignition[1322]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 14 01:24:08.524678 ignition[1322]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 14 01:24:08.524678 ignition[1322]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 14 01:24:08.529395 ignition[1322]: INFO : PUT result: OK Jan 14 01:24:08.533648 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 14 01:24:08.535626 ignition[1322]: INFO : umount: umount passed Jan 14 01:24:08.536299 ignition[1322]: INFO : Ignition finished successfully Jan 14 01:24:08.537999 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 14 01:24:08.538215 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 14 01:24:08.537000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:24:08.539366 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 14 01:24:08.538000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:24:08.539445 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 14 01:24:08.539000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:24:08.540025 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 14 01:24:08.540000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:24:08.540092 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 14 01:24:08.540804 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 14 01:24:08.541000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:24:08.540886 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 14 01:24:08.541591 systemd[1]: Stopped target network.target - Network. Jan 14 01:24:08.542275 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 14 01:24:08.542341 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 14 01:24:08.543153 systemd[1]: Stopped target paths.target - Path Units. Jan 14 01:24:08.543799 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 14 01:24:08.548275 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 14 01:24:08.548706 systemd[1]: Stopped target slices.target - Slice Units. 
Jan 14 01:24:08.549822 systemd[1]: Stopped target sockets.target - Socket Units. Jan 14 01:24:08.550553 systemd[1]: iscsid.socket: Deactivated successfully. Jan 14 01:24:08.550603 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 14 01:24:08.551388 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 14 01:24:08.551426 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 14 01:24:08.552000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:24:08.552041 systemd[1]: systemd-journald-audit.socket: Deactivated successfully. Jan 14 01:24:08.553000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:24:08.552072 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket. Jan 14 01:24:08.552726 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 14 01:24:08.552792 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 14 01:24:08.553644 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 14 01:24:08.553699 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 14 01:24:08.554609 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 14 01:24:08.555480 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 14 01:24:08.561542 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 14 01:24:08.561746 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 14 01:24:08.562000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:24:08.564000 audit: BPF prog-id=6 op=UNLOAD Jan 14 01:24:08.565486 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 14 01:24:08.565629 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 14 01:24:08.565000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:24:08.567000 audit: BPF prog-id=9 op=UNLOAD Jan 14 01:24:08.569303 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jan 14 01:24:08.569970 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 14 01:24:08.570027 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 14 01:24:08.572124 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 14 01:24:08.572000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:24:08.572715 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 14 01:24:08.573000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:24:08.572796 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. 
Jan 14 01:24:08.575000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:24:08.573656 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 14 01:24:08.573718 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 14 01:24:08.574617 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 14 01:24:08.574678 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 14 01:24:08.577146 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 14 01:24:08.595226 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 14 01:24:08.596190 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 14 01:24:08.595000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:24:08.598458 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 14 01:24:08.598565 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 14 01:24:08.599859 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 14 01:24:08.599921 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 14 01:24:08.603000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:24:08.601228 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 14 01:24:08.601305 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 14 01:24:08.605000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:24:08.605299 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 14 01:24:08.605384 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 14 01:24:08.606000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:24:08.606579 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 14 01:24:08.606658 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 14 01:24:08.611299 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 14 01:24:08.611805 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jan 14 01:24:08.611881 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jan 14 01:24:08.614000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:24:08.615343 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 14 01:24:08.615427 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Jan 14 01:24:08.617000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:24:08.618589 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 14 01:24:08.618672 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 14 01:24:08.619000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:24:08.621349 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 14 01:24:08.627368 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 14 01:24:08.626000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:24:08.637426 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 14 01:24:08.637584 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 14 01:24:08.637000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:24:08.637000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:24:08.678673 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 14 01:24:08.679072 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 14 01:24:08.678000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:24:08.680390 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 14 01:24:08.680986 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 14 01:24:08.680000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:24:08.681069 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 14 01:24:08.683152 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 14 01:24:08.720022 systemd[1]: Switching root. Jan 14 01:24:08.776522 systemd-journald[287]: Journal stopped Jan 14 01:24:11.906270 systemd-journald[287]: Received SIGTERM from PID 1 (systemd). 
Jan 14 01:24:11.906377 kernel: SELinux: policy capability network_peer_controls=1 Jan 14 01:24:11.906400 kernel: SELinux: policy capability open_perms=1 Jan 14 01:24:11.906424 kernel: SELinux: policy capability extended_socket_class=1 Jan 14 01:24:11.906448 kernel: SELinux: policy capability always_check_network=0 Jan 14 01:24:11.906468 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 14 01:24:11.906486 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 14 01:24:11.906512 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 14 01:24:11.906530 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 14 01:24:11.906549 kernel: SELinux: policy capability userspace_initial_context=0 Jan 14 01:24:11.906567 kernel: kauditd_printk_skb: 43 callbacks suppressed Jan 14 01:24:11.906605 kernel: audit: type=1403 audit(1768353849.637:83): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 14 01:24:11.906627 systemd[1]: Successfully loaded SELinux policy in 127.870ms. Jan 14 01:24:11.906655 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 7.723ms. Jan 14 01:24:11.906680 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 14 01:24:11.906714 systemd[1]: Detected virtualization amazon. Jan 14 01:24:11.906734 systemd[1]: Detected architecture x86-64. Jan 14 01:24:11.906753 systemd[1]: Detected first boot. Jan 14 01:24:11.906773 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Jan 14 01:24:11.906793 kernel: audit: type=1334 audit(1768353850.198:84): prog-id=10 op=LOAD Jan 14 01:24:11.906813 kernel: audit: type=1334 audit(1768353850.199:85): prog-id=10 op=UNLOAD Jan 14 01:24:11.906830 kernel: audit: type=1334 audit(1768353850.199:86): prog-id=11 op=LOAD Jan 14 01:24:11.906851 kernel: audit: type=1334 audit(1768353850.199:87): prog-id=11 op=UNLOAD Jan 14 01:24:11.906869 kernel: Guest personality initialized and is inactive Jan 14 01:24:11.906889 kernel: VMCI host device registered (name=vmci, major=10, minor=258) Jan 14 01:24:11.906909 kernel: Initialized host personality Jan 14 01:24:11.906929 zram_generator::config[1366]: No configuration found. Jan 14 01:24:11.906953 kernel: NET: Registered PF_VSOCK protocol family Jan 14 01:24:11.906975 systemd[1]: Populated /etc with preset unit settings. Jan 14 01:24:11.906995 kernel: audit: type=1334 audit(1768353851.559:88): prog-id=12 op=LOAD Jan 14 01:24:11.907015 kernel: audit: type=1334 audit(1768353851.559:89): prog-id=3 op=UNLOAD Jan 14 01:24:11.907033 kernel: audit: type=1334 audit(1768353851.559:90): prog-id=13 op=LOAD Jan 14 01:24:11.907052 kernel: audit: type=1334 audit(1768353851.560:91): prog-id=14 op=LOAD Jan 14 01:24:11.907070 kernel: audit: type=1334 audit(1768353851.560:92): prog-id=4 op=UNLOAD Jan 14 01:24:11.907089 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 14 01:24:11.907112 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 14 01:24:11.907133 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 14 01:24:11.907171 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 14 01:24:11.907192 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. 
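[Editor's note] The "SELinux: policy capability ..." lines logged just above are exposed by selinuxfs once the policy has loaded. A small sketch of reading them back at runtime, assuming /sys/fs/selinux is mounted (the usual mount point after policy load); this is an inspection aid, not part of the boot path.

    # Sketch: read back the enforcing state and policy capabilities logged above.
    # Assumes selinuxfs is mounted at /sys/fs/selinux.
    from pathlib import Path

    SELINUXFS = Path("/sys/fs/selinux")

    def policy_capabilities() -> dict:
        caps = {}
        for entry in sorted((SELINUXFS / "policy_capabilities").iterdir()):
            caps[entry.name] = entry.read_text().strip() == "1"
        return caps

    if __name__ == "__main__":
        enforcing = (SELINUXFS / "enforce").read_text().strip() == "1"
        print("enforcing:", enforcing)
        for name, enabled in policy_capabilities().items():
            print(f"policy capability {name}={int(enabled)}")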
Jan 14 01:24:11.907220 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 14 01:24:11.907240 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 14 01:24:11.907262 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 14 01:24:11.907282 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 14 01:24:11.907302 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 14 01:24:11.907322 systemd[1]: Created slice user.slice - User and Session Slice. Jan 14 01:24:11.907343 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 14 01:24:11.907364 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 14 01:24:11.907382 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 14 01:24:11.907403 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 14 01:24:11.907422 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 14 01:24:11.907442 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 14 01:24:11.907463 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 14 01:24:11.907484 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 14 01:24:11.907504 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 14 01:24:11.907523 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 14 01:24:11.907545 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 14 01:24:11.907564 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 14 01:24:11.907583 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 14 01:24:11.907603 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 14 01:24:11.907622 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 14 01:24:11.907645 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes. Jan 14 01:24:11.907668 systemd[1]: Reached target slices.target - Slice Units. Jan 14 01:24:11.907692 systemd[1]: Reached target swap.target - Swaps. Jan 14 01:24:11.907715 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 14 01:24:11.907736 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 14 01:24:11.907767 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jan 14 01:24:11.907789 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jan 14 01:24:11.907811 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket. Jan 14 01:24:11.907832 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 14 01:24:11.907853 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket. Jan 14 01:24:11.907877 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket. Jan 14 01:24:11.907900 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 14 01:24:11.907922 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
Jan 14 01:24:11.907942 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 14 01:24:11.907964 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 14 01:24:11.907984 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 14 01:24:11.908006 systemd[1]: Mounting media.mount - External Media Directory... Jan 14 01:24:11.908031 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 14 01:24:11.908055 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 14 01:24:11.908078 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 14 01:24:11.908102 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 14 01:24:11.908124 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 14 01:24:11.908149 systemd[1]: Reached target machines.target - Containers. Jan 14 01:24:11.908220 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 14 01:24:11.908244 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 14 01:24:11.908268 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 14 01:24:11.908291 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 14 01:24:11.908315 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 14 01:24:11.908338 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 14 01:24:11.908361 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 14 01:24:11.908388 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 14 01:24:11.908412 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 14 01:24:11.908448 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 14 01:24:11.908477 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 14 01:24:11.908503 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 14 01:24:11.908527 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 14 01:24:11.908551 systemd[1]: Stopped systemd-fsck-usr.service. Jan 14 01:24:11.908579 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 14 01:24:11.908605 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 14 01:24:11.908628 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 14 01:24:11.908652 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 14 01:24:11.908675 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 14 01:24:11.908700 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jan 14 01:24:11.908724 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... 
Jan 14 01:24:11.908752 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 14 01:24:11.908775 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 14 01:24:11.908798 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 14 01:24:11.908821 kernel: fuse: init (API version 7.41) Jan 14 01:24:11.908845 systemd[1]: Mounted media.mount - External Media Directory. Jan 14 01:24:11.908872 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 14 01:24:11.908895 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 14 01:24:11.908922 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 14 01:24:11.908945 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 14 01:24:11.908968 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 14 01:24:11.908993 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 14 01:24:11.909023 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 14 01:24:11.909048 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 14 01:24:11.909111 systemd-journald[1442]: Collecting audit messages is enabled. Jan 14 01:24:11.909194 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 14 01:24:11.909220 systemd-journald[1442]: Journal started Jan 14 01:24:11.909254 systemd-journald[1442]: Runtime Journal (/run/log/journal/ec227a1800477e619dbc0b4a8584ccd0) is 4.7M, max 38M, 33.2M free. Jan 14 01:24:11.633000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Jan 14 01:24:11.781000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:24:11.785000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:24:11.793000 audit: BPF prog-id=14 op=UNLOAD Jan 14 01:24:11.793000 audit: BPF prog-id=13 op=UNLOAD Jan 14 01:24:11.794000 audit: BPF prog-id=15 op=LOAD Jan 14 01:24:11.794000 audit: BPF prog-id=16 op=LOAD Jan 14 01:24:11.794000 audit: BPF prog-id=17 op=LOAD Jan 14 01:24:11.892000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:24:11.897000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:24:11.897000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:24:11.903000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jan 14 01:24:11.903000 audit[1442]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7ffeba821080 a2=4000 a3=0 items=0 ppid=1 pid=1442 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:11.903000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Jan 14 01:24:11.904000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:24:11.904000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:24:11.548399 systemd[1]: Queued start job for default target multi-user.target. Jan 14 01:24:11.561743 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Jan 14 01:24:11.562861 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 14 01:24:11.912213 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 14 01:24:11.912000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:24:11.916460 systemd[1]: Started systemd-journald.service - Journal Service. Jan 14 01:24:11.912000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:24:11.915000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:24:11.922574 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 14 01:24:11.924280 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 14 01:24:11.923000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:24:11.923000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:24:11.925420 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 14 01:24:11.925654 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 14 01:24:11.927000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:24:11.927000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Jan 14 01:24:11.945000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:24:11.944904 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 14 01:24:11.954564 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Jan 14 01:24:11.962317 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 14 01:24:11.967316 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 14 01:24:11.968144 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 14 01:24:11.968252 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 14 01:24:11.971491 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jan 14 01:24:11.973459 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 14 01:24:11.973643 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 14 01:24:11.989464 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 14 01:24:11.996406 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 14 01:24:11.997830 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 14 01:24:12.003332 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 14 01:24:12.004098 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 14 01:24:12.006625 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 14 01:24:12.011000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:24:12.014000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:24:12.012094 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 14 01:24:12.014478 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 14 01:24:12.017327 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jan 14 01:24:12.017000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:24:12.020869 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 14 01:24:12.022070 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 14 01:24:12.031874 systemd[1]: Reached target network-pre.target - Preparation for Network. 
Jan 14 01:24:12.036869 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 14 01:24:12.070177 kernel: ACPI: bus type drm_connector registered Jan 14 01:24:12.070619 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 14 01:24:12.070000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:24:12.072039 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 14 01:24:12.077552 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jan 14 01:24:12.079776 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 14 01:24:12.080059 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 14 01:24:12.081000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:24:12.081000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:24:12.117002 systemd-journald[1442]: Time spent on flushing to /var/log/journal/ec227a1800477e619dbc0b4a8584ccd0 is 23.894ms for 1150 entries. Jan 14 01:24:12.117002 systemd-journald[1442]: System Journal (/var/log/journal/ec227a1800477e619dbc0b4a8584ccd0) is 8M, max 588.1M, 580.1M free. Jan 14 01:24:12.171551 kernel: loop1: detected capacity change from 0 to 73176 Jan 14 01:24:12.171605 systemd-journald[1442]: Received client request to flush runtime journal. Jan 14 01:24:12.128000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:24:12.141000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:24:12.127154 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jan 14 01:24:12.142348 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 14 01:24:12.173391 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 14 01:24:12.172000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:24:12.174407 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 14 01:24:12.173000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:24:12.175253 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 14 01:24:12.174000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Jan 14 01:24:12.178638 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 14 01:24:12.232713 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 14 01:24:12.232000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:24:12.234000 audit: BPF prog-id=18 op=LOAD Jan 14 01:24:12.234000 audit: BPF prog-id=19 op=LOAD Jan 14 01:24:12.234000 audit: BPF prog-id=20 op=LOAD Jan 14 01:24:12.237430 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer... Jan 14 01:24:12.238000 audit: BPF prog-id=21 op=LOAD Jan 14 01:24:12.242450 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 14 01:24:12.246457 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 14 01:24:12.265000 audit: BPF prog-id=22 op=LOAD Jan 14 01:24:12.265000 audit: BPF prog-id=23 op=LOAD Jan 14 01:24:12.265000 audit: BPF prog-id=24 op=LOAD Jan 14 01:24:12.269502 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 14 01:24:12.272000 audit: BPF prog-id=25 op=LOAD Jan 14 01:24:12.272000 audit: BPF prog-id=26 op=LOAD Jan 14 01:24:12.272000 audit: BPF prog-id=27 op=LOAD Jan 14 01:24:12.280466 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager... Jan 14 01:24:12.307088 systemd-tmpfiles[1519]: ACLs are not supported, ignoring. Jan 14 01:24:12.307117 systemd-tmpfiles[1519]: ACLs are not supported, ignoring. Jan 14 01:24:12.319277 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 14 01:24:12.318000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:24:12.382643 systemd-nsresourced[1522]: Not setting up BPF subsystem, as functionality has been disabled at compile time. Jan 14 01:24:12.386374 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager. Jan 14 01:24:12.387000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:24:12.391879 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 14 01:24:12.391000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:24:12.489194 kernel: loop2: detected capacity change from 0 to 50784 Jan 14 01:24:12.519466 systemd-oomd[1517]: No swap; memory pressure usage will be degraded Jan 14 01:24:12.520217 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer. Jan 14 01:24:12.520000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:24:12.567789 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. 
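Annotation: the systemd-oomd line above ("No swap; memory pressure usage will be degraded") refers to the two inputs the userspace OOM killer works from: swap usage and kernel pressure stall information (PSI). A stdlib-only sketch that reads both signals directly; it assumes a PSI-enabled kernel exposing /proc/pressure/memory.

#!/usr/bin/env python3
"""Sketch: check the two signals systemd-oomd complained about above."""
from pathlib import Path

# /proc/pressure/memory has two lines, e.g.
#   some avg10=0.00 avg60=0.00 avg300=0.00 total=0
#   full avg10=0.00 avg60=0.00 avg300=0.00 total=0
for line in Path("/proc/pressure/memory").read_text().splitlines():
    kind, *fields = line.split()
    stats = dict(f.split("=") for f in fields)
    print(f"memory pressure ({kind}): avg10={stats['avg10']}% "
          f"stalled_total={int(stats['total'])} us")

# /proc/swaps lists active swap devices; only the header line means "no swap",
# which is exactly the condition systemd-oomd logged above.
swaps = Path("/proc/swaps").read_text().splitlines()[1:]
print("swap devices:", [s.split()[0] for s in swaps] or "none")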
Jan 14 01:24:12.582803 systemd-resolved[1518]: Positive Trust Anchors: Jan 14 01:24:12.582857 systemd-resolved[1518]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 14 01:24:12.582862 systemd-resolved[1518]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Jan 14 01:24:12.582901 systemd-resolved[1518]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 14 01:24:12.589137 systemd-resolved[1518]: Defaulting to hostname 'linux'. Jan 14 01:24:12.590928 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 14 01:24:12.590000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:24:12.591836 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 14 01:24:12.843187 kernel: loop3: detected capacity change from 0 to 111560 Jan 14 01:24:13.027481 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 14 01:24:13.026000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:24:13.027000 audit: BPF prog-id=8 op=UNLOAD Jan 14 01:24:13.027000 audit: BPF prog-id=7 op=UNLOAD Jan 14 01:24:13.027000 audit: BPF prog-id=28 op=LOAD Jan 14 01:24:13.027000 audit: BPF prog-id=29 op=LOAD Jan 14 01:24:13.029890 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 14 01:24:13.068414 systemd-udevd[1542]: Using default interface naming scheme 'v257'. Jan 14 01:24:13.136185 kernel: loop4: detected capacity change from 0 to 229808 Jan 14 01:24:13.253316 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 14 01:24:13.252000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:24:13.254000 audit: BPF prog-id=30 op=LOAD Jan 14 01:24:13.257336 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 14 01:24:13.296680 (udev-worker)[1547]: Network interface NamePolicy= disabled on kernel command line. Jan 14 01:24:13.334532 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. 
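Annotation: the resolved lines above list the positive trust anchors (the two root-zone DS records used for DNSSEC validation) and the negative trust anchors, i.e. private and reverse-mapping zones that are expected to be unsigned. A sketch of the suffix check behind the negative list; the anchor set is transcribed from the log rather than read from resolved's configuration.

#!/usr/bin/env python3
"""Sketch: is a name covered by one of the negative trust anchors above,
so that DNSSEC validation would not be enforced for it?"""

NEGATIVE_ANCHORS = {
    "home.arpa", "10.in-addr.arpa", "168.192.in-addr.arpa",
    "170.0.0.192.in-addr.arpa", "171.0.0.192.in-addr.arpa",
    "d.f.ip6.arpa", "ipv4only.arpa", "resolver.arpa",
    "corp", "home", "internal", "intranet", "lan", "local", "private", "test",
} | {f"{i}.172.in-addr.arpa" for i in range(16, 32)}  # 172.16.0.0/12 reverse zones

def under_negative_anchor(name: str) -> bool:
    labels = name.rstrip(".").lower().split(".")
    # Covered if the name itself or any parent domain is an anchor,
    # e.g. "printer.lan" -> check "printer.lan", then "lan".
    return any(".".join(labels[i:]) in NEGATIVE_ANCHORS for i in range(len(labels)))

for name in ("printer.lan", "46.18.31.172.in-addr.arpa", "example.org"):
    print(name, "->", "unsigned expected" if under_negative_anchor(name) else "validate")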
Jan 14 01:24:13.408203 kernel: mousedev: PS/2 mouse device common for all mice Jan 14 01:24:13.412224 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jan 14 01:24:13.414642 systemd-networkd[1548]: lo: Link UP Jan 14 01:24:13.414653 systemd-networkd[1548]: lo: Gained carrier Jan 14 01:24:13.416000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:24:13.417322 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 14 01:24:13.417454 systemd-networkd[1548]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 14 01:24:13.417459 systemd-networkd[1548]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 14 01:24:13.418404 systemd[1]: Reached target network.target - Network. Jan 14 01:24:13.421906 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jan 14 01:24:13.425413 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 14 01:24:13.432509 kernel: ACPI: button: Power Button [PWRF] Jan 14 01:24:13.432605 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input4 Jan 14 01:24:13.431237 systemd-networkd[1548]: eth0: Link UP Jan 14 01:24:13.431497 systemd-networkd[1548]: eth0: Gained carrier Jan 14 01:24:13.431531 systemd-networkd[1548]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 14 01:24:13.437190 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr Jan 14 01:24:13.437619 kernel: ACPI: button: Sleep Button [SLPF] Jan 14 01:24:13.449643 systemd-networkd[1548]: eth0: DHCPv4 address 172.31.18.46/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jan 14 01:24:13.508256 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jan 14 01:24:13.511187 kernel: loop5: detected capacity change from 0 to 73176 Jan 14 01:24:13.508000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-persistent-storage comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:24:13.534191 kernel: loop6: detected capacity change from 0 to 50784 Jan 14 01:24:13.560202 kernel: loop7: detected capacity change from 0 to 111560 Jan 14 01:24:13.580188 kernel: loop1: detected capacity change from 0 to 229808 Jan 14 01:24:13.604521 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 14 01:24:13.609679 (sd-merge)[1590]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw', 'oem-ami.raw'. Jan 14 01:24:13.625661 (sd-merge)[1590]: Merged extensions into '/usr'. Jan 14 01:24:13.674430 systemd[1]: Reload requested from client PID 1482 ('systemd-sysext') (unit systemd-sysext.service)... Jan 14 01:24:13.674450 systemd[1]: Reloading... Jan 14 01:24:13.864552 zram_generator::config[1699]: No configuration found. Jan 14 01:24:14.175103 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jan 14 01:24:14.176491 systemd[1]: Reloading finished in 500 ms. 
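Annotation: the DHCPv4 lease networkd logged above (172.31.18.46/20, gateway 172.31.16.1) can be sanity-checked with nothing but the standard library; the values below are copied straight from that line.

#!/usr/bin/env python3
"""Sketch: the subnet arithmetic behind the DHCPv4 lease logged above."""
import ipaddress

iface = ipaddress.ip_interface("172.31.18.46/20")
gateway = ipaddress.ip_address("172.31.16.1")

net = iface.network
print("network:  ", net)                        # 172.31.16.0/20
print("netmask:  ", net.netmask)                # 255.255.240.0
print("broadcast:", net.broadcast_address)      # 172.31.31.255
print("usable hosts:", net.num_addresses - 2)   # 4094
print("gateway in subnet:", gateway in net)     # True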
Jan 14 01:24:14.198248 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 14 01:24:14.198000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:24:14.201133 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 14 01:24:14.200000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:24:14.250655 systemd[1]: Starting ensure-sysext.service... Jan 14 01:24:14.252497 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 14 01:24:14.255777 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 14 01:24:14.256000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:24:14.257310 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 14 01:24:14.257424 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 14 01:24:14.257876 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 14 01:24:14.266375 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 14 01:24:14.266000 audit: BPF prog-id=31 op=LOAD Jan 14 01:24:14.268000 audit: BPF prog-id=15 op=UNLOAD Jan 14 01:24:14.268000 audit: BPF prog-id=32 op=LOAD Jan 14 01:24:14.268000 audit: BPF prog-id=33 op=LOAD Jan 14 01:24:14.268000 audit: BPF prog-id=16 op=UNLOAD Jan 14 01:24:14.268000 audit: BPF prog-id=17 op=UNLOAD Jan 14 01:24:14.268000 audit: BPF prog-id=34 op=LOAD Jan 14 01:24:14.268000 audit: BPF prog-id=35 op=LOAD Jan 14 01:24:14.268000 audit: BPF prog-id=28 op=UNLOAD Jan 14 01:24:14.268000 audit: BPF prog-id=29 op=UNLOAD Jan 14 01:24:14.269000 audit: BPF prog-id=36 op=LOAD Jan 14 01:24:14.269000 audit: BPF prog-id=25 op=UNLOAD Jan 14 01:24:14.269000 audit: BPF prog-id=37 op=LOAD Jan 14 01:24:14.269000 audit: BPF prog-id=38 op=LOAD Jan 14 01:24:14.269000 audit: BPF prog-id=26 op=UNLOAD Jan 14 01:24:14.269000 audit: BPF prog-id=27 op=UNLOAD Jan 14 01:24:14.270000 audit: BPF prog-id=39 op=LOAD Jan 14 01:24:14.270000 audit: BPF prog-id=18 op=UNLOAD Jan 14 01:24:14.270000 audit: BPF prog-id=40 op=LOAD Jan 14 01:24:14.270000 audit: BPF prog-id=41 op=LOAD Jan 14 01:24:14.270000 audit: BPF prog-id=19 op=UNLOAD Jan 14 01:24:14.270000 audit: BPF prog-id=20 op=UNLOAD Jan 14 01:24:14.271000 audit: BPF prog-id=42 op=LOAD Jan 14 01:24:14.272000 audit: BPF prog-id=30 op=UNLOAD Jan 14 01:24:14.272000 audit: BPF prog-id=43 op=LOAD Jan 14 01:24:14.273000 audit: BPF prog-id=22 op=UNLOAD Jan 14 01:24:14.274000 audit: BPF prog-id=44 op=LOAD Jan 14 01:24:14.274000 audit: BPF prog-id=45 op=LOAD Jan 14 01:24:14.274000 audit: BPF prog-id=23 op=UNLOAD Jan 14 01:24:14.274000 audit: BPF prog-id=24 op=UNLOAD Jan 14 01:24:14.274000 audit: BPF prog-id=46 op=LOAD Jan 14 01:24:14.274000 audit: BPF prog-id=21 op=UNLOAD Jan 14 01:24:14.285233 systemd-tmpfiles[1765]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. 
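Annotation: the "Duplicate line for path ..." warning here and the similar ones just below come from systemd-tmpfiles noticing that two tmpfiles.d lines declare the same path, with the later one ignored. A rough stdlib sketch of the same scan, assuming the standard search directories and deliberately skipping drop-in filename precedence to stay short.

#!/usr/bin/env python3
"""Rough sketch: flag tmpfiles.d entries that declare the same path twice,
similar to the warnings systemd-tmpfiles prints above and below."""
from collections import defaultdict
from pathlib import Path

SEARCH_DIRS = ["/etc/tmpfiles.d", "/run/tmpfiles.d", "/usr/lib/tmpfiles.d"]

seen = defaultdict(list)  # path -> [(config file, line number), ...]
for d in SEARCH_DIRS:
    base = Path(d)
    if not base.is_dir():
        continue
    for conf in sorted(base.glob("*.conf")):
        for lineno, raw in enumerate(conf.read_text().splitlines(), start=1):
            line = raw.strip()
            if not line or line.startswith("#"):
                continue
            fields = line.split()  # Type Path Mode User Group Age Argument
            if len(fields) >= 2:
                seen[fields[1]].append((conf, lineno))

for path, refs in seen.items():
    for conf, lineno in refs[1:]:
        print(f'{conf}:{lineno}: duplicate line for path "{path}", ignoring')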
Jan 14 01:24:14.285558 systemd-tmpfiles[1765]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jan 14 01:24:14.285840 systemd-tmpfiles[1765]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 14 01:24:14.287183 systemd[1]: Reload requested from client PID 1763 ('systemctl') (unit ensure-sysext.service)... Jan 14 01:24:14.287196 systemd[1]: Reloading... Jan 14 01:24:14.288428 systemd-tmpfiles[1765]: ACLs are not supported, ignoring. Jan 14 01:24:14.288586 systemd-tmpfiles[1765]: ACLs are not supported, ignoring. Jan 14 01:24:14.299148 systemd-tmpfiles[1765]: Detected autofs mount point /boot during canonicalization of boot. Jan 14 01:24:14.299309 systemd-tmpfiles[1765]: Skipping /boot Jan 14 01:24:14.310576 systemd-tmpfiles[1765]: Detected autofs mount point /boot during canonicalization of boot. Jan 14 01:24:14.310596 systemd-tmpfiles[1765]: Skipping /boot Jan 14 01:24:14.382190 zram_generator::config[1802]: No configuration found. Jan 14 01:24:14.644201 systemd[1]: Reloading finished in 356 ms. Jan 14 01:24:14.672004 kernel: kauditd_printk_skb: 100 callbacks suppressed Jan 14 01:24:14.672093 kernel: audit: type=1334 audit(1768353854.668:191): prog-id=47 op=LOAD Jan 14 01:24:14.668000 audit: BPF prog-id=47 op=LOAD Jan 14 01:24:14.671000 audit: BPF prog-id=46 op=UNLOAD Jan 14 01:24:14.673000 audit: BPF prog-id=48 op=LOAD Jan 14 01:24:14.673000 audit: BPF prog-id=31 op=UNLOAD Jan 14 01:24:14.673000 audit: BPF prog-id=49 op=LOAD Jan 14 01:24:14.673000 audit: BPF prog-id=50 op=LOAD Jan 14 01:24:14.673000 audit: BPF prog-id=32 op=UNLOAD Jan 14 01:24:14.673000 audit: BPF prog-id=33 op=UNLOAD Jan 14 01:24:14.675174 kernel: audit: type=1334 audit(1768353854.671:192): prog-id=46 op=UNLOAD Jan 14 01:24:14.675205 kernel: audit: type=1334 audit(1768353854.673:193): prog-id=48 op=LOAD Jan 14 01:24:14.675220 kernel: audit: type=1334 audit(1768353854.673:194): prog-id=31 op=UNLOAD Jan 14 01:24:14.675247 kernel: audit: type=1334 audit(1768353854.673:195): prog-id=49 op=LOAD Jan 14 01:24:14.675270 kernel: audit: type=1334 audit(1768353854.673:196): prog-id=50 op=LOAD Jan 14 01:24:14.675287 kernel: audit: type=1334 audit(1768353854.673:197): prog-id=32 op=UNLOAD Jan 14 01:24:14.675305 kernel: audit: type=1334 audit(1768353854.673:198): prog-id=33 op=UNLOAD Jan 14 01:24:14.674000 audit: BPF prog-id=51 op=LOAD Jan 14 01:24:14.674000 audit: BPF prog-id=39 op=UNLOAD Jan 14 01:24:14.674000 audit: BPF prog-id=52 op=LOAD Jan 14 01:24:14.674000 audit: BPF prog-id=53 op=LOAD Jan 14 01:24:14.674000 audit: BPF prog-id=40 op=UNLOAD Jan 14 01:24:14.674000 audit: BPF prog-id=41 op=UNLOAD Jan 14 01:24:14.676255 kernel: audit: type=1334 audit(1768353854.674:199): prog-id=51 op=LOAD Jan 14 01:24:14.676305 kernel: audit: type=1334 audit(1768353854.674:200): prog-id=39 op=UNLOAD Jan 14 01:24:14.676000 audit: BPF prog-id=54 op=LOAD Jan 14 01:24:14.676000 audit: BPF prog-id=55 op=LOAD Jan 14 01:24:14.676000 audit: BPF prog-id=34 op=UNLOAD Jan 14 01:24:14.676000 audit: BPF prog-id=35 op=UNLOAD Jan 14 01:24:14.677000 audit: BPF prog-id=56 op=LOAD Jan 14 01:24:14.677000 audit: BPF prog-id=36 op=UNLOAD Jan 14 01:24:14.677000 audit: BPF prog-id=57 op=LOAD Jan 14 01:24:14.677000 audit: BPF prog-id=58 op=LOAD Jan 14 01:24:14.677000 audit: BPF prog-id=37 op=UNLOAD Jan 14 01:24:14.677000 audit: BPF prog-id=38 op=UNLOAD Jan 14 01:24:14.679000 audit: BPF prog-id=59 op=LOAD Jan 14 01:24:14.680000 audit: BPF prog-id=42 op=UNLOAD Jan 14 01:24:14.681000 audit: 
BPF prog-id=60 op=LOAD Jan 14 01:24:14.681000 audit: BPF prog-id=43 op=UNLOAD Jan 14 01:24:14.681000 audit: BPF prog-id=61 op=LOAD Jan 14 01:24:14.681000 audit: BPF prog-id=62 op=LOAD Jan 14 01:24:14.681000 audit: BPF prog-id=44 op=UNLOAD Jan 14 01:24:14.681000 audit: BPF prog-id=45 op=UNLOAD Jan 14 01:24:14.686222 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 14 01:24:14.686000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:24:14.690902 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 14 01:24:14.690000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:24:14.694546 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 14 01:24:14.694000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:24:14.706436 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 14 01:24:14.711519 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 14 01:24:14.716500 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 14 01:24:14.719886 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 14 01:24:14.724525 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 14 01:24:14.731648 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 14 01:24:14.732570 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 14 01:24:14.735192 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 14 01:24:14.746073 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 14 01:24:14.751519 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 14 01:24:14.752256 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 14 01:24:14.752582 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 14 01:24:14.752726 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 14 01:24:14.752884 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 14 01:24:14.764000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:24:14.764000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:24:14.764318 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 14 01:24:14.764616 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 14 01:24:14.771856 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 14 01:24:14.773116 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 14 01:24:14.773455 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 14 01:24:14.773731 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 14 01:24:14.773890 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 14 01:24:14.774050 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 14 01:24:14.780000 audit[1864]: SYSTEM_BOOT pid=1864 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Jan 14 01:24:14.783390 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 14 01:24:14.787725 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 14 01:24:14.790000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:24:14.790000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:24:14.792236 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 14 01:24:14.792517 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 14 01:24:14.792000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:24:14.792000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:24:14.811398 systemd[1]: Finished ensure-sysext.service. Jan 14 01:24:14.811000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:24:14.812821 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. 
Jan 14 01:24:14.812000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:24:14.817539 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 14 01:24:14.818020 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 14 01:24:14.819820 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 14 01:24:14.823330 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 14 01:24:14.824044 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 14 01:24:14.824232 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 14 01:24:14.824292 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 14 01:24:14.824354 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 14 01:24:14.824416 systemd[1]: Reached target time-set.target - System Time Set. Jan 14 01:24:14.824989 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 14 01:24:14.835000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:24:14.835850 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 14 01:24:14.841405 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 14 01:24:14.841742 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 14 01:24:14.841000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:24:14.841000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:24:14.842757 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 14 01:24:14.843007 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 14 01:24:14.842000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:24:14.842000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:24:14.844388 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. 
Jan 14 01:24:14.906000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jan 14 01:24:14.906000 audit[1897]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffcc27d4d90 a2=420 a3=0 items=0 ppid=1860 pid=1897 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:14.906000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 14 01:24:14.907768 augenrules[1897]: No rules Jan 14 01:24:14.908819 systemd[1]: audit-rules.service: Deactivated successfully. Jan 14 01:24:14.909391 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 14 01:24:15.001862 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 14 01:24:15.002560 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 14 01:24:15.369312 systemd-networkd[1548]: eth0: Gained IPv6LL Jan 14 01:24:15.372072 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 14 01:24:15.372726 systemd[1]: Reached target network-online.target - Network is Online. Jan 14 01:24:17.405651 ldconfig[1862]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 14 01:24:17.418405 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 14 01:24:17.421192 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 14 01:24:17.444924 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 14 01:24:17.445840 systemd[1]: Reached target sysinit.target - System Initialization. Jan 14 01:24:17.446424 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 14 01:24:17.446972 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 14 01:24:17.447391 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Jan 14 01:24:17.447903 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 14 01:24:17.448396 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 14 01:24:17.448786 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update. Jan 14 01:24:17.449265 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update. Jan 14 01:24:17.449799 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 14 01:24:17.450185 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 14 01:24:17.450400 systemd[1]: Reached target paths.target - Path Units. Jan 14 01:24:17.450791 systemd[1]: Reached target timers.target - Timer Units. Jan 14 01:24:17.451750 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 14 01:24:17.453546 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 14 01:24:17.456667 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). 
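Annotation: the PROCTITLE field in the audit record above is hex-encoded because the process title contains NUL separators between argv elements. Decoding it recovers the auditctl invocation that loaded the rule set augenrules then reported as empty ("No rules").

#!/usr/bin/env python3
"""Sketch: decode the hex-encoded PROCTITLE field from the audit record above."""

proctitle_hex = (
    "2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573"
)
argv = bytes.fromhex(proctitle_hex).split(b"\x00")
print([a.decode() for a in argv])
# ['/sbin/auditctl', '-R', '/etc/audit/audit.rules']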
Jan 14 01:24:17.457539 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jan 14 01:24:17.458037 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jan 14 01:24:17.469068 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 14 01:24:17.470006 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jan 14 01:24:17.471436 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 14 01:24:17.472889 systemd[1]: Reached target sockets.target - Socket Units. Jan 14 01:24:17.473373 systemd[1]: Reached target basic.target - Basic System. Jan 14 01:24:17.473838 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 14 01:24:17.473874 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 14 01:24:17.475134 systemd[1]: Starting containerd.service - containerd container runtime... Jan 14 01:24:17.479358 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 14 01:24:17.482410 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 14 01:24:17.492979 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 14 01:24:17.496208 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 14 01:24:17.502041 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 14 01:24:17.503299 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 14 01:24:17.508371 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Jan 14 01:24:17.511412 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 01:24:17.515729 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 14 01:24:17.523011 jq[1914]: false Jan 14 01:24:17.525957 systemd[1]: Started ntpd.service - Network Time Service. Jan 14 01:24:17.530499 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 14 01:24:17.535517 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 14 01:24:17.539025 systemd[1]: Starting setup-oem.service - Setup OEM... Jan 14 01:24:17.546236 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 14 01:24:17.551478 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 14 01:24:17.560878 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 14 01:24:17.562328 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 14 01:24:17.563089 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 14 01:24:17.565428 systemd[1]: Starting update-engine.service - Update Engine... Jan 14 01:24:17.572525 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 14 01:24:17.582923 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 14 01:24:17.584672 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. 
Jan 14 01:24:17.585002 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 14 01:24:17.606548 extend-filesystems[1915]: Found /dev/nvme0n1p6 Jan 14 01:24:17.616051 google_oslogin_nss_cache[1916]: oslogin_cache_refresh[1916]: Refreshing passwd entry cache Jan 14 01:24:17.616068 oslogin_cache_refresh[1916]: Refreshing passwd entry cache Jan 14 01:24:17.628859 extend-filesystems[1915]: Found /dev/nvme0n1p9 Jan 14 01:24:17.643547 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 14 01:24:17.645723 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 14 01:24:17.648901 jq[1930]: true Jan 14 01:24:17.664438 extend-filesystems[1915]: Checking size of /dev/nvme0n1p9 Jan 14 01:24:17.668498 google_oslogin_nss_cache[1916]: oslogin_cache_refresh[1916]: Failure getting users, quitting Jan 14 01:24:17.668498 google_oslogin_nss_cache[1916]: oslogin_cache_refresh[1916]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 14 01:24:17.668498 google_oslogin_nss_cache[1916]: oslogin_cache_refresh[1916]: Refreshing group entry cache Jan 14 01:24:17.667422 oslogin_cache_refresh[1916]: Failure getting users, quitting Jan 14 01:24:17.667448 oslogin_cache_refresh[1916]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 14 01:24:17.667512 oslogin_cache_refresh[1916]: Refreshing group entry cache Jan 14 01:24:17.691397 google_oslogin_nss_cache[1916]: oslogin_cache_refresh[1916]: Failure getting groups, quitting Jan 14 01:24:17.693193 oslogin_cache_refresh[1916]: Failure getting groups, quitting Jan 14 01:24:17.694238 google_oslogin_nss_cache[1916]: oslogin_cache_refresh[1916]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 14 01:24:17.695147 oslogin_cache_refresh[1916]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 14 01:24:17.712059 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Jan 14 01:24:17.715593 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Jan 14 01:24:17.728612 tar[1935]: linux-amd64/LICENSE Jan 14 01:24:17.728612 tar[1935]: linux-amd64/helm Jan 14 01:24:17.734497 systemd[1]: motdgen.service: Deactivated successfully. Jan 14 01:24:17.734911 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 14 01:24:17.762808 jq[1952]: true Jan 14 01:24:17.778621 ntpd[1919]: ntpd 4.2.8p18@1.4062-o Tue Jan 13 21:47:10 UTC 2026 (1): Starting Jan 14 01:24:17.779559 ntpd[1919]: 14 Jan 01:24:17 ntpd[1919]: ntpd 4.2.8p18@1.4062-o Tue Jan 13 21:47:10 UTC 2026 (1): Starting Jan 14 01:24:17.779559 ntpd[1919]: 14 Jan 01:24:17 ntpd[1919]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 14 01:24:17.779559 ntpd[1919]: 14 Jan 01:24:17 ntpd[1919]: ---------------------------------------------------- Jan 14 01:24:17.779559 ntpd[1919]: 14 Jan 01:24:17 ntpd[1919]: ntp-4 is maintained by Network Time Foundation, Jan 14 01:24:17.779559 ntpd[1919]: 14 Jan 01:24:17 ntpd[1919]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 14 01:24:17.779559 ntpd[1919]: 14 Jan 01:24:17 ntpd[1919]: corporation. 
Support and training for ntp-4 are Jan 14 01:24:17.779559 ntpd[1919]: 14 Jan 01:24:17 ntpd[1919]: available at https://www.nwtime.org/support Jan 14 01:24:17.779559 ntpd[1919]: 14 Jan 01:24:17 ntpd[1919]: ---------------------------------------------------- Jan 14 01:24:17.778710 ntpd[1919]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 14 01:24:17.778720 ntpd[1919]: ---------------------------------------------------- Jan 14 01:24:17.778730 ntpd[1919]: ntp-4 is maintained by Network Time Foundation, Jan 14 01:24:17.778740 ntpd[1919]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 14 01:24:17.778750 ntpd[1919]: corporation. Support and training for ntp-4 are Jan 14 01:24:17.778760 ntpd[1919]: available at https://www.nwtime.org/support Jan 14 01:24:17.778770 ntpd[1919]: ---------------------------------------------------- Jan 14 01:24:17.791347 extend-filesystems[1915]: Resized partition /dev/nvme0n1p9 Jan 14 01:24:17.792469 ntpd[1919]: proto: precision = 0.061 usec (-24) Jan 14 01:24:17.794451 ntpd[1919]: 14 Jan 01:24:17 ntpd[1919]: proto: precision = 0.061 usec (-24) Jan 14 01:24:17.794451 ntpd[1919]: 14 Jan 01:24:17 ntpd[1919]: basedate set to 2026-01-01 Jan 14 01:24:17.794451 ntpd[1919]: 14 Jan 01:24:17 ntpd[1919]: gps base set to 2026-01-04 (week 2400) Jan 14 01:24:17.794451 ntpd[1919]: 14 Jan 01:24:17 ntpd[1919]: Listen and drop on 0 v6wildcard [::]:123 Jan 14 01:24:17.794451 ntpd[1919]: 14 Jan 01:24:17 ntpd[1919]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 14 01:24:17.794451 ntpd[1919]: 14 Jan 01:24:17 ntpd[1919]: Listen normally on 2 lo 127.0.0.1:123 Jan 14 01:24:17.794451 ntpd[1919]: 14 Jan 01:24:17 ntpd[1919]: Listen normally on 3 eth0 172.31.18.46:123 Jan 14 01:24:17.794451 ntpd[1919]: 14 Jan 01:24:17 ntpd[1919]: Listen normally on 4 lo [::1]:123 Jan 14 01:24:17.794451 ntpd[1919]: 14 Jan 01:24:17 ntpd[1919]: Listen normally on 5 eth0 [fe80::462:5cff:fefb:d08b%2]:123 Jan 14 01:24:17.794451 ntpd[1919]: 14 Jan 01:24:17 ntpd[1919]: Listening on routing socket on fd #22 for interface updates Jan 14 01:24:17.792848 ntpd[1919]: basedate set to 2026-01-01 Jan 14 01:24:17.792866 ntpd[1919]: gps base set to 2026-01-04 (week 2400) Jan 14 01:24:17.793010 ntpd[1919]: Listen and drop on 0 v6wildcard [::]:123 Jan 14 01:24:17.793044 ntpd[1919]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 14 01:24:17.793299 ntpd[1919]: Listen normally on 2 lo 127.0.0.1:123 Jan 14 01:24:17.793328 ntpd[1919]: Listen normally on 3 eth0 172.31.18.46:123 Jan 14 01:24:17.793358 ntpd[1919]: Listen normally on 4 lo [::1]:123 Jan 14 01:24:17.793386 ntpd[1919]: Listen normally on 5 eth0 [fe80::462:5cff:fefb:d08b%2]:123 Jan 14 01:24:17.793421 ntpd[1919]: Listening on routing socket on fd #22 for interface updates Jan 14 01:24:17.801894 ntpd[1919]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 14 01:24:17.805390 update_engine[1929]: I20260114 01:24:17.804713 1929 main.cc:92] Flatcar Update Engine starting Jan 14 01:24:17.805686 ntpd[1919]: 14 Jan 01:24:17 ntpd[1919]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 14 01:24:17.805686 ntpd[1919]: 14 Jan 01:24:17 ntpd[1919]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 14 01:24:17.801937 ntpd[1919]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 14 01:24:17.838604 extend-filesystems[1983]: resize2fs 1.47.3 (8-Jul-2025) Jan 14 01:24:17.826832 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
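Annotation: ntpd's "gps base set to 2026-01-04 (week 2400)" line above is plain calendar arithmetic: GPS time counts whole weeks from the GPS epoch, Sunday 1980-01-06, and the compiled-in base date lets the daemon resolve which era a week number belongs to. The week number itself is easy to verify with the standard library.

#!/usr/bin/env python3
"""Sketch: verify ntpd's "gps base set to 2026-01-04 (week 2400)" line above."""
from datetime import date, timedelta

GPS_EPOCH = date(1980, 1, 6)  # Sunday, start of GPS week 0
week = 2400

start = GPS_EPOCH + timedelta(weeks=week)
print(f"GPS week {week} starts on {start}")  # 2026-01-04
assert start == date(2026, 1, 4)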
Jan 14 01:24:17.853196 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 1617920 to 2604027 blocks Jan 14 01:24:17.825315 dbus-daemon[1912]: [system] SELinux support is enabled Jan 14 01:24:17.835345 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 14 01:24:17.837788 systemd[1]: Finished setup-oem.service - Setup OEM. Jan 14 01:24:17.844387 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Jan 14 01:24:17.845122 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 14 01:24:17.845153 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 14 01:24:17.845785 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 14 01:24:17.845808 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 14 01:24:17.876374 dbus-daemon[1912]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1548 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jan 14 01:24:17.884482 dbus-daemon[1912]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 14 01:24:17.892108 update_engine[1929]: I20260114 01:24:17.891331 1929 update_check_scheduler.cc:74] Next update check in 4m51s Jan 14 01:24:17.900022 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 2604027 Jan 14 01:24:17.897889 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Jan 14 01:24:17.899426 systemd[1]: Started update-engine.service - Update Engine. Jan 14 01:24:17.922187 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 14 01:24:17.927616 extend-filesystems[1983]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Jan 14 01:24:17.927616 extend-filesystems[1983]: old_desc_blocks = 1, new_desc_blocks = 2 Jan 14 01:24:17.927616 extend-filesystems[1983]: The filesystem on /dev/nvme0n1p9 is now 2604027 (4k) blocks long. Jan 14 01:24:17.952323 extend-filesystems[1915]: Resized filesystem in /dev/nvme0n1p9 Jan 14 01:24:17.929508 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 14 01:24:17.931340 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 14 01:24:17.963372 bash[2002]: Updated "/home/core/.ssh/authorized_keys" Jan 14 01:24:17.964137 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 14 01:24:17.973145 systemd[1]: Starting sshkeys.service... Jan 14 01:24:18.056375 coreos-metadata[1911]: Jan 14 01:24:18.056 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jan 14 01:24:18.062846 coreos-metadata[1911]: Jan 14 01:24:18.061 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Jan 14 01:24:18.069151 coreos-metadata[1911]: Jan 14 01:24:18.068 INFO Fetch successful Jan 14 01:24:18.069151 coreos-metadata[1911]: Jan 14 01:24:18.068 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Jan 14 01:24:18.081797 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. 
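Annotation: the ext4 messages above record an online grow of the root filesystem from 1617920 to 2604027 blocks at a 4 KiB block size (resize2fs also notes the jump from 1 to 2 block-group descriptor blocks). In byte terms the numbers work out as follows.

#!/usr/bin/env python3
"""Sketch: the arithmetic behind the ext4 online resize logged above."""
BLOCK = 4096
old_blocks, new_blocks = 1_617_920, 2_604_027

def gib(blocks: int) -> float:
    return blocks * BLOCK / 2**30

print(f"before: {gib(old_blocks):.2f} GiB")             # ~6.17 GiB
print(f"after:  {gib(new_blocks):.2f} GiB")             # ~9.93 GiB
print(f"growth: {gib(new_blocks - old_blocks):.2f} GiB")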
Jan 14 01:24:18.085582 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 14 01:24:18.090901 coreos-metadata[1911]: Jan 14 01:24:18.090 INFO Fetch successful Jan 14 01:24:18.090901 coreos-metadata[1911]: Jan 14 01:24:18.090 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Jan 14 01:24:18.091095 coreos-metadata[1911]: Jan 14 01:24:18.090 INFO Fetch successful Jan 14 01:24:18.091095 coreos-metadata[1911]: Jan 14 01:24:18.090 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Jan 14 01:24:18.096198 coreos-metadata[1911]: Jan 14 01:24:18.094 INFO Fetch successful Jan 14 01:24:18.096198 coreos-metadata[1911]: Jan 14 01:24:18.094 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Jan 14 01:24:18.100762 coreos-metadata[1911]: Jan 14 01:24:18.100 INFO Fetch failed with 404: resource not found Jan 14 01:24:18.100762 coreos-metadata[1911]: Jan 14 01:24:18.100 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Jan 14 01:24:18.106786 coreos-metadata[1911]: Jan 14 01:24:18.106 INFO Fetch successful Jan 14 01:24:18.106912 coreos-metadata[1911]: Jan 14 01:24:18.106 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Jan 14 01:24:18.114712 systemd-logind[1928]: Watching system buttons on /dev/input/event2 (Power Button) Jan 14 01:24:18.114748 systemd-logind[1928]: Watching system buttons on /dev/input/event3 (Sleep Button) Jan 14 01:24:18.114774 systemd-logind[1928]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 14 01:24:18.115187 coreos-metadata[1911]: Jan 14 01:24:18.115 INFO Fetch successful Jan 14 01:24:18.115239 coreos-metadata[1911]: Jan 14 01:24:18.115 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Jan 14 01:24:18.116226 systemd-logind[1928]: New seat seat0. Jan 14 01:24:18.117239 systemd[1]: Started systemd-logind.service - User Login Management. Jan 14 01:24:18.119340 coreos-metadata[1911]: Jan 14 01:24:18.119 INFO Fetch successful Jan 14 01:24:18.119340 coreos-metadata[1911]: Jan 14 01:24:18.119 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Jan 14 01:24:18.122761 coreos-metadata[1911]: Jan 14 01:24:18.122 INFO Fetch successful Jan 14 01:24:18.122854 coreos-metadata[1911]: Jan 14 01:24:18.122 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Jan 14 01:24:18.139443 coreos-metadata[1911]: Jan 14 01:24:18.139 INFO Fetch successful Jan 14 01:24:18.145917 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jan 14 01:24:18.149883 dbus-daemon[1912]: [system] Successfully activated service 'org.freedesktop.hostname1' Jan 14 01:24:18.161099 dbus-daemon[1912]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=2004 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jan 14 01:24:18.179848 systemd[1]: Starting polkit.service - Authorization Manager... Jan 14 01:24:18.204189 amazon-ssm-agent[2001]: Initializing new seelog logger Jan 14 01:24:18.204522 amazon-ssm-agent[2001]: New Seelog Logger Creation Complete Jan 14 01:24:18.204522 amazon-ssm-agent[2001]: 2026/01/14 01:24:18 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. 
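Annotation: the coreos-metadata fetches above follow the EC2 instance metadata service (IMDSv2) pattern: PUT a session token to /latest/api/token, then GET metadata paths with that token in a request header. A urllib-only sketch of the same exchange; it only works from inside an instance, and the 2021-01-03 version prefix is simply the one the agent used above.

#!/usr/bin/env python3
"""Sketch: the IMDSv2 exchange coreos-metadata is performing above."""
import urllib.request

IMDS = "http://169.254.169.254"

def imds_token(ttl: int = 300) -> str:
    # IMDSv2: a PUT to /latest/api/token returns a short-lived session token.
    req = urllib.request.Request(
        f"{IMDS}/latest/api/token", method="PUT",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": str(ttl)})
    with urllib.request.urlopen(req, timeout=2) as resp:
        return resp.read().decode()

def imds_get(path: str, token: str) -> str:
    req = urllib.request.Request(
        f"{IMDS}{path}", headers={"X-aws-ec2-metadata-token": token})
    with urllib.request.urlopen(req, timeout=2) as resp:
        return resp.read().decode()

if __name__ == "__main__":
    token = imds_token()
    # Same paths the agent fetched above (2021-01-03 metadata version).
    for path in ("/2021-01-03/meta-data/instance-id",
                 "/2021-01-03/meta-data/instance-type",
                 "/2021-01-03/meta-data/local-ipv4"):
        print(path, "->", imds_get(path, token))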
Jan 14 01:24:18.204522 amazon-ssm-agent[2001]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 14 01:24:18.206192 amazon-ssm-agent[2001]: 2026/01/14 01:24:18 processing appconfig overrides Jan 14 01:24:18.206582 amazon-ssm-agent[2001]: 2026/01/14 01:24:18 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 14 01:24:18.206582 amazon-ssm-agent[2001]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 14 01:24:18.206582 amazon-ssm-agent[2001]: 2026/01/14 01:24:18 processing appconfig overrides Jan 14 01:24:18.208780 amazon-ssm-agent[2001]: 2026/01/14 01:24:18 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 14 01:24:18.208780 amazon-ssm-agent[2001]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 14 01:24:18.208780 amazon-ssm-agent[2001]: 2026/01/14 01:24:18 processing appconfig overrides Jan 14 01:24:18.208780 amazon-ssm-agent[2001]: 2026-01-14 01:24:18.2063 INFO Proxy environment variables: Jan 14 01:24:18.215797 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 14 01:24:18.217049 amazon-ssm-agent[2001]: 2026/01/14 01:24:18 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 14 01:24:18.217049 amazon-ssm-agent[2001]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 14 01:24:18.217049 amazon-ssm-agent[2001]: 2026/01/14 01:24:18 processing appconfig overrides Jan 14 01:24:18.218779 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 14 01:24:18.326570 amazon-ssm-agent[2001]: 2026-01-14 01:24:18.2064 INFO https_proxy: Jan 14 01:24:18.430428 amazon-ssm-agent[2001]: 2026-01-14 01:24:18.2064 INFO http_proxy: Jan 14 01:24:18.442200 coreos-metadata[2023]: Jan 14 01:24:18.434 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jan 14 01:24:18.443281 coreos-metadata[2023]: Jan 14 01:24:18.443 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Jan 14 01:24:18.446520 coreos-metadata[2023]: Jan 14 01:24:18.446 INFO Fetch successful Jan 14 01:24:18.446520 coreos-metadata[2023]: Jan 14 01:24:18.446 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Jan 14 01:24:18.448847 coreos-metadata[2023]: Jan 14 01:24:18.448 INFO Fetch successful Jan 14 01:24:18.454815 polkitd[2033]: Started polkitd version 126 Jan 14 01:24:18.461132 unknown[2023]: wrote ssh authorized keys file for user: core Jan 14 01:24:18.476908 polkitd[2033]: Loading rules from directory /etc/polkit-1/rules.d Jan 14 01:24:18.477437 polkitd[2033]: Loading rules from directory /run/polkit-1/rules.d Jan 14 01:24:18.477495 polkitd[2033]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Jan 14 01:24:18.477888 polkitd[2033]: Loading rules from directory /usr/local/share/polkit-1/rules.d Jan 14 01:24:18.477916 polkitd[2033]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Jan 14 01:24:18.477974 polkitd[2033]: Loading rules from directory /usr/share/polkit-1/rules.d Jan 14 01:24:18.491265 polkitd[2033]: Finished loading, compiling and executing 2 rules Jan 14 01:24:18.496316 systemd[1]: Started polkit.service - Authorization Manager. 
Jan 14 01:24:18.515611 dbus-daemon[1912]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jan 14 01:24:18.519702 polkitd[2033]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jan 14 01:24:18.528137 amazon-ssm-agent[2001]: 2026-01-14 01:24:18.2064 INFO no_proxy: Jan 14 01:24:18.589187 update-ssh-keys[2073]: Updated "/home/core/.ssh/authorized_keys" Jan 14 01:24:18.590600 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 14 01:24:18.601276 systemd[1]: Finished sshkeys.service. Jan 14 01:24:18.634194 amazon-ssm-agent[2001]: 2026-01-14 01:24:18.2065 INFO Checking if agent identity type OnPrem can be assumed Jan 14 01:24:18.653555 systemd-hostnamed[2004]: Hostname set to (transient) Jan 14 01:24:18.653556 systemd-resolved[1518]: System hostname changed to 'ip-172-31-18-46'. Jan 14 01:24:18.693231 sshd_keygen[1943]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 14 01:24:18.735124 amazon-ssm-agent[2001]: 2026-01-14 01:24:18.2067 INFO Checking if agent identity type EC2 can be assumed Jan 14 01:24:18.823845 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 14 01:24:18.829036 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 14 01:24:18.835568 locksmithd[2005]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 14 01:24:18.836526 amazon-ssm-agent[2001]: 2026-01-14 01:24:18.5277 INFO Agent will take identity from EC2 Jan 14 01:24:18.876358 systemd[1]: issuegen.service: Deactivated successfully. Jan 14 01:24:18.876737 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 14 01:24:18.878718 containerd[1962]: time="2026-01-14T01:24:18Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jan 14 01:24:18.887197 containerd[1962]: time="2026-01-14T01:24:18.885966737Z" level=info msg="starting containerd" revision=fcd43222d6b07379a4be9786bda52438f0dd16a1 version=v2.1.5 Jan 14 01:24:18.886270 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 14 01:24:18.939192 amazon-ssm-agent[2001]: 2026-01-14 01:24:18.5307 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.3.0.0 Jan 14 01:24:18.952709 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 14 01:24:18.961016 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 14 01:24:18.967909 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 14 01:24:18.969526 systemd[1]: Reached target getty.target - Login Prompts. 
Jan 14 01:24:19.001515 containerd[1962]: time="2026-01-14T01:24:19.001455093Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="13.277µs" Jan 14 01:24:19.001515 containerd[1962]: time="2026-01-14T01:24:19.001508486Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jan 14 01:24:19.001663 containerd[1962]: time="2026-01-14T01:24:19.001561675Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jan 14 01:24:19.001663 containerd[1962]: time="2026-01-14T01:24:19.001578672Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jan 14 01:24:19.002227 containerd[1962]: time="2026-01-14T01:24:19.001786398Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jan 14 01:24:19.002227 containerd[1962]: time="2026-01-14T01:24:19.001821320Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 14 01:24:19.002227 containerd[1962]: time="2026-01-14T01:24:19.001896328Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 14 01:24:19.002227 containerd[1962]: time="2026-01-14T01:24:19.001912441Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 14 01:24:19.002227 containerd[1962]: time="2026-01-14T01:24:19.002210793Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 14 01:24:19.002428 containerd[1962]: time="2026-01-14T01:24:19.002235051Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 14 01:24:19.002428 containerd[1962]: time="2026-01-14T01:24:19.002254346Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 14 01:24:19.002428 containerd[1962]: time="2026-01-14T01:24:19.002266780Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 14 01:24:19.002541 containerd[1962]: time="2026-01-14T01:24:19.002485642Z" level=info msg="skip loading plugin" error="EROFS unsupported, please `modprobe erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 14 01:24:19.002541 containerd[1962]: time="2026-01-14T01:24:19.002502897Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jan 14 01:24:19.002617 containerd[1962]: time="2026-01-14T01:24:19.002601876Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jan 14 01:24:19.003368 containerd[1962]: time="2026-01-14T01:24:19.002855661Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 14 01:24:19.003368 containerd[1962]: time="2026-01-14T01:24:19.002900312Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs 
type=io.containerd.snapshotter.v1 Jan 14 01:24:19.003368 containerd[1962]: time="2026-01-14T01:24:19.002915936Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jan 14 01:24:19.003368 containerd[1962]: time="2026-01-14T01:24:19.002958465Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jan 14 01:24:19.005182 containerd[1962]: time="2026-01-14T01:24:19.005099257Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jan 14 01:24:19.005278 containerd[1962]: time="2026-01-14T01:24:19.005229158Z" level=info msg="metadata content store policy set" policy=shared Jan 14 01:24:19.020096 containerd[1962]: time="2026-01-14T01:24:19.020041968Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jan 14 01:24:19.020237 containerd[1962]: time="2026-01-14T01:24:19.020137108Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 14 01:24:19.020320 containerd[1962]: time="2026-01-14T01:24:19.020294325Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 14 01:24:19.020376 containerd[1962]: time="2026-01-14T01:24:19.020333305Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jan 14 01:24:19.020376 containerd[1962]: time="2026-01-14T01:24:19.020355814Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jan 14 01:24:19.020452 containerd[1962]: time="2026-01-14T01:24:19.020373755Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jan 14 01:24:19.020452 containerd[1962]: time="2026-01-14T01:24:19.020392237Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jan 14 01:24:19.020452 containerd[1962]: time="2026-01-14T01:24:19.020407113Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jan 14 01:24:19.020452 containerd[1962]: time="2026-01-14T01:24:19.020423991Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jan 14 01:24:19.020452 containerd[1962]: time="2026-01-14T01:24:19.020443691Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jan 14 01:24:19.020635 containerd[1962]: time="2026-01-14T01:24:19.020467649Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jan 14 01:24:19.020635 containerd[1962]: time="2026-01-14T01:24:19.020485511Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jan 14 01:24:19.020635 containerd[1962]: time="2026-01-14T01:24:19.020508955Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jan 14 01:24:19.020635 containerd[1962]: time="2026-01-14T01:24:19.020527651Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jan 14 01:24:19.020759 containerd[1962]: time="2026-01-14T01:24:19.020680101Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jan 14 01:24:19.020759 
containerd[1962]: time="2026-01-14T01:24:19.020713123Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jan 14 01:24:19.020759 containerd[1962]: time="2026-01-14T01:24:19.020734309Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jan 14 01:24:19.020759 containerd[1962]: time="2026-01-14T01:24:19.020749865Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jan 14 01:24:19.020900 containerd[1962]: time="2026-01-14T01:24:19.020766423Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jan 14 01:24:19.020900 containerd[1962]: time="2026-01-14T01:24:19.020782491Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jan 14 01:24:19.020900 containerd[1962]: time="2026-01-14T01:24:19.020799856Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jan 14 01:24:19.020900 containerd[1962]: time="2026-01-14T01:24:19.020817275Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jan 14 01:24:19.020900 containerd[1962]: time="2026-01-14T01:24:19.020834509Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jan 14 01:24:19.020900 containerd[1962]: time="2026-01-14T01:24:19.020850858Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jan 14 01:24:19.020900 containerd[1962]: time="2026-01-14T01:24:19.020872032Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jan 14 01:24:19.021130 containerd[1962]: time="2026-01-14T01:24:19.020921754Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jan 14 01:24:19.021130 containerd[1962]: time="2026-01-14T01:24:19.020986032Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jan 14 01:24:19.021130 containerd[1962]: time="2026-01-14T01:24:19.021003315Z" level=info msg="Start snapshots syncer" Jan 14 01:24:19.021130 containerd[1962]: time="2026-01-14T01:24:19.021054935Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jan 14 01:24:19.023392 containerd[1962]: time="2026-01-14T01:24:19.021612937Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jan 14 01:24:19.023392 containerd[1962]: time="2026-01-14T01:24:19.021685755Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jan 14 01:24:19.024309 containerd[1962]: time="2026-01-14T01:24:19.024214231Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jan 14 01:24:19.024506 containerd[1962]: time="2026-01-14T01:24:19.024460194Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jan 14 01:24:19.024558 containerd[1962]: time="2026-01-14T01:24:19.024512115Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jan 14 01:24:19.024558 containerd[1962]: time="2026-01-14T01:24:19.024532040Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jan 14 01:24:19.024558 containerd[1962]: time="2026-01-14T01:24:19.024553485Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jan 14 01:24:19.024658 containerd[1962]: time="2026-01-14T01:24:19.024572655Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jan 14 01:24:19.024658 containerd[1962]: time="2026-01-14T01:24:19.024592314Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jan 14 01:24:19.024658 containerd[1962]: time="2026-01-14T01:24:19.024609605Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jan 14 01:24:19.024658 containerd[1962]: time="2026-01-14T01:24:19.024625178Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jan 14 
01:24:19.024658 containerd[1962]: time="2026-01-14T01:24:19.024642680Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jan 14 01:24:19.024825 containerd[1962]: time="2026-01-14T01:24:19.024703176Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 14 01:24:19.024825 containerd[1962]: time="2026-01-14T01:24:19.024724944Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 14 01:24:19.024825 containerd[1962]: time="2026-01-14T01:24:19.024809796Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 14 01:24:19.024932 containerd[1962]: time="2026-01-14T01:24:19.024828925Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 14 01:24:19.024932 containerd[1962]: time="2026-01-14T01:24:19.024842175Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jan 14 01:24:19.024932 containerd[1962]: time="2026-01-14T01:24:19.024858041Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jan 14 01:24:19.024932 containerd[1962]: time="2026-01-14T01:24:19.024874534Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jan 14 01:24:19.024932 containerd[1962]: time="2026-01-14T01:24:19.024891904Z" level=info msg="runtime interface created" Jan 14 01:24:19.024932 containerd[1962]: time="2026-01-14T01:24:19.024900239Z" level=info msg="created NRI interface" Jan 14 01:24:19.024932 containerd[1962]: time="2026-01-14T01:24:19.024912416Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jan 14 01:24:19.025154 containerd[1962]: time="2026-01-14T01:24:19.024933668Z" level=info msg="Connect containerd service" Jan 14 01:24:19.025154 containerd[1962]: time="2026-01-14T01:24:19.024965979Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 14 01:24:19.027509 containerd[1962]: time="2026-01-14T01:24:19.027403731Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 14 01:24:19.035394 amazon-ssm-agent[2001]: 2026-01-14 01:24:18.5307 INFO [amazon-ssm-agent] OS: linux, Arch: amd64 Jan 14 01:24:19.112839 amazon-ssm-agent[2001]: 2026/01/14 01:24:19 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 14 01:24:19.112839 amazon-ssm-agent[2001]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 14 01:24:19.113010 amazon-ssm-agent[2001]: 2026/01/14 01:24:19 processing appconfig overrides Jan 14 01:24:19.135082 amazon-ssm-agent[2001]: 2026-01-14 01:24:18.5307 INFO [amazon-ssm-agent] Starting Core Agent Jan 14 01:24:19.148977 amazon-ssm-agent[2001]: 2026-01-14 01:24:18.5307 INFO [amazon-ssm-agent] Registrar detected. 
Attempting registration Jan 14 01:24:19.148977 amazon-ssm-agent[2001]: 2026-01-14 01:24:18.5308 INFO [Registrar] Starting registrar module Jan 14 01:24:19.148977 amazon-ssm-agent[2001]: 2026-01-14 01:24:18.5363 INFO [EC2Identity] Checking disk for registration info Jan 14 01:24:19.148977 amazon-ssm-agent[2001]: 2026-01-14 01:24:18.5366 INFO [EC2Identity] No registration info found for ec2 instance, attempting registration Jan 14 01:24:19.148977 amazon-ssm-agent[2001]: 2026-01-14 01:24:18.5366 INFO [EC2Identity] Generating registration keypair Jan 14 01:24:19.149249 amazon-ssm-agent[2001]: 2026-01-14 01:24:19.0418 INFO [EC2Identity] Checking write access before registering Jan 14 01:24:19.149249 amazon-ssm-agent[2001]: 2026-01-14 01:24:19.0427 INFO [EC2Identity] Registering EC2 instance with Systems Manager Jan 14 01:24:19.149249 amazon-ssm-agent[2001]: 2026-01-14 01:24:19.1126 INFO [EC2Identity] EC2 registration was successful. Jan 14 01:24:19.149249 amazon-ssm-agent[2001]: 2026-01-14 01:24:19.1126 INFO [amazon-ssm-agent] Registration attempted. Resuming core agent startup. Jan 14 01:24:19.149249 amazon-ssm-agent[2001]: 2026-01-14 01:24:19.1127 INFO [CredentialRefresher] credentialRefresher has started Jan 14 01:24:19.149249 amazon-ssm-agent[2001]: 2026-01-14 01:24:19.1127 INFO [CredentialRefresher] Starting credentials refresher loop Jan 14 01:24:19.149249 amazon-ssm-agent[2001]: 2026-01-14 01:24:19.1486 INFO EC2RoleProvider Successfully connected with instance profile role credentials Jan 14 01:24:19.149249 amazon-ssm-agent[2001]: 2026-01-14 01:24:19.1489 INFO [CredentialRefresher] Credentials ready Jan 14 01:24:19.233636 amazon-ssm-agent[2001]: 2026-01-14 01:24:19.1490 INFO [CredentialRefresher] Next credential rotation will be in 29.999993466366668 minutes Jan 14 01:24:19.239234 tar[1935]: linux-amd64/README.md Jan 14 01:24:19.259785 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 14 01:24:19.346809 containerd[1962]: time="2026-01-14T01:24:19.346750094Z" level=info msg="Start subscribing containerd event" Jan 14 01:24:19.346809 containerd[1962]: time="2026-01-14T01:24:19.346813391Z" level=info msg="Start recovering state" Jan 14 01:24:19.346954 containerd[1962]: time="2026-01-14T01:24:19.346901825Z" level=info msg="Start event monitor" Jan 14 01:24:19.346954 containerd[1962]: time="2026-01-14T01:24:19.346914188Z" level=info msg="Start cni network conf syncer for default" Jan 14 01:24:19.346954 containerd[1962]: time="2026-01-14T01:24:19.346922109Z" level=info msg="Start streaming server" Jan 14 01:24:19.346954 containerd[1962]: time="2026-01-14T01:24:19.346930324Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jan 14 01:24:19.346954 containerd[1962]: time="2026-01-14T01:24:19.346937292Z" level=info msg="runtime interface starting up..." Jan 14 01:24:19.346954 containerd[1962]: time="2026-01-14T01:24:19.346942628Z" level=info msg="starting plugins..." Jan 14 01:24:19.346954 containerd[1962]: time="2026-01-14T01:24:19.346953641Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jan 14 01:24:19.347100 containerd[1962]: time="2026-01-14T01:24:19.346766309Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 14 01:24:19.347100 containerd[1962]: time="2026-01-14T01:24:19.347074575Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Jan 14 01:24:19.347316 containerd[1962]: time="2026-01-14T01:24:19.347127882Z" level=info msg="containerd successfully booted in 0.469269s" Jan 14 01:24:19.347374 systemd[1]: Started containerd.service - containerd container runtime. Jan 14 01:24:20.163733 amazon-ssm-agent[2001]: 2026-01-14 01:24:20.1636 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Jan 14 01:24:20.264327 amazon-ssm-agent[2001]: 2026-01-14 01:24:20.1668 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2192) started Jan 14 01:24:20.364989 amazon-ssm-agent[2001]: 2026-01-14 01:24:20.1669 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Jan 14 01:24:22.774920 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 01:24:22.777096 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 14 01:24:22.778642 systemd[1]: Startup finished in 3.913s (kernel) + 11.473s (initrd) + 13.266s (userspace) = 28.654s. Jan 14 01:24:22.789809 (kubelet)[2209]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 01:24:25.822699 systemd-resolved[1518]: Clock change detected. Flushing caches. Jan 14 01:24:25.901742 kubelet[2209]: E0114 01:24:25.901650 2209 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 01:24:25.904515 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 01:24:25.904744 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 14 01:24:25.905424 systemd[1]: kubelet.service: Consumed 1.086s CPU time, 270.2M memory peak. Jan 14 01:24:25.955196 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 14 01:24:25.956487 systemd[1]: Started sshd@0-172.31.18.46:22-4.153.228.146:48890.service - OpenSSH per-connection server daemon (4.153.228.146:48890). Jan 14 01:24:26.564695 sshd[2221]: Accepted publickey for core from 4.153.228.146 port 48890 ssh2: RSA SHA256:ES3aJcA+M+pl5u1hk2HWRqxW4DXd1pPYtNeRk1B3mrI Jan 14 01:24:26.566967 sshd-session[2221]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:24:26.575098 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 14 01:24:26.576974 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 14 01:24:26.586585 systemd-logind[1928]: New session 1 of user core. Jan 14 01:24:26.600966 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 14 01:24:26.605025 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 14 01:24:26.622625 (systemd)[2227]: pam_unix(systemd-user:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:24:26.625600 systemd-logind[1928]: New session 2 of user core. Jan 14 01:24:26.805162 systemd[2227]: Queued start job for default target default.target. Jan 14 01:24:26.811707 systemd[2227]: Created slice app.slice - User Application Slice. 
Jan 14 01:24:26.811745 systemd[2227]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories. Jan 14 01:24:26.811761 systemd[2227]: Reached target paths.target - Paths. Jan 14 01:24:26.811812 systemd[2227]: Reached target timers.target - Timers. Jan 14 01:24:26.813201 systemd[2227]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 14 01:24:26.815704 systemd[2227]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories... Jan 14 01:24:26.839675 systemd[2227]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 14 01:24:26.839921 systemd[2227]: Reached target sockets.target - Sockets. Jan 14 01:24:26.840574 systemd[2227]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories. Jan 14 01:24:26.840684 systemd[2227]: Reached target basic.target - Basic System. Jan 14 01:24:26.840752 systemd[2227]: Reached target default.target - Main User Target. Jan 14 01:24:26.840782 systemd[2227]: Startup finished in 209ms. Jan 14 01:24:26.840872 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 14 01:24:26.848025 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 14 01:24:27.100338 systemd[1]: Started sshd@1-172.31.18.46:22-4.153.228.146:48896.service - OpenSSH per-connection server daemon (4.153.228.146:48896). Jan 14 01:24:27.531544 sshd[2241]: Accepted publickey for core from 4.153.228.146 port 48896 ssh2: RSA SHA256:ES3aJcA+M+pl5u1hk2HWRqxW4DXd1pPYtNeRk1B3mrI Jan 14 01:24:27.532101 sshd-session[2241]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:24:27.539002 systemd-logind[1928]: New session 3 of user core. Jan 14 01:24:27.544954 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 14 01:24:27.773245 sshd[2245]: Connection closed by 4.153.228.146 port 48896 Jan 14 01:24:27.774720 sshd-session[2241]: pam_unix(sshd:session): session closed for user core Jan 14 01:24:27.779476 systemd[1]: sshd@1-172.31.18.46:22-4.153.228.146:48896.service: Deactivated successfully. Jan 14 01:24:27.781901 systemd[1]: session-3.scope: Deactivated successfully. Jan 14 01:24:27.783372 systemd-logind[1928]: Session 3 logged out. Waiting for processes to exit. Jan 14 01:24:27.785304 systemd-logind[1928]: Removed session 3. Jan 14 01:24:27.876676 systemd[1]: Started sshd@2-172.31.18.46:22-4.153.228.146:48902.service - OpenSSH per-connection server daemon (4.153.228.146:48902). Jan 14 01:24:28.328253 sshd[2251]: Accepted publickey for core from 4.153.228.146 port 48902 ssh2: RSA SHA256:ES3aJcA+M+pl5u1hk2HWRqxW4DXd1pPYtNeRk1B3mrI Jan 14 01:24:28.328937 sshd-session[2251]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:24:28.336604 systemd-logind[1928]: New session 4 of user core. Jan 14 01:24:28.345837 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 14 01:24:28.570829 sshd[2255]: Connection closed by 4.153.228.146 port 48902 Jan 14 01:24:28.572814 sshd-session[2251]: pam_unix(sshd:session): session closed for user core Jan 14 01:24:28.577854 systemd[1]: sshd@2-172.31.18.46:22-4.153.228.146:48902.service: Deactivated successfully. Jan 14 01:24:28.580079 systemd[1]: session-4.scope: Deactivated successfully. Jan 14 01:24:28.582420 systemd-logind[1928]: Session 4 logged out. Waiting for processes to exit. Jan 14 01:24:28.583491 systemd-logind[1928]: Removed session 4. 
Jan 14 01:24:28.670637 systemd[1]: Started sshd@3-172.31.18.46:22-4.153.228.146:48914.service - OpenSSH per-connection server daemon (4.153.228.146:48914). Jan 14 01:24:29.135513 sshd[2261]: Accepted publickey for core from 4.153.228.146 port 48914 ssh2: RSA SHA256:ES3aJcA+M+pl5u1hk2HWRqxW4DXd1pPYtNeRk1B3mrI Jan 14 01:24:29.137086 sshd-session[2261]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:24:29.144656 systemd-logind[1928]: New session 5 of user core. Jan 14 01:24:29.153884 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 14 01:24:29.385705 sshd[2265]: Connection closed by 4.153.228.146 port 48914 Jan 14 01:24:29.386834 sshd-session[2261]: pam_unix(sshd:session): session closed for user core Jan 14 01:24:29.391689 systemd-logind[1928]: Session 5 logged out. Waiting for processes to exit. Jan 14 01:24:29.392128 systemd[1]: sshd@3-172.31.18.46:22-4.153.228.146:48914.service: Deactivated successfully. Jan 14 01:24:29.394836 systemd[1]: session-5.scope: Deactivated successfully. Jan 14 01:24:29.396981 systemd-logind[1928]: Removed session 5. Jan 14 01:24:29.471895 systemd[1]: Started sshd@4-172.31.18.46:22-4.153.228.146:48918.service - OpenSSH per-connection server daemon (4.153.228.146:48918). Jan 14 01:24:29.908959 sshd[2271]: Accepted publickey for core from 4.153.228.146 port 48918 ssh2: RSA SHA256:ES3aJcA+M+pl5u1hk2HWRqxW4DXd1pPYtNeRk1B3mrI Jan 14 01:24:29.910744 sshd-session[2271]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:24:29.916697 systemd-logind[1928]: New session 6 of user core. Jan 14 01:24:29.922859 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 14 01:24:30.214720 sudo[2276]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 14 01:24:30.215037 sudo[2276]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 14 01:24:30.225865 sudo[2276]: pam_unix(sudo:session): session closed for user root Jan 14 01:24:30.303345 sshd[2275]: Connection closed by 4.153.228.146 port 48918 Jan 14 01:24:30.304800 sshd-session[2271]: pam_unix(sshd:session): session closed for user core Jan 14 01:24:30.310833 systemd-logind[1928]: Session 6 logged out. Waiting for processes to exit. Jan 14 01:24:30.311114 systemd[1]: sshd@4-172.31.18.46:22-4.153.228.146:48918.service: Deactivated successfully. Jan 14 01:24:30.313178 systemd[1]: session-6.scope: Deactivated successfully. Jan 14 01:24:30.315103 systemd-logind[1928]: Removed session 6. Jan 14 01:24:30.395822 systemd[1]: Started sshd@5-172.31.18.46:22-4.153.228.146:48934.service - OpenSSH per-connection server daemon (4.153.228.146:48934). Jan 14 01:24:30.827831 sshd[2283]: Accepted publickey for core from 4.153.228.146 port 48934 ssh2: RSA SHA256:ES3aJcA+M+pl5u1hk2HWRqxW4DXd1pPYtNeRk1B3mrI Jan 14 01:24:30.829370 sshd-session[2283]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:24:30.836179 systemd-logind[1928]: New session 7 of user core. Jan 14 01:24:30.842891 systemd[1]: Started session-7.scope - Session 7 of User core. 
Jan 14 01:24:30.990689 sudo[2289]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 14 01:24:30.991007 sudo[2289]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 14 01:24:30.993916 sudo[2289]: pam_unix(sudo:session): session closed for user root Jan 14 01:24:31.001252 sudo[2288]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 14 01:24:31.001695 sudo[2288]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 14 01:24:31.011279 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 14 01:24:31.065528 kernel: kauditd_printk_skb: 42 callbacks suppressed Jan 14 01:24:31.065691 kernel: audit: type=1305 audit(1768353871.061:241): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jan 14 01:24:31.061000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jan 14 01:24:31.064419 systemd[1]: audit-rules.service: Deactivated successfully. Jan 14 01:24:31.065897 augenrules[2313]: No rules Jan 14 01:24:31.064766 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 14 01:24:31.070637 kernel: audit: type=1300 audit(1768353871.061:241): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffc87ed4610 a2=420 a3=0 items=0 ppid=2294 pid=2313 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:31.061000 audit[2313]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffc87ed4610 a2=420 a3=0 items=0 ppid=2294 pid=2313 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:31.070969 sudo[2288]: pam_unix(sudo:session): session closed for user root Jan 14 01:24:31.061000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 14 01:24:31.063000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:24:31.076231 kernel: audit: type=1327 audit(1768353871.061:241): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 14 01:24:31.076320 kernel: audit: type=1130 audit(1768353871.063:242): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:24:31.076353 kernel: audit: type=1131 audit(1768353871.063:243): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:24:31.063000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:24:31.069000 audit[2288]: USER_END pid=2288 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 01:24:31.082396 kernel: audit: type=1106 audit(1768353871.069:244): pid=2288 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 01:24:31.069000 audit[2288]: CRED_DISP pid=2288 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 01:24:31.082688 kernel: audit: type=1104 audit(1768353871.069:245): pid=2288 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 01:24:31.148387 sshd[2287]: Connection closed by 4.153.228.146 port 48934 Jan 14 01:24:31.149715 sshd-session[2283]: pam_unix(sshd:session): session closed for user core Jan 14 01:24:31.149000 audit[2283]: USER_END pid=2283 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:24:31.157428 systemd[1]: sshd@5-172.31.18.46:22-4.153.228.146:48934.service: Deactivated successfully. Jan 14 01:24:31.157656 kernel: audit: type=1106 audit(1768353871.149:246): pid=2283 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:24:31.149000 audit[2283]: CRED_DISP pid=2283 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:24:31.160859 systemd[1]: session-7.scope: Deactivated successfully. Jan 14 01:24:31.162220 systemd-logind[1928]: Session 7 logged out. Waiting for processes to exit. Jan 14 01:24:31.167728 kernel: audit: type=1104 audit(1768353871.149:247): pid=2283 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:24:31.167834 kernel: audit: type=1131 audit(1768353871.156:248): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-172.31.18.46:22-4.153.228.146:48934 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:24:31.156000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-172.31.18.46:22-4.153.228.146:48934 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:24:31.166118 systemd-logind[1928]: Removed session 7. 
Jan 14 01:24:31.246000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-172.31.18.46:22-4.153.228.146:48940 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:24:31.246694 systemd[1]: Started sshd@6-172.31.18.46:22-4.153.228.146:48940.service - OpenSSH per-connection server daemon (4.153.228.146:48940). Jan 14 01:24:31.681000 audit[2322]: USER_ACCT pid=2322 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:24:31.683292 sshd[2322]: Accepted publickey for core from 4.153.228.146 port 48940 ssh2: RSA SHA256:ES3aJcA+M+pl5u1hk2HWRqxW4DXd1pPYtNeRk1B3mrI Jan 14 01:24:31.682000 audit[2322]: CRED_ACQ pid=2322 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:24:31.682000 audit[2322]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff91af0ea0 a2=3 a3=0 items=0 ppid=1 pid=2322 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:31.682000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:24:31.684620 sshd-session[2322]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:24:31.690577 systemd-logind[1928]: New session 8 of user core. Jan 14 01:24:31.697829 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 14 01:24:31.700000 audit[2322]: USER_START pid=2322 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:24:31.702000 audit[2326]: CRED_ACQ pid=2326 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:24:31.841000 audit[2327]: USER_ACCT pid=2327 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 01:24:31.842785 sudo[2327]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 14 01:24:31.841000 audit[2327]: CRED_REFR pid=2327 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 01:24:31.843101 sudo[2327]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 14 01:24:31.841000 audit[2327]: USER_START pid=2327 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jan 14 01:24:33.185909 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 14 01:24:33.201142 (dockerd)[2347]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 14 01:24:34.392391 dockerd[2347]: time="2026-01-14T01:24:34.391701648Z" level=info msg="Starting up" Jan 14 01:24:34.393527 dockerd[2347]: time="2026-01-14T01:24:34.393481837Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jan 14 01:24:34.406215 dockerd[2347]: time="2026-01-14T01:24:34.406173828Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Jan 14 01:24:34.450813 systemd[1]: var-lib-docker-metacopy\x2dcheck2819817005-merged.mount: Deactivated successfully. Jan 14 01:24:34.473043 dockerd[2347]: time="2026-01-14T01:24:34.472820904Z" level=info msg="Loading containers: start." Jan 14 01:24:34.487578 kernel: Initializing XFRM netlink socket Jan 14 01:24:34.628000 audit[2395]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=2395 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:24:34.628000 audit[2395]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffed4256d60 a2=0 a3=0 items=0 ppid=2347 pid=2395 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:34.628000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 14 01:24:34.631000 audit[2397]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=2397 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:24:34.631000 audit[2397]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffcecb2c370 a2=0 a3=0 items=0 ppid=2347 pid=2397 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:34.631000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 14 01:24:34.633000 audit[2399]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=2399 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:24:34.633000 audit[2399]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdf8d62920 a2=0 a3=0 items=0 ppid=2347 pid=2399 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:34.633000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 14 01:24:34.635000 audit[2401]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=2401 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:24:34.635000 audit[2401]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc975eaa90 a2=0 a3=0 items=0 ppid=2347 pid=2401 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:34.635000 
audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 14 01:24:34.638000 audit[2403]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_chain pid=2403 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:24:34.638000 audit[2403]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffd083757d0 a2=0 a3=0 items=0 ppid=2347 pid=2403 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:34.638000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 14 01:24:34.640000 audit[2405]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_chain pid=2405 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:24:34.640000 audit[2405]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffd29069320 a2=0 a3=0 items=0 ppid=2347 pid=2405 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:34.640000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 14 01:24:34.643000 audit[2407]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=2407 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:24:34.643000 audit[2407]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffd83634400 a2=0 a3=0 items=0 ppid=2347 pid=2407 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:34.643000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 14 01:24:34.645000 audit[2409]: NETFILTER_CFG table=nat:9 family=2 entries=2 op=nft_register_chain pid=2409 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:24:34.645000 audit[2409]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7ffe03f755c0 a2=0 a3=0 items=0 ppid=2347 pid=2409 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:34.645000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 14 01:24:34.730000 audit[2412]: NETFILTER_CFG table=nat:10 family=2 entries=2 op=nft_register_chain pid=2412 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:24:34.730000 audit[2412]: SYSCALL arch=c000003e syscall=46 success=yes exit=472 a0=3 a1=7fffb744faf0 a2=0 a3=0 items=0 ppid=2347 pid=2412 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:34.730000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Jan 14 01:24:34.733000 audit[2414]: NETFILTER_CFG table=filter:11 family=2 entries=2 op=nft_register_chain pid=2414 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:24:34.733000 audit[2414]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffcad5defb0 a2=0 a3=0 items=0 ppid=2347 pid=2414 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:34.733000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 14 01:24:34.736000 audit[2416]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=2416 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:24:34.736000 audit[2416]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7fff09e33ad0 a2=0 a3=0 items=0 ppid=2347 pid=2416 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:34.736000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 14 01:24:34.738000 audit[2418]: NETFILTER_CFG table=filter:13 family=2 entries=1 op=nft_register_rule pid=2418 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:24:34.738000 audit[2418]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7ffd2cfe1910 a2=0 a3=0 items=0 ppid=2347 pid=2418 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:34.738000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 14 01:24:34.741000 audit[2420]: NETFILTER_CFG table=filter:14 family=2 entries=1 op=nft_register_rule pid=2420 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:24:34.741000 audit[2420]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7ffe3a8b4170 a2=0 a3=0 items=0 ppid=2347 pid=2420 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:34.741000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 14 01:24:34.820000 audit[2450]: NETFILTER_CFG table=nat:15 family=10 entries=2 op=nft_register_chain pid=2450 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:24:34.820000 audit[2450]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffc62f221f0 a2=0 a3=0 items=0 ppid=2347 pid=2450 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:34.820000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 14 01:24:34.822000 audit[2452]: NETFILTER_CFG table=filter:16 family=10 entries=2 op=nft_register_chain pid=2452 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:24:34.822000 audit[2452]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffc80e9cd20 a2=0 a3=0 items=0 ppid=2347 pid=2452 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:34.822000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 14 01:24:34.824000 audit[2454]: NETFILTER_CFG table=filter:17 family=10 entries=1 op=nft_register_chain pid=2454 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:24:34.824000 audit[2454]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc6b0a28a0 a2=0 a3=0 items=0 ppid=2347 pid=2454 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:34.824000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 14 01:24:34.827000 audit[2456]: NETFILTER_CFG table=filter:18 family=10 entries=1 op=nft_register_chain pid=2456 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:24:34.827000 audit[2456]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc47ce9450 a2=0 a3=0 items=0 ppid=2347 pid=2456 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:34.827000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 14 01:24:34.829000 audit[2458]: NETFILTER_CFG table=filter:19 family=10 entries=1 op=nft_register_chain pid=2458 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:24:34.829000 audit[2458]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffea08836e0 a2=0 a3=0 items=0 ppid=2347 pid=2458 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:34.829000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 14 01:24:34.831000 audit[2460]: NETFILTER_CFG table=filter:20 family=10 entries=1 op=nft_register_chain pid=2460 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:24:34.831000 audit[2460]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffdf6472f70 a2=0 a3=0 items=0 ppid=2347 pid=2460 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:34.831000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 14 01:24:34.834000 audit[2462]: NETFILTER_CFG table=filter:21 family=10 entries=1 op=nft_register_chain pid=2462 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:24:34.834000 audit[2462]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffc231c9660 a2=0 a3=0 items=0 ppid=2347 pid=2462 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:34.834000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 14 01:24:34.836000 audit[2464]: NETFILTER_CFG table=nat:22 family=10 entries=2 op=nft_register_chain pid=2464 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:24:34.836000 audit[2464]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7ffff13965f0 a2=0 a3=0 items=0 ppid=2347 pid=2464 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:34.836000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 14 01:24:34.839000 audit[2466]: NETFILTER_CFG table=nat:23 family=10 entries=2 op=nft_register_chain pid=2466 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:24:34.839000 audit[2466]: SYSCALL arch=c000003e syscall=46 success=yes exit=484 a0=3 a1=7ffeeb25c540 a2=0 a3=0 items=0 ppid=2347 pid=2466 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:34.839000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003A3A312F313238 Jan 14 01:24:34.842000 audit[2468]: NETFILTER_CFG table=filter:24 family=10 entries=2 op=nft_register_chain pid=2468 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:24:34.842000 audit[2468]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffc19e09e70 a2=0 a3=0 items=0 ppid=2347 pid=2468 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:34.842000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 14 01:24:34.844000 audit[2470]: NETFILTER_CFG table=filter:25 family=10 entries=1 op=nft_register_rule pid=2470 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:24:34.844000 audit[2470]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7fff98f636c0 a2=0 a3=0 items=0 ppid=2347 pid=2470 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:34.844000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 14 01:24:34.847000 audit[2472]: NETFILTER_CFG table=filter:26 family=10 entries=1 op=nft_register_rule pid=2472 subj=system_u:system_r:kernel_t:s0 
comm="ip6tables" Jan 14 01:24:34.847000 audit[2472]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7ffd76402fa0 a2=0 a3=0 items=0 ppid=2347 pid=2472 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:34.847000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 14 01:24:34.850000 audit[2474]: NETFILTER_CFG table=filter:27 family=10 entries=1 op=nft_register_rule pid=2474 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:24:34.850000 audit[2474]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7ffe050ba790 a2=0 a3=0 items=0 ppid=2347 pid=2474 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:34.850000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 14 01:24:34.856000 audit[2479]: NETFILTER_CFG table=filter:28 family=2 entries=1 op=nft_register_chain pid=2479 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:24:34.856000 audit[2479]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffc0678c200 a2=0 a3=0 items=0 ppid=2347 pid=2479 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:34.856000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 14 01:24:34.859000 audit[2481]: NETFILTER_CFG table=filter:29 family=2 entries=1 op=nft_register_rule pid=2481 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:24:34.859000 audit[2481]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffc6c8d4df0 a2=0 a3=0 items=0 ppid=2347 pid=2481 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:34.859000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 14 01:24:34.861000 audit[2483]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=2483 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:24:34.861000 audit[2483]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7fff78ffbe00 a2=0 a3=0 items=0 ppid=2347 pid=2483 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:34.861000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 14 01:24:34.864000 audit[2485]: NETFILTER_CFG table=filter:31 family=10 entries=1 op=nft_register_chain pid=2485 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:24:34.864000 audit[2485]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffce9413ca0 a2=0 a3=0 items=0 ppid=2347 pid=2485 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:34.864000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 14 01:24:34.866000 audit[2487]: NETFILTER_CFG table=filter:32 family=10 entries=1 op=nft_register_rule pid=2487 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:24:34.866000 audit[2487]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7fffd4ec99d0 a2=0 a3=0 items=0 ppid=2347 pid=2487 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:34.866000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 14 01:24:34.869000 audit[2489]: NETFILTER_CFG table=filter:33 family=10 entries=1 op=nft_register_rule pid=2489 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:24:34.869000 audit[2489]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffc6abb0a20 a2=0 a3=0 items=0 ppid=2347 pid=2489 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:34.869000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 14 01:24:34.887839 (udev-worker)[2368]: Network interface NamePolicy= disabled on kernel command line. Jan 14 01:24:34.899000 audit[2494]: NETFILTER_CFG table=nat:34 family=2 entries=2 op=nft_register_chain pid=2494 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:24:34.899000 audit[2494]: SYSCALL arch=c000003e syscall=46 success=yes exit=520 a0=3 a1=7ffce73d5fd0 a2=0 a3=0 items=0 ppid=2347 pid=2494 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:34.899000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Jan 14 01:24:34.907000 audit[2497]: NETFILTER_CFG table=nat:35 family=2 entries=1 op=nft_register_rule pid=2497 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:24:34.907000 audit[2497]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7ffeb0c676e0 a2=0 a3=0 items=0 ppid=2347 pid=2497 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:34.907000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Jan 14 01:24:34.917000 audit[2505]: NETFILTER_CFG table=filter:36 family=2 entries=1 op=nft_register_rule pid=2505 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:24:34.917000 audit[2505]: SYSCALL arch=c000003e syscall=46 success=yes exit=300 a0=3 a1=7ffd6a36ed80 a2=0 a3=0 items=0 ppid=2347 pid=2505 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:34.917000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D464F5257415244002D6900646F636B657230002D6A00414343455054 Jan 14 01:24:34.929000 audit[2511]: NETFILTER_CFG table=filter:37 family=2 entries=1 op=nft_register_rule pid=2511 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:24:34.929000 audit[2511]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7fff94de2b90 a2=0 a3=0 items=0 ppid=2347 pid=2511 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:34.929000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45520000002D6900646F636B657230002D6F00646F636B657230002D6A0044524F50 Jan 14 01:24:34.931000 audit[2513]: NETFILTER_CFG table=filter:38 family=2 entries=1 op=nft_register_rule pid=2513 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:24:34.931000 audit[2513]: SYSCALL arch=c000003e syscall=46 success=yes exit=512 a0=3 a1=7ffcac366f80 a2=0 a3=0 items=0 ppid=2347 pid=2513 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:34.931000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D4354002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Jan 14 01:24:34.934000 audit[2515]: NETFILTER_CFG table=filter:39 family=2 entries=1 op=nft_register_rule pid=2515 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:24:34.934000 audit[2515]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffd306d7a30 a2=0 a3=0 items=0 ppid=2347 pid=2515 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:34.934000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D425249444745002D6F00646F636B657230002D6A00444F434B4552 Jan 14 01:24:34.937000 audit[2517]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_rule pid=2517 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:24:34.937000 audit[2517]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7ffef05b0630 a2=0 a3=0 items=0 ppid=2347 pid=2517 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:34.937000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 14 01:24:34.939000 audit[2519]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_rule pid=2519 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:24:34.939000 audit[2519]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 
a1=7ffe3250a490 a2=0 a3=0 items=0 ppid=2347 pid=2519 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:34.939000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Jan 14 01:24:34.941446 systemd-networkd[1548]: docker0: Link UP Jan 14 01:24:34.951675 dockerd[2347]: time="2026-01-14T01:24:34.951605076Z" level=info msg="Loading containers: done." Jan 14 01:24:35.007274 dockerd[2347]: time="2026-01-14T01:24:35.007190864Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 14 01:24:35.007500 dockerd[2347]: time="2026-01-14T01:24:35.007322817Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jan 14 01:24:35.007500 dockerd[2347]: time="2026-01-14T01:24:35.007440553Z" level=info msg="Initializing buildkit" Jan 14 01:24:35.049620 dockerd[2347]: time="2026-01-14T01:24:35.049438657Z" level=info msg="Completed buildkit initialization" Jan 14 01:24:35.061288 dockerd[2347]: time="2026-01-14T01:24:35.061218450Z" level=info msg="Daemon has completed initialization" Jan 14 01:24:35.061288 dockerd[2347]: time="2026-01-14T01:24:35.061323794Z" level=info msg="API listen on /run/docker.sock" Jan 14 01:24:35.061901 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 14 01:24:35.060000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:24:36.155248 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 14 01:24:36.157868 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 01:24:36.509358 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 01:24:36.516044 kernel: kauditd_printk_skb: 132 callbacks suppressed Jan 14 01:24:36.516168 kernel: audit: type=1130 audit(1768353876.509:299): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:24:36.509000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:24:36.526152 (kubelet)[2565]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 01:24:36.579283 kubelet[2565]: E0114 01:24:36.579242 2565 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 01:24:36.584227 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 01:24:36.584375 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 14 01:24:36.583000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 14 01:24:36.585422 systemd[1]: kubelet.service: Consumed 182ms CPU time, 108.4M memory peak. Jan 14 01:24:36.589586 kernel: audit: type=1131 audit(1768353876.583:300): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 14 01:24:37.130784 containerd[1962]: time="2026-01-14T01:24:37.130734150Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\"" Jan 14 01:24:37.879858 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1953511211.mount: Deactivated successfully. Jan 14 01:24:39.132983 containerd[1962]: time="2026-01-14T01:24:39.132914079Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:24:39.134259 containerd[1962]: time="2026-01-14T01:24:39.134206321Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.7: active requests=0, bytes read=28445968" Jan 14 01:24:39.135504 containerd[1962]: time="2026-01-14T01:24:39.135452180Z" level=info msg="ImageCreate event name:\"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:24:39.138344 containerd[1962]: time="2026-01-14T01:24:39.138285855Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:24:39.139656 containerd[1962]: time="2026-01-14T01:24:39.139434314Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.7\" with image id \"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\", size \"30111311\" in 2.008658682s" Jan 14 01:24:39.139656 containerd[1962]: time="2026-01-14T01:24:39.139491946Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\" returns image reference \"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\"" Jan 14 01:24:39.140528 containerd[1962]: time="2026-01-14T01:24:39.140480059Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\"" Jan 14 01:24:40.812168 containerd[1962]: time="2026-01-14T01:24:40.812109454Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/kube-controller-manager:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:24:40.813371 containerd[1962]: time="2026-01-14T01:24:40.813299534Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.7: active requests=0, bytes read=26008626" Jan 14 01:24:40.814827 containerd[1962]: time="2026-01-14T01:24:40.814766851Z" level=info msg="ImageCreate event name:\"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:24:40.817738 containerd[1962]: time="2026-01-14T01:24:40.817494075Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:24:40.818818 containerd[1962]: time="2026-01-14T01:24:40.818675479Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.7\" with image id \"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\", size \"27673815\" in 1.67815583s" Jan 14 01:24:40.818818 containerd[1962]: time="2026-01-14T01:24:40.818715538Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\" returns image reference \"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\"" Jan 14 01:24:40.819418 containerd[1962]: time="2026-01-14T01:24:40.819391521Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\"" Jan 14 01:24:42.160840 containerd[1962]: time="2026-01-14T01:24:42.160784816Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:24:42.162221 containerd[1962]: time="2026-01-14T01:24:42.162109698Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.7: active requests=0, bytes read=20149965" Jan 14 01:24:42.163663 containerd[1962]: time="2026-01-14T01:24:42.163605350Z" level=info msg="ImageCreate event name:\"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:24:42.166354 containerd[1962]: time="2026-01-14T01:24:42.166279204Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:24:42.167639 containerd[1962]: time="2026-01-14T01:24:42.167463532Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.7\" with image id \"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\", size \"21815154\" in 1.348033804s" Jan 14 01:24:42.167639 containerd[1962]: time="2026-01-14T01:24:42.167505328Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\" returns image reference \"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\"" Jan 14 01:24:42.168567 containerd[1962]: time="2026-01-14T01:24:42.168276251Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\"" Jan 
14 01:24:43.254788 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2046098454.mount: Deactivated successfully. Jan 14 01:24:43.866722 containerd[1962]: time="2026-01-14T01:24:43.866658848Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:24:43.867821 containerd[1962]: time="2026-01-14T01:24:43.867660276Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.7: active requests=0, bytes read=20340589" Jan 14 01:24:43.868970 containerd[1962]: time="2026-01-14T01:24:43.868935791Z" level=info msg="ImageCreate event name:\"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:24:43.870617 containerd[1962]: time="2026-01-14T01:24:43.870586049Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:24:43.871136 containerd[1962]: time="2026-01-14T01:24:43.871110751Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.7\" with image id \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\", repo tag \"registry.k8s.io/kube-proxy:v1.33.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\", size \"31929115\" in 1.702798904s" Jan 14 01:24:43.871224 containerd[1962]: time="2026-01-14T01:24:43.871210924Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\" returns image reference \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\"" Jan 14 01:24:43.871701 containerd[1962]: time="2026-01-14T01:24:43.871670873Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Jan 14 01:24:44.345133 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1310014063.mount: Deactivated successfully. 
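The proctitle= fields in the audit records earlier in this log are the full command line of each iptables/ip6tables invocation, hex-encoded with NUL bytes between arguments. A minimal Python sketch to turn one back into readable text; the sample value is copied verbatim from one of the ip6tables entries above:

    # Decode an audit PROCTITLE hex string back into the original argv.
    def decode_proctitle(hex_value: str) -> str:
        raw = bytes.fromhex(hex_value)
        # audit separates argv entries with NUL bytes
        return " ".join(part.decode() for part in raw.split(b"\x00") if part)

    sample = ("2F7573722F62696E2F6970367461626C6573002D2D77616974"
              "002D74006E6174002D4E00444F434B4552")
    print(decode_proctitle(sample))
    # -> /usr/bin/ip6tables --wait -t nat -N DOCKER

Decoded this way, the block of NETFILTER_CFG records above is simply dockerd creating its DOCKER, DOCKER-FORWARD, DOCKER-BRIDGE, DOCKER-CT and DOCKER-ISOLATION chains for both IPv4 and IPv6.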
Jan 14 01:24:45.604248 containerd[1962]: time="2026-01-14T01:24:45.603753119Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:24:45.606568 containerd[1962]: time="2026-01-14T01:24:45.606500745Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20128467" Jan 14 01:24:45.610102 containerd[1962]: time="2026-01-14T01:24:45.610027460Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:24:45.616906 containerd[1962]: time="2026-01-14T01:24:45.616828113Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:24:45.618349 containerd[1962]: time="2026-01-14T01:24:45.618293110Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.746592s" Jan 14 01:24:45.618349 containerd[1962]: time="2026-01-14T01:24:45.618335126Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Jan 14 01:24:45.618905 containerd[1962]: time="2026-01-14T01:24:45.618878191Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 14 01:24:46.151248 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1349946443.mount: Deactivated successfully. 
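Each completed pull above is reported by containerd with its compressed size ("bytes read") and wall-clock duration ("in 2.008658682s", "in 554.296776ms", ...). A small, hypothetical helper for summarising those lines from a saved journal; the regex assumes only the msg format visible in this log:

    import re, sys

    # Matches: Pulled image \"<ref>\" ... in <duration>   (format as seen in the containerd entries above)
    PULLED = re.compile(r'Pulled image \\?"(?P<ref>[^"\\]+)\\?".* in (?P<dur>[0-9.]+(?:ms|s))')

    def to_seconds(d: str) -> float:
        return float(d[:-2]) / 1000 if d.endswith("ms") else float(d[:-1])

    def main() -> None:
        # e.g.  journalctl -u containerd | python3 pull_times.py
        for line in sys.stdin:
            m = PULLED.search(line)
            if m:
                print(f"{to_seconds(m.group('dur')):8.3f}s  {m.group('ref')}")

    if __name__ == "__main__":
        main()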
Jan 14 01:24:46.164896 containerd[1962]: time="2026-01-14T01:24:46.164826641Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 14 01:24:46.167205 containerd[1962]: time="2026-01-14T01:24:46.166977175Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Jan 14 01:24:46.169403 containerd[1962]: time="2026-01-14T01:24:46.169359107Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 14 01:24:46.172771 containerd[1962]: time="2026-01-14T01:24:46.172694892Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 14 01:24:46.173337 containerd[1962]: time="2026-01-14T01:24:46.173204557Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 554.296776ms" Jan 14 01:24:46.173337 containerd[1962]: time="2026-01-14T01:24:46.173242865Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 14 01:24:46.174203 containerd[1962]: time="2026-01-14T01:24:46.173860178Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Jan 14 01:24:46.722125 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 14 01:24:46.726233 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 01:24:46.737426 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount373100711.mount: Deactivated successfully. Jan 14 01:24:47.069835 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 01:24:47.077493 kernel: audit: type=1130 audit(1768353887.068:301): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:24:47.068000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:24:47.090043 (kubelet)[2724]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 01:24:47.207743 kubelet[2724]: E0114 01:24:47.207691 2724 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 01:24:47.210783 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 01:24:47.211225 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
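The repeated kubelet.service failures above are the normal pre-bootstrap state on a kubeadm node: the unit is enabled before anything has written /var/lib/kubelet/config.yaml, so the kubelet exits with status 1 and systemd keeps rescheduling it until kubeadm drops the file in place. A hypothetical pre-flight check along those lines; the first path is the one named in the error, the other two are assumed from the usual kubeadm layout:

    from pathlib import Path

    # Files kubeadm is expected to create before the kubelet can start cleanly.
    EXPECTED = [
        Path("/var/lib/kubelet/config.yaml"),        # KubeletConfiguration (the file missing above)
        Path("/etc/kubernetes/kubelet.conf"),        # kubeconfig used after bootstrap (assumed default)
        Path("/var/lib/kubelet/kubeadm-flags.env"),  # extra flags sourced by the unit (assumed default)
    ]

    def missing() -> list[Path]:
        return [p for p in EXPECTED if not p.exists()]

    if __name__ == "__main__":
        gone = missing()
        if gone:
            print("kubelet will crash-loop until kubeadm creates:", *gone, sep="\n  ")
        else:
            print("bootstrap files present")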
Jan 14 01:24:47.210000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 14 01:24:47.211945 systemd[1]: kubelet.service: Consumed 237ms CPU time, 106.6M memory peak. Jan 14 01:24:47.216591 kernel: audit: type=1131 audit(1768353887.210:302): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 14 01:24:49.033701 containerd[1962]: time="2026-01-14T01:24:49.033646975Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:24:49.035340 containerd[1962]: time="2026-01-14T01:24:49.035284133Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=46127678" Jan 14 01:24:49.036613 containerd[1962]: time="2026-01-14T01:24:49.036530103Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:24:49.039703 containerd[1962]: time="2026-01-14T01:24:49.039632668Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:24:49.040768 containerd[1962]: time="2026-01-14T01:24:49.040609425Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 2.866711784s" Jan 14 01:24:49.040768 containerd[1962]: time="2026-01-14T01:24:49.040659671Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Jan 14 01:24:49.723822 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Jan 14 01:24:49.722000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hostnamed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:24:49.729571 kernel: audit: type=1131 audit(1768353889.722:303): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hostnamed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:24:49.737000 audit: BPF prog-id=66 op=UNLOAD Jan 14 01:24:49.740569 kernel: audit: type=1334 audit(1768353889.737:304): prog-id=66 op=UNLOAD Jan 14 01:24:52.616768 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 01:24:52.615000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:24:52.617046 systemd[1]: kubelet.service: Consumed 237ms CPU time, 106.6M memory peak. 
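The SERVICE_START/SERVICE_STOP audit records scattered through this log give an independent way to count how often a unit is being bounced during boot, without relying on systemd's own restart counter. A sketch that tallies them per unit from a saved log; it assumes only the unit= and res= fields visible in the records above:

    import re, sys
    from collections import Counter

    EVENT = re.compile(r"audit\[1\]: (SERVICE_START|SERVICE_STOP) .*?unit=(\S+).*?res=(\w+)")

    def tally(stream) -> Counter:
        counts: Counter = Counter()
        for line in stream:
            for kind, unit, res in EVENT.findall(line):
                counts[(unit, kind, res)] += 1
        return counts

    if __name__ == "__main__":
        for (unit, kind, res), n in sorted(tally(sys.stdin).items()):
            print(f"{unit:20} {kind:14} res={res:8} x{n}")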
Jan 14 01:24:52.626072 kernel: audit: type=1130 audit(1768353892.615:305): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:24:52.626176 kernel: audit: type=1131 audit(1768353892.615:306): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:24:52.615000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:24:52.624879 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 01:24:52.660085 systemd[1]: Reload requested from client PID 2805 ('systemctl') (unit session-8.scope)... Jan 14 01:24:52.660271 systemd[1]: Reloading... Jan 14 01:24:52.795616 zram_generator::config[2855]: No configuration found. Jan 14 01:24:53.076410 systemd[1]: Reloading finished in 415 ms. Jan 14 01:24:53.108112 kernel: audit: type=1334 audit(1768353893.098:307): prog-id=70 op=LOAD Jan 14 01:24:53.108204 kernel: audit: type=1334 audit(1768353893.098:308): prog-id=71 op=LOAD Jan 14 01:24:53.108225 kernel: audit: type=1334 audit(1768353893.100:309): prog-id=54 op=UNLOAD Jan 14 01:24:53.098000 audit: BPF prog-id=70 op=LOAD Jan 14 01:24:53.098000 audit: BPF prog-id=71 op=LOAD Jan 14 01:24:53.100000 audit: BPF prog-id=54 op=UNLOAD Jan 14 01:24:53.100000 audit: BPF prog-id=55 op=UNLOAD Jan 14 01:24:53.111580 kernel: audit: type=1334 audit(1768353893.100:310): prog-id=55 op=UNLOAD Jan 14 01:24:53.102000 audit: BPF prog-id=72 op=LOAD Jan 14 01:24:53.113724 kernel: audit: type=1334 audit(1768353893.102:311): prog-id=72 op=LOAD Jan 14 01:24:53.113774 kernel: audit: type=1334 audit(1768353893.102:312): prog-id=60 op=UNLOAD Jan 14 01:24:53.102000 audit: BPF prog-id=60 op=UNLOAD Jan 14 01:24:53.102000 audit: BPF prog-id=73 op=LOAD Jan 14 01:24:53.118586 kernel: audit: type=1334 audit(1768353893.102:313): prog-id=73 op=LOAD Jan 14 01:24:53.118647 kernel: audit: type=1334 audit(1768353893.102:314): prog-id=74 op=LOAD Jan 14 01:24:53.102000 audit: BPF prog-id=74 op=LOAD Jan 14 01:24:53.102000 audit: BPF prog-id=61 op=UNLOAD Jan 14 01:24:53.102000 audit: BPF prog-id=62 op=UNLOAD Jan 14 01:24:53.105000 audit: BPF prog-id=75 op=LOAD Jan 14 01:24:53.105000 audit: BPF prog-id=48 op=UNLOAD Jan 14 01:24:53.105000 audit: BPF prog-id=76 op=LOAD Jan 14 01:24:53.105000 audit: BPF prog-id=77 op=LOAD Jan 14 01:24:53.105000 audit: BPF prog-id=49 op=UNLOAD Jan 14 01:24:53.105000 audit: BPF prog-id=50 op=UNLOAD Jan 14 01:24:53.105000 audit: BPF prog-id=78 op=LOAD Jan 14 01:24:53.106000 audit: BPF prog-id=59 op=UNLOAD Jan 14 01:24:53.108000 audit: BPF prog-id=79 op=LOAD Jan 14 01:24:53.108000 audit: BPF prog-id=63 op=UNLOAD Jan 14 01:24:53.108000 audit: BPF prog-id=80 op=LOAD Jan 14 01:24:53.108000 audit: BPF prog-id=81 op=LOAD Jan 14 01:24:53.108000 audit: BPF prog-id=64 op=UNLOAD Jan 14 01:24:53.108000 audit: BPF prog-id=65 op=UNLOAD Jan 14 01:24:53.113000 audit: BPF prog-id=82 op=LOAD Jan 14 01:24:53.113000 audit: BPF prog-id=69 op=UNLOAD Jan 14 01:24:53.113000 audit: BPF prog-id=83 op=LOAD Jan 14 01:24:53.113000 audit: BPF prog-id=56 op=UNLOAD Jan 14 01:24:53.114000 audit: BPF prog-id=84 op=LOAD Jan 14 01:24:53.114000 audit: BPF prog-id=85 op=LOAD Jan 14 01:24:53.114000 
audit: BPF prog-id=57 op=UNLOAD Jan 14 01:24:53.114000 audit: BPF prog-id=58 op=UNLOAD Jan 14 01:24:53.115000 audit: BPF prog-id=86 op=LOAD Jan 14 01:24:53.115000 audit: BPF prog-id=51 op=UNLOAD Jan 14 01:24:53.116000 audit: BPF prog-id=87 op=LOAD Jan 14 01:24:53.116000 audit: BPF prog-id=88 op=LOAD Jan 14 01:24:53.116000 audit: BPF prog-id=52 op=UNLOAD Jan 14 01:24:53.116000 audit: BPF prog-id=53 op=UNLOAD Jan 14 01:24:53.117000 audit: BPF prog-id=89 op=LOAD Jan 14 01:24:53.117000 audit: BPF prog-id=47 op=UNLOAD Jan 14 01:24:53.133113 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 14 01:24:53.132000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 14 01:24:53.133198 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 14 01:24:53.133601 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 01:24:53.133668 systemd[1]: kubelet.service: Consumed 137ms CPU time, 98.3M memory peak. Jan 14 01:24:53.135415 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 01:24:53.359384 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 01:24:53.358000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:24:53.371539 (kubelet)[2915]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 14 01:24:53.433605 kubelet[2915]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 14 01:24:53.433605 kubelet[2915]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 14 01:24:53.433605 kubelet[2915]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
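The three deprecation warnings above (--container-runtime-endpoint, --pod-infra-container-image, --volume-plugin-dir) all point at settings that should live in the kubelet config file rather than on the command line. A hypothetical check that scans the kubeadm flags env file for them; the path and the env-file format are assumptions based on a standard kubeadm layout, not something shown in this log:

    import re
    from pathlib import Path

    # Flags the kubelet warned about above; they belong in the KubeletConfiguration file instead.
    DEPRECATED = {"--container-runtime-endpoint", "--pod-infra-container-image", "--volume-plugin-dir"}

    # Assumed location of KUBELET_KUBEADM_ARGS in a kubeadm install.
    FLAGS_ENV = Path("/var/lib/kubelet/kubeadm-flags.env")

    def deprecated_in_use() -> set[str]:
        text = FLAGS_ENV.read_text() if FLAGS_ENV.exists() else ""
        return DEPRECATED & set(re.findall(r"--[a-z-]+", text))

    if __name__ == "__main__":
        for flag in sorted(deprecated_in_use()):
            print(f"{flag} is still passed as a flag; consider moving it into config.yaml")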
Jan 14 01:24:53.437578 kubelet[2915]: I0114 01:24:53.436840 2915 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 14 01:24:53.660794 kubelet[2915]: I0114 01:24:53.660687 2915 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jan 14 01:24:53.660794 kubelet[2915]: I0114 01:24:53.660714 2915 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 14 01:24:53.661264 kubelet[2915]: I0114 01:24:53.660935 2915 server.go:956] "Client rotation is on, will bootstrap in background" Jan 14 01:24:53.709384 kubelet[2915]: I0114 01:24:53.709322 2915 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 14 01:24:53.713268 kubelet[2915]: E0114 01:24:53.713223 2915 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.18.46:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.18.46:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 14 01:24:53.738030 kubelet[2915]: I0114 01:24:53.737994 2915 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 14 01:24:53.745112 kubelet[2915]: I0114 01:24:53.745079 2915 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 14 01:24:53.750291 kubelet[2915]: I0114 01:24:53.750228 2915 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 14 01:24:53.753999 kubelet[2915]: I0114 01:24:53.750276 2915 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-18-46","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 14 01:24:53.757107 kubelet[2915]: I0114 01:24:53.757064 2915 topology_manager.go:138] "Creating topology manager with none policy" Jan 14 01:24:53.757107 kubelet[2915]: I0114 01:24:53.757099 
2915 container_manager_linux.go:303] "Creating device plugin manager" Jan 14 01:24:53.758353 kubelet[2915]: I0114 01:24:53.758312 2915 state_mem.go:36] "Initialized new in-memory state store" Jan 14 01:24:53.762340 kubelet[2915]: I0114 01:24:53.762292 2915 kubelet.go:480] "Attempting to sync node with API server" Jan 14 01:24:53.762340 kubelet[2915]: I0114 01:24:53.762330 2915 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 14 01:24:53.762468 kubelet[2915]: I0114 01:24:53.762358 2915 kubelet.go:386] "Adding apiserver pod source" Jan 14 01:24:53.762468 kubelet[2915]: I0114 01:24:53.762375 2915 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 14 01:24:53.767148 kubelet[2915]: E0114 01:24:53.767115 2915 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.18.46:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-18-46&limit=500&resourceVersion=0\": dial tcp 172.31.18.46:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 14 01:24:53.769022 kubelet[2915]: E0114 01:24:53.768997 2915 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.18.46:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.18.46:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 14 01:24:53.769203 kubelet[2915]: I0114 01:24:53.769192 2915 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 14 01:24:53.769762 kubelet[2915]: I0114 01:24:53.769742 2915 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 14 01:24:53.771490 kubelet[2915]: W0114 01:24:53.771446 2915 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
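The container_manager_linux entry above dumps the effective NodeConfig as JSON, including the hard-eviction thresholds (memory.available < 100Mi, nodefs.available < 10%, nodefs.inodesFree < 5%, imagefs.available < 15%, imagefs.inodesFree < 5%). A short sketch that pulls that object back out of a journal line and prints just the thresholds; it assumes only the nodeConfig={...} formatting seen above:

    import json, re, sys

    NODECONFIG = re.compile(r"nodeConfig=({.*})")

    def thresholds(lines) -> list[dict]:
        for line in lines:
            m = NODECONFIG.search(line)
            if m:
                return json.loads(m.group(1)).get("HardEvictionThresholds", [])
        return []

    if __name__ == "__main__":
        for t in thresholds(sys.stdin):
            v = t["Value"]
            limit = v["Quantity"] if v.get("Quantity") else f'{v["Percentage"]:.0%}'
            print(f'{t["Signal"]:22} {t["Operator"]} {limit}')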
Jan 14 01:24:53.779954 kubelet[2915]: I0114 01:24:53.779918 2915 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 14 01:24:53.780081 kubelet[2915]: I0114 01:24:53.779986 2915 server.go:1289] "Started kubelet" Jan 14 01:24:53.782468 kubelet[2915]: I0114 01:24:53.782446 2915 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 14 01:24:53.788813 kubelet[2915]: E0114 01:24:53.784611 2915 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.18.46:6443/api/v1/namespaces/default/events\": dial tcp 172.31.18.46:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-18-46.188a747f4beebb83 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-18-46,UID:ip-172-31-18-46,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-18-46,},FirstTimestamp:2026-01-14 01:24:53.779946371 +0000 UTC m=+0.403011008,LastTimestamp:2026-01-14 01:24:53.779946371 +0000 UTC m=+0.403011008,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-18-46,}" Jan 14 01:24:53.789632 kubelet[2915]: I0114 01:24:53.789590 2915 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 14 01:24:53.792846 kubelet[2915]: I0114 01:24:53.792827 2915 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 14 01:24:53.796084 kubelet[2915]: I0114 01:24:53.796060 2915 server.go:317] "Adding debug handlers to kubelet server" Jan 14 01:24:53.797776 kubelet[2915]: E0114 01:24:53.797734 2915 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-18-46\" not found" Jan 14 01:24:53.804729 kubelet[2915]: E0114 01:24:53.804011 2915 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.18.46:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-46?timeout=10s\": dial tcp 172.31.18.46:6443: connect: connection refused" interval="200ms" Jan 14 01:24:53.804729 kubelet[2915]: I0114 01:24:53.804723 2915 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 14 01:24:53.804875 kubelet[2915]: I0114 01:24:53.804787 2915 reconciler.go:26] "Reconciler: start to sync state" Jan 14 01:24:53.805000 audit[2930]: NETFILTER_CFG table=mangle:42 family=2 entries=2 op=nft_register_chain pid=2930 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:24:53.805000 audit[2930]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffe90957430 a2=0 a3=0 items=0 ppid=2915 pid=2930 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:53.805000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 14 01:24:53.807797 kubelet[2915]: I0114 01:24:53.807772 2915 factory.go:223] Registration of the systemd container factory successfully Jan 14 01:24:53.807876 kubelet[2915]: I0114 01:24:53.807861 2915 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 14 01:24:53.808305 kubelet[2915]: E0114 01:24:53.808275 2915 reflector.go:200] 
"Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.18.46:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.18.46:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 14 01:24:53.808893 kubelet[2915]: I0114 01:24:53.808842 2915 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 14 01:24:53.809112 kubelet[2915]: I0114 01:24:53.809052 2915 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 14 01:24:53.808000 audit[2931]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_chain pid=2931 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:24:53.808000 audit[2931]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe482bc0c0 a2=0 a3=0 items=0 ppid=2915 pid=2931 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:53.808000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 14 01:24:53.811385 kubelet[2915]: I0114 01:24:53.811190 2915 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 14 01:24:53.815444 kubelet[2915]: E0114 01:24:53.815422 2915 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 14 01:24:53.816132 kubelet[2915]: I0114 01:24:53.816116 2915 factory.go:223] Registration of the containerd container factory successfully Jan 14 01:24:53.814000 audit[2933]: NETFILTER_CFG table=filter:44 family=2 entries=2 op=nft_register_chain pid=2933 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:24:53.814000 audit[2933]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffe80f618b0 a2=0 a3=0 items=0 ppid=2915 pid=2933 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:53.814000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 14 01:24:53.819000 audit[2936]: NETFILTER_CFG table=filter:45 family=2 entries=2 op=nft_register_chain pid=2936 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:24:53.819000 audit[2936]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffcda427b20 a2=0 a3=0 items=0 ppid=2915 pid=2936 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:53.819000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 14 01:24:53.834609 kubelet[2915]: I0114 01:24:53.834303 2915 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 14 01:24:53.834609 kubelet[2915]: I0114 01:24:53.834318 2915 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 14 01:24:53.834609 kubelet[2915]: I0114 01:24:53.834334 2915 
state_mem.go:36] "Initialized new in-memory state store" Jan 14 01:24:53.833000 audit[2941]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2941 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:24:53.833000 audit[2941]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7fffd0f411c0 a2=0 a3=0 items=0 ppid=2915 pid=2941 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:53.833000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Jan 14 01:24:53.835706 kubelet[2915]: I0114 01:24:53.835039 2915 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jan 14 01:24:53.834000 audit[2942]: NETFILTER_CFG table=mangle:47 family=10 entries=2 op=nft_register_chain pid=2942 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:24:53.834000 audit[2942]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffe3b02fff0 a2=0 a3=0 items=0 ppid=2915 pid=2942 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:53.834000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 14 01:24:53.836215 kubelet[2915]: I0114 01:24:53.836193 2915 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jan 14 01:24:53.836257 kubelet[2915]: I0114 01:24:53.836217 2915 status_manager.go:230] "Starting to sync pod status with apiserver" Jan 14 01:24:53.836257 kubelet[2915]: I0114 01:24:53.836235 2915 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
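Every reflector error above ("Failed to watch ... dial tcp 172.31.18.46:6443: connect: connection refused") is the kubelet trying to reach an API server that does not exist yet; on a control-plane node being bootstrapped, that endpoint only comes up once the kube-apiserver static pod is running. A minimal probe to distinguish "refused" (nothing listening yet) from a timeout or routing problem; the address is taken from the errors above, the rest is a generic TCP check:

    import errno, socket

    API_SERVER = ("172.31.18.46", 6443)  # endpoint from the reflector errors above

    def probe(addr: tuple[str, int], timeout: float = 3.0) -> str:
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.settimeout(timeout)
        try:
            s.connect(addr)
            return "listening (TLS handshake not attempted)"
        except socket.timeout:
            return "timeout: host unreachable or filtered"
        except OSError as e:
            if e.errno == errno.ECONNREFUSED:
                return "connection refused: nothing bound on the port yet"
            return f"error: {e}"
        finally:
            s.close()

    if __name__ == "__main__":
        print(probe(API_SERVER))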
Jan 14 01:24:53.836257 kubelet[2915]: I0114 01:24:53.836242 2915 kubelet.go:2436] "Starting kubelet main sync loop" Jan 14 01:24:53.836323 kubelet[2915]: E0114 01:24:53.836278 2915 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 14 01:24:53.835000 audit[2943]: NETFILTER_CFG table=mangle:48 family=2 entries=1 op=nft_register_chain pid=2943 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:24:53.835000 audit[2944]: NETFILTER_CFG table=mangle:49 family=10 entries=1 op=nft_register_chain pid=2944 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:24:53.835000 audit[2944]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffecf1cee30 a2=0 a3=0 items=0 ppid=2915 pid=2944 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:53.835000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jan 14 01:24:53.839247 kubelet[2915]: E0114 01:24:53.838045 2915 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.18.46:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.18.46:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 14 01:24:53.837000 audit[2945]: NETFILTER_CFG table=nat:50 family=10 entries=1 op=nft_register_chain pid=2945 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:24:53.837000 audit[2945]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc318c4ef0 a2=0 a3=0 items=0 ppid=2915 pid=2945 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:53.837000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 14 01:24:53.835000 audit[2943]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe1d966b50 a2=0 a3=0 items=0 ppid=2915 pid=2943 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:53.840168 kubelet[2915]: I0114 01:24:53.840157 2915 policy_none.go:49] "None policy: Start" Jan 14 01:24:53.835000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jan 14 01:24:53.840447 kubelet[2915]: I0114 01:24:53.840436 2915 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 14 01:24:53.840505 kubelet[2915]: I0114 01:24:53.840500 2915 state_mem.go:35] "Initializing new in-memory state store" Jan 14 01:24:53.839000 audit[2948]: NETFILTER_CFG table=filter:51 family=10 entries=1 op=nft_register_chain pid=2948 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:24:53.839000 audit[2948]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffec0477590 a2=0 a3=0 items=0 ppid=2915 pid=2948 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" 
exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:53.839000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 14 01:24:53.840000 audit[2949]: NETFILTER_CFG table=nat:52 family=2 entries=1 op=nft_register_chain pid=2949 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:24:53.840000 audit[2949]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc50f9a990 a2=0 a3=0 items=0 ppid=2915 pid=2949 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:53.840000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 14 01:24:53.841000 audit[2950]: NETFILTER_CFG table=filter:53 family=2 entries=1 op=nft_register_chain pid=2950 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:24:53.841000 audit[2950]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd7b6faa90 a2=0 a3=0 items=0 ppid=2915 pid=2950 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:53.841000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 14 01:24:53.851153 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 14 01:24:53.866880 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 14 01:24:53.871077 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 14 01:24:53.883723 kubelet[2915]: E0114 01:24:53.883615 2915 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 14 01:24:53.883820 kubelet[2915]: I0114 01:24:53.883800 2915 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 14 01:24:53.883847 kubelet[2915]: I0114 01:24:53.883810 2915 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 14 01:24:53.884678 kubelet[2915]: I0114 01:24:53.884347 2915 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 14 01:24:53.886309 kubelet[2915]: E0114 01:24:53.886292 2915 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 14 01:24:53.886826 kubelet[2915]: E0114 01:24:53.886749 2915 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-18-46\" not found" Jan 14 01:24:53.953997 systemd[1]: Created slice kubepods-burstable-pod055be3ba9eb4e32da466e5559de314cb.slice - libcontainer container kubepods-burstable-pod055be3ba9eb4e32da466e5559de314cb.slice. 
Jan 14 01:24:53.962531 kubelet[2915]: E0114 01:24:53.962486 2915 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-18-46\" not found" node="ip-172-31-18-46" Jan 14 01:24:53.970864 kubelet[2915]: E0114 01:24:53.970022 2915 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-18-46\" not found" node="ip-172-31-18-46" Jan 14 01:24:53.967514 systemd[1]: Created slice kubepods-burstable-pod1684c3f4c2dc400d672baeb5ddebfc7e.slice - libcontainer container kubepods-burstable-pod1684c3f4c2dc400d672baeb5ddebfc7e.slice. Jan 14 01:24:53.986589 kubelet[2915]: I0114 01:24:53.986545 2915 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-18-46" Jan 14 01:24:53.987019 kubelet[2915]: E0114 01:24:53.986928 2915 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.18.46:6443/api/v1/nodes\": dial tcp 172.31.18.46:6443: connect: connection refused" node="ip-172-31-18-46" Jan 14 01:24:53.990429 systemd[1]: Created slice kubepods-burstable-poda589bbbdd7f65b3ba62c83422cd1e37e.slice - libcontainer container kubepods-burstable-poda589bbbdd7f65b3ba62c83422cd1e37e.slice. Jan 14 01:24:53.993210 kubelet[2915]: E0114 01:24:53.993180 2915 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-18-46\" not found" node="ip-172-31-18-46" Jan 14 01:24:54.004688 kubelet[2915]: E0114 01:24:54.004646 2915 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.18.46:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-46?timeout=10s\": dial tcp 172.31.18.46:6443: connect: connection refused" interval="400ms" Jan 14 01:24:54.106265 kubelet[2915]: I0114 01:24:54.106177 2915 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/055be3ba9eb4e32da466e5559de314cb-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-18-46\" (UID: \"055be3ba9eb4e32da466e5559de314cb\") " pod="kube-system/kube-apiserver-ip-172-31-18-46" Jan 14 01:24:54.106265 kubelet[2915]: I0114 01:24:54.106227 2915 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1684c3f4c2dc400d672baeb5ddebfc7e-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-18-46\" (UID: \"1684c3f4c2dc400d672baeb5ddebfc7e\") " pod="kube-system/kube-controller-manager-ip-172-31-18-46" Jan 14 01:24:54.106479 kubelet[2915]: I0114 01:24:54.106317 2915 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1684c3f4c2dc400d672baeb5ddebfc7e-kubeconfig\") pod \"kube-controller-manager-ip-172-31-18-46\" (UID: \"1684c3f4c2dc400d672baeb5ddebfc7e\") " pod="kube-system/kube-controller-manager-ip-172-31-18-46" Jan 14 01:24:54.106479 kubelet[2915]: I0114 01:24:54.106367 2915 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a589bbbdd7f65b3ba62c83422cd1e37e-kubeconfig\") pod \"kube-scheduler-ip-172-31-18-46\" (UID: \"a589bbbdd7f65b3ba62c83422cd1e37e\") " pod="kube-system/kube-scheduler-ip-172-31-18-46" Jan 14 01:24:54.106479 kubelet[2915]: I0114 01:24:54.106404 2915 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1684c3f4c2dc400d672baeb5ddebfc7e-ca-certs\") pod \"kube-controller-manager-ip-172-31-18-46\" (UID: \"1684c3f4c2dc400d672baeb5ddebfc7e\") " pod="kube-system/kube-controller-manager-ip-172-31-18-46" Jan 14 01:24:54.106479 kubelet[2915]: I0114 01:24:54.106435 2915 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1684c3f4c2dc400d672baeb5ddebfc7e-k8s-certs\") pod \"kube-controller-manager-ip-172-31-18-46\" (UID: \"1684c3f4c2dc400d672baeb5ddebfc7e\") " pod="kube-system/kube-controller-manager-ip-172-31-18-46" Jan 14 01:24:54.106699 kubelet[2915]: I0114 01:24:54.106463 2915 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1684c3f4c2dc400d672baeb5ddebfc7e-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-18-46\" (UID: \"1684c3f4c2dc400d672baeb5ddebfc7e\") " pod="kube-system/kube-controller-manager-ip-172-31-18-46" Jan 14 01:24:54.106699 kubelet[2915]: I0114 01:24:54.106532 2915 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/055be3ba9eb4e32da466e5559de314cb-ca-certs\") pod \"kube-apiserver-ip-172-31-18-46\" (UID: \"055be3ba9eb4e32da466e5559de314cb\") " pod="kube-system/kube-apiserver-ip-172-31-18-46" Jan 14 01:24:54.106699 kubelet[2915]: I0114 01:24:54.106597 2915 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/055be3ba9eb4e32da466e5559de314cb-k8s-certs\") pod \"kube-apiserver-ip-172-31-18-46\" (UID: \"055be3ba9eb4e32da466e5559de314cb\") " pod="kube-system/kube-apiserver-ip-172-31-18-46" Jan 14 01:24:54.189120 kubelet[2915]: I0114 01:24:54.189078 2915 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-18-46" Jan 14 01:24:54.189464 kubelet[2915]: E0114 01:24:54.189416 2915 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.18.46:6443/api/v1/nodes\": dial tcp 172.31.18.46:6443: connect: connection refused" node="ip-172-31-18-46" Jan 14 01:24:54.274039 containerd[1962]: time="2026-01-14T01:24:54.273594731Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-18-46,Uid:055be3ba9eb4e32da466e5559de314cb,Namespace:kube-system,Attempt:0,}" Jan 14 01:24:54.283954 containerd[1962]: time="2026-01-14T01:24:54.283898243Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-18-46,Uid:1684c3f4c2dc400d672baeb5ddebfc7e,Namespace:kube-system,Attempt:0,}" Jan 14 01:24:54.295068 containerd[1962]: time="2026-01-14T01:24:54.295030129Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-18-46,Uid:a589bbbdd7f65b3ba62c83422cd1e37e,Namespace:kube-system,Attempt:0,}" Jan 14 01:24:54.406159 kubelet[2915]: E0114 01:24:54.406097 2915 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.18.46:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-46?timeout=10s\": dial tcp 172.31.18.46:6443: connect: connection refused" interval="800ms" Jan 14 01:24:54.419713 containerd[1962]: time="2026-01-14T01:24:54.419672619Z" level=info 
msg="connecting to shim 2130d8b80bd342b11dcc024a2882daa028238c340cc24854ab47e8e6bd36f403" address="unix:///run/containerd/s/da1b2950aace19b611641c39afab2d24ad9fe9d23b5e45994c835f95424f82e9" namespace=k8s.io protocol=ttrpc version=3 Jan 14 01:24:54.423937 containerd[1962]: time="2026-01-14T01:24:54.423889094Z" level=info msg="connecting to shim c7ed81dad6a8059ce8db295698fb38b6faedfa6e42c0af4a767a1fc5e2ed0a64" address="unix:///run/containerd/s/9f5d112a63c14da8f22ed9129aa95c531f9bf59d2a88288a5f71921b6229fdef" namespace=k8s.io protocol=ttrpc version=3 Jan 14 01:24:54.424811 containerd[1962]: time="2026-01-14T01:24:54.424775772Z" level=info msg="connecting to shim c7fda0972c6386a7141f470e068bf7b70cc8ba6dc3ee9a67bf41139fa0e0b80d" address="unix:///run/containerd/s/a38b3149ac7c2ae17efbbba35b5eaa2c53057b669eac8de3fdd8fd36ab5bef5d" namespace=k8s.io protocol=ttrpc version=3 Jan 14 01:24:54.522190 systemd[1]: Started cri-containerd-2130d8b80bd342b11dcc024a2882daa028238c340cc24854ab47e8e6bd36f403.scope - libcontainer container 2130d8b80bd342b11dcc024a2882daa028238c340cc24854ab47e8e6bd36f403. Jan 14 01:24:54.524269 systemd[1]: Started cri-containerd-c7ed81dad6a8059ce8db295698fb38b6faedfa6e42c0af4a767a1fc5e2ed0a64.scope - libcontainer container c7ed81dad6a8059ce8db295698fb38b6faedfa6e42c0af4a767a1fc5e2ed0a64. Jan 14 01:24:54.527887 systemd[1]: Started cri-containerd-c7fda0972c6386a7141f470e068bf7b70cc8ba6dc3ee9a67bf41139fa0e0b80d.scope - libcontainer container c7fda0972c6386a7141f470e068bf7b70cc8ba6dc3ee9a67bf41139fa0e0b80d. Jan 14 01:24:54.553000 audit: BPF prog-id=90 op=LOAD Jan 14 01:24:54.556000 audit: BPF prog-id=91 op=LOAD Jan 14 01:24:54.556000 audit[3004]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000220238 a2=98 a3=0 items=0 ppid=2976 pid=3004 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:54.556000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6337656438316461643661383035396365386462323935363938666233 Jan 14 01:24:54.556000 audit: BPF prog-id=91 op=UNLOAD Jan 14 01:24:54.556000 audit[3004]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2976 pid=3004 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:54.556000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6337656438316461643661383035396365386462323935363938666233 Jan 14 01:24:54.556000 audit: BPF prog-id=92 op=LOAD Jan 14 01:24:54.556000 audit[3004]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000220488 a2=98 a3=0 items=0 ppid=2976 pid=3004 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:54.556000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6337656438316461643661383035396365386462323935363938666233 Jan 14 01:24:54.556000 audit: BPF prog-id=93 op=LOAD Jan 14 01:24:54.556000 audit[3004]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000220218 a2=98 a3=0 items=0 ppid=2976 pid=3004 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:54.556000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6337656438316461643661383035396365386462323935363938666233 Jan 14 01:24:54.556000 audit: BPF prog-id=93 op=UNLOAD Jan 14 01:24:54.556000 audit[3004]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2976 pid=3004 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:54.556000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6337656438316461643661383035396365386462323935363938666233 Jan 14 01:24:54.556000 audit: BPF prog-id=92 op=UNLOAD Jan 14 01:24:54.556000 audit[3004]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2976 pid=3004 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:54.556000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6337656438316461643661383035396365386462323935363938666233 Jan 14 01:24:54.556000 audit: BPF prog-id=94 op=LOAD Jan 14 01:24:54.556000 audit[3004]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0002206e8 a2=98 a3=0 items=0 ppid=2976 pid=3004 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:54.556000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6337656438316461643661383035396365386462323935363938666233 Jan 14 01:24:54.561000 audit: BPF prog-id=95 op=LOAD Jan 14 01:24:54.564000 audit: BPF prog-id=96 op=LOAD Jan 14 01:24:54.564000 audit: BPF prog-id=97 op=LOAD Jan 14 01:24:54.564000 audit[3011]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2979 pid=3011 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:54.564000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6337666461303937326336333836613731343166343730653036386266 Jan 14 01:24:54.565000 audit: BPF prog-id=97 op=UNLOAD Jan 14 01:24:54.565000 audit[3011]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2979 pid=3011 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:54.565000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6337666461303937326336333836613731343166343730653036386266 Jan 14 01:24:54.566000 audit: BPF prog-id=98 op=LOAD Jan 14 01:24:54.566000 audit: BPF prog-id=99 op=LOAD Jan 14 01:24:54.566000 audit[3002]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2973 pid=3002 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:54.566000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3231333064386238306264333432623131646363303234613238383264 Jan 14 01:24:54.566000 audit: BPF prog-id=98 op=UNLOAD Jan 14 01:24:54.566000 audit[3002]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2973 pid=3002 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:54.566000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3231333064386238306264333432623131646363303234613238383264 Jan 14 01:24:54.566000 audit[3011]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2979 pid=3011 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:54.566000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6337666461303937326336333836613731343166343730653036386266 Jan 14 01:24:54.566000 audit: BPF prog-id=100 op=LOAD Jan 14 01:24:54.566000 audit[3011]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=2979 pid=3011 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:54.566000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6337666461303937326336333836613731343166343730653036386266 Jan 14 01:24:54.567000 audit: BPF prog-id=100 op=UNLOAD Jan 14 01:24:54.567000 audit[3011]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2979 pid=3011 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:54.567000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6337666461303937326336333836613731343166343730653036386266 Jan 14 01:24:54.567000 audit: BPF prog-id=99 op=UNLOAD Jan 14 01:24:54.567000 audit[3011]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2979 pid=3011 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:54.567000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6337666461303937326336333836613731343166343730653036386266 Jan 14 01:24:54.567000 audit: BPF prog-id=101 op=LOAD Jan 14 01:24:54.567000 audit[3002]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2973 pid=3002 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:54.567000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3231333064386238306264333432623131646363303234613238383264 Jan 14 01:24:54.567000 audit: BPF prog-id=102 op=LOAD Jan 14 01:24:54.567000 audit[3002]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=2973 pid=3002 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:54.567000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3231333064386238306264333432623131646363303234613238383264 Jan 14 01:24:54.567000 audit: BPF prog-id=102 op=UNLOAD Jan 14 01:24:54.567000 audit[3002]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2973 pid=3002 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:54.567000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3231333064386238306264333432623131646363303234613238383264 Jan 14 01:24:54.567000 audit: BPF prog-id=101 op=UNLOAD Jan 14 01:24:54.567000 audit[3002]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2973 pid=3002 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:54.567000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3231333064386238306264333432623131646363303234613238383264 Jan 14 01:24:54.567000 audit: BPF prog-id=103 op=LOAD Jan 14 01:24:54.567000 audit[3011]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=2979 pid=3011 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:54.567000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6337666461303937326336333836613731343166343730653036386266 Jan 14 01:24:54.568000 audit: BPF prog-id=104 op=LOAD Jan 14 01:24:54.568000 audit[3002]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=2973 pid=3002 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:54.568000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3231333064386238306264333432623131646363303234613238383264 Jan 14 01:24:54.592308 kubelet[2915]: I0114 01:24:54.592275 2915 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-18-46" Jan 14 01:24:54.594590 kubelet[2915]: E0114 01:24:54.593332 2915 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.18.46:6443/api/v1/nodes\": dial tcp 172.31.18.46:6443: connect: connection refused" node="ip-172-31-18-46" Jan 14 01:24:54.627904 kubelet[2915]: E0114 01:24:54.627495 2915 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.18.46:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-18-46&limit=500&resourceVersion=0\": dial tcp 172.31.18.46:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 14 01:24:54.652272 containerd[1962]: time="2026-01-14T01:24:54.652218316Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-18-46,Uid:1684c3f4c2dc400d672baeb5ddebfc7e,Namespace:kube-system,Attempt:0,} returns sandbox id \"c7ed81dad6a8059ce8db295698fb38b6faedfa6e42c0af4a767a1fc5e2ed0a64\"" Jan 14 01:24:54.657732 containerd[1962]: time="2026-01-14T01:24:54.657191047Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-18-46,Uid:055be3ba9eb4e32da466e5559de314cb,Namespace:kube-system,Attempt:0,} returns sandbox id \"2130d8b80bd342b11dcc024a2882daa028238c340cc24854ab47e8e6bd36f403\"" Jan 14 01:24:54.666933 containerd[1962]: time="2026-01-14T01:24:54.666889492Z" level=info msg="CreateContainer within sandbox \"c7ed81dad6a8059ce8db295698fb38b6faedfa6e42c0af4a767a1fc5e2ed0a64\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 14 01:24:54.669820 containerd[1962]: time="2026-01-14T01:24:54.669756012Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-18-46,Uid:a589bbbdd7f65b3ba62c83422cd1e37e,Namespace:kube-system,Attempt:0,} returns sandbox id \"c7fda0972c6386a7141f470e068bf7b70cc8ba6dc3ee9a67bf41139fa0e0b80d\"" Jan 14 01:24:54.670714 containerd[1962]: time="2026-01-14T01:24:54.670692551Z" level=info msg="CreateContainer within sandbox \"2130d8b80bd342b11dcc024a2882daa028238c340cc24854ab47e8e6bd36f403\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 14 01:24:54.677411 containerd[1962]: time="2026-01-14T01:24:54.677106099Z" level=info msg="CreateContainer within sandbox \"c7fda0972c6386a7141f470e068bf7b70cc8ba6dc3ee9a67bf41139fa0e0b80d\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 14 01:24:54.687739 containerd[1962]: time="2026-01-14T01:24:54.687679825Z" level=info msg="Container b5ba6c1df4f4d57fb70beab2e5aad0de88345f228e254222367f44b1e5f20d1e: CDI devices from CRI Config.CDIDevices: []" Jan 14 01:24:54.697321 containerd[1962]: time="2026-01-14T01:24:54.697267541Z" level=info msg="Container ca2a831227d3e8c3d41766e2fc13a795f1354d80339af7ebf0d4ae7d41210d3c: CDI devices from CRI Config.CDIDevices: []" Jan 14 01:24:54.704303 containerd[1962]: time="2026-01-14T01:24:54.703693335Z" level=info msg="Container 7d5a13bb23ea033a089d6bc689e95c7f15a622bf8ff17eace8d6f57056edbdc7: CDI devices from CRI Config.CDIDevices: []" Jan 14 01:24:54.711378 containerd[1962]: time="2026-01-14T01:24:54.711330023Z" level=info msg="CreateContainer within sandbox \"c7ed81dad6a8059ce8db295698fb38b6faedfa6e42c0af4a767a1fc5e2ed0a64\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"b5ba6c1df4f4d57fb70beab2e5aad0de88345f228e254222367f44b1e5f20d1e\"" Jan 14 01:24:54.712223 containerd[1962]: time="2026-01-14T01:24:54.712195325Z" level=info msg="StartContainer for \"b5ba6c1df4f4d57fb70beab2e5aad0de88345f228e254222367f44b1e5f20d1e\"" Jan 14 01:24:54.713344 containerd[1962]: time="2026-01-14T01:24:54.713318590Z" level=info msg="connecting to shim b5ba6c1df4f4d57fb70beab2e5aad0de88345f228e254222367f44b1e5f20d1e" address="unix:///run/containerd/s/9f5d112a63c14da8f22ed9129aa95c531f9bf59d2a88288a5f71921b6229fdef" protocol=ttrpc version=3 Jan 14 01:24:54.721456 containerd[1962]: time="2026-01-14T01:24:54.721342446Z" level=info msg="CreateContainer within sandbox \"c7fda0972c6386a7141f470e068bf7b70cc8ba6dc3ee9a67bf41139fa0e0b80d\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"7d5a13bb23ea033a089d6bc689e95c7f15a622bf8ff17eace8d6f57056edbdc7\"" Jan 14 01:24:54.723275 containerd[1962]: time="2026-01-14T01:24:54.723097612Z" level=info msg="StartContainer for \"7d5a13bb23ea033a089d6bc689e95c7f15a622bf8ff17eace8d6f57056edbdc7\"" Jan 14 01:24:54.725386 containerd[1962]: time="2026-01-14T01:24:54.725336582Z" level=info msg="connecting to shim 7d5a13bb23ea033a089d6bc689e95c7f15a622bf8ff17eace8d6f57056edbdc7" 
address="unix:///run/containerd/s/a38b3149ac7c2ae17efbbba35b5eaa2c53057b669eac8de3fdd8fd36ab5bef5d" protocol=ttrpc version=3 Jan 14 01:24:54.728759 containerd[1962]: time="2026-01-14T01:24:54.728699523Z" level=info msg="CreateContainer within sandbox \"2130d8b80bd342b11dcc024a2882daa028238c340cc24854ab47e8e6bd36f403\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"ca2a831227d3e8c3d41766e2fc13a795f1354d80339af7ebf0d4ae7d41210d3c\"" Jan 14 01:24:54.729743 containerd[1962]: time="2026-01-14T01:24:54.729645805Z" level=info msg="StartContainer for \"ca2a831227d3e8c3d41766e2fc13a795f1354d80339af7ebf0d4ae7d41210d3c\"" Jan 14 01:24:54.733226 containerd[1962]: time="2026-01-14T01:24:54.733135493Z" level=info msg="connecting to shim ca2a831227d3e8c3d41766e2fc13a795f1354d80339af7ebf0d4ae7d41210d3c" address="unix:///run/containerd/s/da1b2950aace19b611641c39afab2d24ad9fe9d23b5e45994c835f95424f82e9" protocol=ttrpc version=3 Jan 14 01:24:54.741807 systemd[1]: Started cri-containerd-b5ba6c1df4f4d57fb70beab2e5aad0de88345f228e254222367f44b1e5f20d1e.scope - libcontainer container b5ba6c1df4f4d57fb70beab2e5aad0de88345f228e254222367f44b1e5f20d1e. Jan 14 01:24:54.769804 systemd[1]: Started cri-containerd-7d5a13bb23ea033a089d6bc689e95c7f15a622bf8ff17eace8d6f57056edbdc7.scope - libcontainer container 7d5a13bb23ea033a089d6bc689e95c7f15a622bf8ff17eace8d6f57056edbdc7. Jan 14 01:24:54.780874 systemd[1]: Started cri-containerd-ca2a831227d3e8c3d41766e2fc13a795f1354d80339af7ebf0d4ae7d41210d3c.scope - libcontainer container ca2a831227d3e8c3d41766e2fc13a795f1354d80339af7ebf0d4ae7d41210d3c. Jan 14 01:24:54.787000 audit: BPF prog-id=105 op=LOAD Jan 14 01:24:54.790000 audit: BPF prog-id=106 op=LOAD Jan 14 01:24:54.790000 audit[3090]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2976 pid=3090 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:54.790000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6235626136633164663466346435376662373062656162326535616164 Jan 14 01:24:54.790000 audit: BPF prog-id=106 op=UNLOAD Jan 14 01:24:54.790000 audit[3090]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2976 pid=3090 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:54.790000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6235626136633164663466346435376662373062656162326535616164 Jan 14 01:24:54.791000 audit: BPF prog-id=107 op=LOAD Jan 14 01:24:54.791000 audit[3090]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2976 pid=3090 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:54.791000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6235626136633164663466346435376662373062656162326535616164 Jan 14 01:24:54.792000 audit: BPF prog-id=108 op=LOAD Jan 14 01:24:54.792000 audit[3090]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2976 pid=3090 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:54.792000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6235626136633164663466346435376662373062656162326535616164 Jan 14 01:24:54.792000 audit: BPF prog-id=108 op=UNLOAD Jan 14 01:24:54.792000 audit[3090]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2976 pid=3090 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:54.792000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6235626136633164663466346435376662373062656162326535616164 Jan 14 01:24:54.792000 audit: BPF prog-id=107 op=UNLOAD Jan 14 01:24:54.792000 audit[3090]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2976 pid=3090 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:54.792000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6235626136633164663466346435376662373062656162326535616164 Jan 14 01:24:54.792000 audit: BPF prog-id=109 op=LOAD Jan 14 01:24:54.792000 audit[3090]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2976 pid=3090 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:54.792000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6235626136633164663466346435376662373062656162326535616164 Jan 14 01:24:54.798000 audit: BPF prog-id=110 op=LOAD Jan 14 01:24:54.800000 audit: BPF prog-id=111 op=LOAD Jan 14 01:24:54.800000 audit[3101]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000128238 a2=98 a3=0 items=0 ppid=2979 pid=3101 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:54.800000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3764356131336262323365613033336130383964366263363839653935 Jan 14 01:24:54.801000 audit: BPF prog-id=111 op=UNLOAD Jan 14 01:24:54.801000 audit[3101]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2979 pid=3101 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:54.801000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3764356131336262323365613033336130383964366263363839653935 Jan 14 01:24:54.801000 audit: BPF prog-id=112 op=LOAD Jan 14 01:24:54.801000 audit[3101]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000128488 a2=98 a3=0 items=0 ppid=2979 pid=3101 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:54.801000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3764356131336262323365613033336130383964366263363839653935 Jan 14 01:24:54.801000 audit: BPF prog-id=113 op=LOAD Jan 14 01:24:54.801000 audit[3101]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000128218 a2=98 a3=0 items=0 ppid=2979 pid=3101 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:54.801000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3764356131336262323365613033336130383964366263363839653935 Jan 14 01:24:54.801000 audit: BPF prog-id=113 op=UNLOAD Jan 14 01:24:54.801000 audit[3101]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2979 pid=3101 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:54.801000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3764356131336262323365613033336130383964366263363839653935 Jan 14 01:24:54.801000 audit: BPF prog-id=112 op=UNLOAD Jan 14 01:24:54.801000 audit[3101]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2979 pid=3101 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:54.801000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3764356131336262323365613033336130383964366263363839653935 Jan 14 01:24:54.801000 audit: BPF prog-id=114 op=LOAD Jan 14 01:24:54.801000 audit[3101]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001286e8 a2=98 a3=0 items=0 ppid=2979 pid=3101 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:54.801000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3764356131336262323365613033336130383964366263363839653935 Jan 14 01:24:54.810000 audit: BPF prog-id=115 op=LOAD Jan 14 01:24:54.811000 audit: BPF prog-id=116 op=LOAD Jan 14 01:24:54.811000 audit[3103]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2973 pid=3103 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:54.811000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6361326138333132323764336538633364343137363665326663313361 Jan 14 01:24:54.811000 audit: BPF prog-id=116 op=UNLOAD Jan 14 01:24:54.811000 audit[3103]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2973 pid=3103 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:54.811000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6361326138333132323764336538633364343137363665326663313361 Jan 14 01:24:54.811000 audit: BPF prog-id=117 op=LOAD Jan 14 01:24:54.811000 audit[3103]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2973 pid=3103 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:54.811000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6361326138333132323764336538633364343137363665326663313361 Jan 14 01:24:54.811000 audit: BPF prog-id=118 op=LOAD Jan 14 01:24:54.811000 audit[3103]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=2973 pid=3103 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:54.811000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6361326138333132323764336538633364343137363665326663313361 Jan 14 01:24:54.811000 audit: BPF prog-id=118 op=UNLOAD Jan 14 01:24:54.811000 audit[3103]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2973 pid=3103 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:54.811000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6361326138333132323764336538633364343137363665326663313361 Jan 14 01:24:54.811000 audit: BPF prog-id=117 op=UNLOAD Jan 14 01:24:54.811000 audit[3103]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2973 pid=3103 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:54.811000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6361326138333132323764336538633364343137363665326663313361 Jan 14 01:24:54.811000 audit: BPF prog-id=119 op=LOAD Jan 14 01:24:54.811000 audit[3103]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=2973 pid=3103 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:54.811000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6361326138333132323764336538633364343137363665326663313361 Jan 14 01:24:54.910740 kubelet[2915]: E0114 01:24:54.910699 2915 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.18.46:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.18.46:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 14 01:24:54.916010 containerd[1962]: time="2026-01-14T01:24:54.915955153Z" level=info msg="StartContainer for \"7d5a13bb23ea033a089d6bc689e95c7f15a622bf8ff17eace8d6f57056edbdc7\" returns successfully" Jan 14 01:24:54.919155 kubelet[2915]: E0114 01:24:54.919097 2915 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.18.46:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.18.46:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 14 01:24:54.926741 containerd[1962]: time="2026-01-14T01:24:54.926675545Z" level=info msg="StartContainer for \"b5ba6c1df4f4d57fb70beab2e5aad0de88345f228e254222367f44b1e5f20d1e\" returns successfully" Jan 14 01:24:54.926741 containerd[1962]: 
time="2026-01-14T01:24:54.926719122Z" level=info msg="StartContainer for \"ca2a831227d3e8c3d41766e2fc13a795f1354d80339af7ebf0d4ae7d41210d3c\" returns successfully" Jan 14 01:24:55.080514 kubelet[2915]: E0114 01:24:55.080275 2915 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.18.46:6443/api/v1/namespaces/default/events\": dial tcp 172.31.18.46:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-18-46.188a747f4beebb83 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-18-46,UID:ip-172-31-18-46,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-18-46,},FirstTimestamp:2026-01-14 01:24:53.779946371 +0000 UTC m=+0.403011008,LastTimestamp:2026-01-14 01:24:53.779946371 +0000 UTC m=+0.403011008,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-18-46,}" Jan 14 01:24:55.207485 kubelet[2915]: E0114 01:24:55.207431 2915 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.18.46:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-46?timeout=10s\": dial tcp 172.31.18.46:6443: connect: connection refused" interval="1.6s" Jan 14 01:24:55.236347 kubelet[2915]: E0114 01:24:55.236289 2915 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.18.46:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.18.46:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 14 01:24:55.395465 kubelet[2915]: I0114 01:24:55.395436 2915 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-18-46" Jan 14 01:24:55.396252 kubelet[2915]: E0114 01:24:55.396027 2915 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.18.46:6443/api/v1/nodes\": dial tcp 172.31.18.46:6443: connect: connection refused" node="ip-172-31-18-46" Jan 14 01:24:55.835982 kubelet[2915]: E0114 01:24:55.835842 2915 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.18.46:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.18.46:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 14 01:24:55.884856 kubelet[2915]: E0114 01:24:55.884827 2915 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-18-46\" not found" node="ip-172-31-18-46" Jan 14 01:24:55.887006 kubelet[2915]: E0114 01:24:55.886982 2915 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-18-46\" not found" node="ip-172-31-18-46" Jan 14 01:24:55.889184 kubelet[2915]: E0114 01:24:55.889159 2915 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-18-46\" not found" node="ip-172-31-18-46" Jan 14 01:24:56.808604 kubelet[2915]: E0114 01:24:56.808540 2915 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://172.31.18.46:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-46?timeout=10s\": dial tcp 172.31.18.46:6443: connect: connection refused" interval="3.2s" Jan 14 01:24:56.891400 kubelet[2915]: E0114 01:24:56.891369 2915 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-18-46\" not found" node="ip-172-31-18-46" Jan 14 01:24:56.891757 kubelet[2915]: E0114 01:24:56.891736 2915 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-18-46\" not found" node="ip-172-31-18-46" Jan 14 01:24:56.892022 kubelet[2915]: E0114 01:24:56.891995 2915 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-18-46\" not found" node="ip-172-31-18-46" Jan 14 01:24:56.998946 kubelet[2915]: I0114 01:24:56.998887 2915 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-18-46" Jan 14 01:24:56.999235 kubelet[2915]: E0114 01:24:56.999189 2915 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.18.46:6443/api/v1/nodes\": dial tcp 172.31.18.46:6443: connect: connection refused" node="ip-172-31-18-46" Jan 14 01:24:57.470030 kubelet[2915]: E0114 01:24:57.469803 2915 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.18.46:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.18.46:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 14 01:24:57.528119 kubelet[2915]: E0114 01:24:57.528057 2915 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.18.46:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-18-46&limit=500&resourceVersion=0\": dial tcp 172.31.18.46:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 14 01:24:57.893419 kubelet[2915]: E0114 01:24:57.893378 2915 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-18-46\" not found" node="ip-172-31-18-46" Jan 14 01:24:57.893812 kubelet[2915]: E0114 01:24:57.893647 2915 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-18-46\" not found" node="ip-172-31-18-46" Jan 14 01:24:57.918025 kubelet[2915]: E0114 01:24:57.917826 2915 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.18.46:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.18.46:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 14 01:24:58.110597 kubelet[2915]: E0114 01:24:58.110514 2915 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.18.46:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.18.46:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 14 01:25:00.018647 kubelet[2915]: E0114 01:25:00.018583 2915 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://172.31.18.46:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-46?timeout=10s\": dial tcp 172.31.18.46:6443: connect: connection refused" interval="6.4s" Jan 14 01:25:00.169778 kubelet[2915]: E0114 01:25:00.169733 2915 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.18.46:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.18.46:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 14 01:25:00.224487 kubelet[2915]: I0114 01:25:00.224395 2915 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-18-46" Jan 14 01:25:00.225196 kubelet[2915]: E0114 01:25:00.225148 2915 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.18.46:6443/api/v1/nodes\": dial tcp 172.31.18.46:6443: connect: connection refused" node="ip-172-31-18-46" Jan 14 01:25:00.837571 kubelet[2915]: E0114 01:25:00.837507 2915 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-18-46\" not found" node="ip-172-31-18-46" Jan 14 01:25:01.359245 kubelet[2915]: E0114 01:25:01.359182 2915 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-18-46\" not found" node="ip-172-31-18-46" Jan 14 01:25:02.878155 kubelet[2915]: E0114 01:25:02.878124 2915 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-18-46\" not found" node="ip-172-31-18-46" Jan 14 01:25:03.887715 kubelet[2915]: E0114 01:25:03.887564 2915 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-18-46\" not found" Jan 14 01:25:04.172996 update_engine[1929]: I20260114 01:25:04.172410 1929 update_attempter.cc:509] Updating boot flags... 
Jan 14 01:25:04.258770 kubelet[2915]: E0114 01:25:04.258725 2915 csi_plugin.go:397] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ip-172-31-18-46" not found Jan 14 01:25:04.629007 kubelet[2915]: E0114 01:25:04.628959 2915 csi_plugin.go:397] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ip-172-31-18-46" not found Jan 14 01:25:04.779871 kubelet[2915]: I0114 01:25:04.778628 2915 apiserver.go:52] "Watching apiserver" Jan 14 01:25:04.804843 kubelet[2915]: I0114 01:25:04.804799 2915 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 14 01:25:05.146578 kubelet[2915]: E0114 01:25:05.144677 2915 csi_plugin.go:397] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ip-172-31-18-46" not found Jan 14 01:25:06.044432 kubelet[2915]: E0114 01:25:06.044388 2915 csi_plugin.go:397] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ip-172-31-18-46" not found Jan 14 01:25:06.424101 kubelet[2915]: E0114 01:25:06.424061 2915 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-18-46\" not found" node="ip-172-31-18-46" Jan 14 01:25:06.627891 kubelet[2915]: I0114 01:25:06.627850 2915 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-18-46" Jan 14 01:25:06.636997 kubelet[2915]: I0114 01:25:06.636542 2915 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-18-46" Jan 14 01:25:06.704923 kubelet[2915]: I0114 01:25:06.704792 2915 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-18-46" Jan 14 01:25:06.728868 kubelet[2915]: I0114 01:25:06.728268 2915 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-18-46" Jan 14 01:25:06.734378 kubelet[2915]: I0114 01:25:06.734306 2915 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-18-46" Jan 14 01:25:07.091437 systemd[1]: Reload requested from client PID 3462 ('systemctl') (unit session-8.scope)... Jan 14 01:25:07.091457 systemd[1]: Reloading... Jan 14 01:25:07.223570 zram_generator::config[3510]: No configuration found. Jan 14 01:25:07.534505 systemd[1]: Reloading finished in 442 ms. Jan 14 01:25:07.571776 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 01:25:07.584237 systemd[1]: kubelet.service: Deactivated successfully. Jan 14 01:25:07.584703 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 01:25:07.588090 kernel: kauditd_printk_skb: 202 callbacks suppressed Jan 14 01:25:07.588158 kernel: audit: type=1131 audit(1768353907.583:409): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:25:07.583000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:25:07.584805 systemd[1]: kubelet.service: Consumed 943ms CPU time, 128.4M memory peak. Jan 14 01:25:07.590907 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
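Editor's note: the systemd entries above show kubelet.service being stopped and started again roughly 300 ms later (Deactivated at 01:25:07.584237, Started at 01:25:07.885910). The hypothetical helper below just measures that gap from journal lines formatted like the ones in this log; it is not part of any tool mentioned here.

```python
# Hypothetical helper: compute the restart gap of a systemd unit from
# journal lines shaped like the ones above.
from datetime import datetime

def parse_ts(line, year=2026):
    """Parse the 'Jan 14 01:25:07.584237' prefix used by these journal lines."""
    stamp = " ".join(line.split()[:3])
    return datetime.strptime(f"{year} {stamp}", "%Y %b %d %H:%M:%S.%f")

stopped = parse_ts("Jan 14 01:25:07.584237 systemd[1]: kubelet.service: Deactivated successfully.")
started = parse_ts("Jan 14 01:25:07.885910 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.")
print(f"kubelet restart gap: {(started - stopped).total_seconds():.3f}s")  # ~0.302s
```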
Jan 14 01:25:07.589000 audit: BPF prog-id=120 op=LOAD Jan 14 01:25:07.594815 kernel: audit: type=1334 audit(1768353907.589:410): prog-id=120 op=LOAD Jan 14 01:25:07.594918 kernel: audit: type=1334 audit(1768353907.589:411): prog-id=83 op=UNLOAD Jan 14 01:25:07.589000 audit: BPF prog-id=83 op=UNLOAD Jan 14 01:25:07.596845 kernel: audit: type=1334 audit(1768353907.591:412): prog-id=121 op=LOAD Jan 14 01:25:07.591000 audit: BPF prog-id=121 op=LOAD Jan 14 01:25:07.591000 audit: BPF prog-id=122 op=LOAD Jan 14 01:25:07.599944 kernel: audit: type=1334 audit(1768353907.591:413): prog-id=122 op=LOAD Jan 14 01:25:07.600046 kernel: audit: type=1334 audit(1768353907.591:414): prog-id=84 op=UNLOAD Jan 14 01:25:07.591000 audit: BPF prog-id=84 op=UNLOAD Jan 14 01:25:07.601585 kernel: audit: type=1334 audit(1768353907.591:415): prog-id=85 op=UNLOAD Jan 14 01:25:07.591000 audit: BPF prog-id=85 op=UNLOAD Jan 14 01:25:07.603370 kernel: audit: type=1334 audit(1768353907.592:416): prog-id=123 op=LOAD Jan 14 01:25:07.592000 audit: BPF prog-id=123 op=LOAD Jan 14 01:25:07.604919 kernel: audit: type=1334 audit(1768353907.592:417): prog-id=86 op=UNLOAD Jan 14 01:25:07.592000 audit: BPF prog-id=86 op=UNLOAD Jan 14 01:25:07.606424 kernel: audit: type=1334 audit(1768353907.593:418): prog-id=124 op=LOAD Jan 14 01:25:07.593000 audit: BPF prog-id=124 op=LOAD Jan 14 01:25:07.593000 audit: BPF prog-id=125 op=LOAD Jan 14 01:25:07.593000 audit: BPF prog-id=87 op=UNLOAD Jan 14 01:25:07.593000 audit: BPF prog-id=88 op=UNLOAD Jan 14 01:25:07.597000 audit: BPF prog-id=126 op=LOAD Jan 14 01:25:07.597000 audit: BPF prog-id=78 op=UNLOAD Jan 14 01:25:07.600000 audit: BPF prog-id=127 op=LOAD Jan 14 01:25:07.600000 audit: BPF prog-id=72 op=UNLOAD Jan 14 01:25:07.601000 audit: BPF prog-id=128 op=LOAD Jan 14 01:25:07.601000 audit: BPF prog-id=129 op=LOAD Jan 14 01:25:07.601000 audit: BPF prog-id=73 op=UNLOAD Jan 14 01:25:07.601000 audit: BPF prog-id=74 op=UNLOAD Jan 14 01:25:07.606000 audit: BPF prog-id=130 op=LOAD Jan 14 01:25:07.606000 audit: BPF prog-id=79 op=UNLOAD Jan 14 01:25:07.606000 audit: BPF prog-id=131 op=LOAD Jan 14 01:25:07.606000 audit: BPF prog-id=132 op=LOAD Jan 14 01:25:07.606000 audit: BPF prog-id=80 op=UNLOAD Jan 14 01:25:07.606000 audit: BPF prog-id=81 op=UNLOAD Jan 14 01:25:07.614000 audit: BPF prog-id=133 op=LOAD Jan 14 01:25:07.614000 audit: BPF prog-id=75 op=UNLOAD Jan 14 01:25:07.614000 audit: BPF prog-id=134 op=LOAD Jan 14 01:25:07.614000 audit: BPF prog-id=135 op=LOAD Jan 14 01:25:07.614000 audit: BPF prog-id=76 op=UNLOAD Jan 14 01:25:07.614000 audit: BPF prog-id=77 op=UNLOAD Jan 14 01:25:07.615000 audit: BPF prog-id=136 op=LOAD Jan 14 01:25:07.615000 audit: BPF prog-id=89 op=UNLOAD Jan 14 01:25:07.616000 audit: BPF prog-id=137 op=LOAD Jan 14 01:25:07.616000 audit: BPF prog-id=82 op=UNLOAD Jan 14 01:25:07.617000 audit: BPF prog-id=138 op=LOAD Jan 14 01:25:07.617000 audit: BPF prog-id=139 op=LOAD Jan 14 01:25:07.617000 audit: BPF prog-id=70 op=UNLOAD Jan 14 01:25:07.617000 audit: BPF prog-id=71 op=UNLOAD Jan 14 01:25:07.885910 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 01:25:07.884000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:25:07.896035 (kubelet)[3571]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 14 01:25:07.973573 kubelet[3571]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 14 01:25:07.973573 kubelet[3571]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 14 01:25:07.973573 kubelet[3571]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 14 01:25:07.973573 kubelet[3571]: I0114 01:25:07.972992 3571 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 14 01:25:07.991779 kubelet[3571]: I0114 01:25:07.991743 3571 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jan 14 01:25:07.991960 kubelet[3571]: I0114 01:25:07.991948 3571 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 14 01:25:07.992360 kubelet[3571]: I0114 01:25:07.992338 3571 server.go:956] "Client rotation is on, will bootstrap in background" Jan 14 01:25:08.001460 kubelet[3571]: I0114 01:25:08.001429 3571 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jan 14 01:25:08.009208 kubelet[3571]: I0114 01:25:08.009170 3571 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 14 01:25:08.033331 kubelet[3571]: I0114 01:25:08.033295 3571 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 14 01:25:08.040146 kubelet[3571]: I0114 01:25:08.040009 3571 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 14 01:25:08.040575 kubelet[3571]: I0114 01:25:08.040455 3571 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 14 01:25:08.040768 kubelet[3571]: I0114 01:25:08.040488 3571 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-18-46","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 14 01:25:08.040983 kubelet[3571]: I0114 01:25:08.040882 3571 topology_manager.go:138] "Creating topology manager with none policy" Jan 14 01:25:08.040983 kubelet[3571]: I0114 01:25:08.040897 3571 container_manager_linux.go:303] "Creating device plugin manager" Jan 14 01:25:08.042977 kubelet[3571]: I0114 01:25:08.042886 3571 state_mem.go:36] "Initialized new in-memory state store" Jan 14 01:25:08.044770 kubelet[3571]: I0114 01:25:08.044724 3571 kubelet.go:480] "Attempting to sync node with API server" Jan 14 01:25:08.044770 kubelet[3571]: I0114 01:25:08.044751 3571 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 14 01:25:08.050826 kubelet[3571]: I0114 01:25:08.050788 3571 kubelet.go:386] "Adding apiserver pod source" Jan 14 01:25:08.050826 kubelet[3571]: I0114 01:25:08.050836 3571 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 14 01:25:08.072362 kubelet[3571]: I0114 01:25:08.071355 3571 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 14 01:25:08.072362 kubelet[3571]: I0114 01:25:08.072073 3571 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 14 01:25:08.084720 kubelet[3571]: I0114 01:25:08.084696 3571 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 14 01:25:08.084908 kubelet[3571]: I0114 01:25:08.084899 3571 server.go:1289] "Started kubelet" Jan 14 01:25:08.087780 kubelet[3571]: I0114 01:25:08.087756 3571 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 14 01:25:08.103593 kubelet[3571]: I0114 
01:25:08.103519 3571 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 14 01:25:08.105472 kubelet[3571]: I0114 01:25:08.105447 3571 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 14 01:25:08.106460 kubelet[3571]: I0114 01:25:08.106439 3571 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 14 01:25:08.111404 kubelet[3571]: I0114 01:25:08.109468 3571 server.go:317] "Adding debug handlers to kubelet server" Jan 14 01:25:08.116391 kubelet[3571]: I0114 01:25:08.116357 3571 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 14 01:25:08.118948 kubelet[3571]: I0114 01:25:08.109682 3571 reconciler.go:26] "Reconciler: start to sync state" Jan 14 01:25:08.120334 kubelet[3571]: I0114 01:25:08.109504 3571 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 14 01:25:08.123567 kubelet[3571]: I0114 01:25:08.120159 3571 factory.go:223] Registration of the systemd container factory successfully Jan 14 01:25:08.124012 kubelet[3571]: I0114 01:25:08.123985 3571 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 14 01:25:08.124341 kubelet[3571]: I0114 01:25:08.124319 3571 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 14 01:25:08.124933 kubelet[3571]: I0114 01:25:08.124912 3571 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jan 14 01:25:08.129224 kubelet[3571]: I0114 01:25:08.127925 3571 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jan 14 01:25:08.129224 kubelet[3571]: I0114 01:25:08.127952 3571 status_manager.go:230] "Starting to sync pod status with apiserver" Jan 14 01:25:08.129224 kubelet[3571]: I0114 01:25:08.127975 3571 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 14 01:25:08.129224 kubelet[3571]: I0114 01:25:08.127986 3571 kubelet.go:2436] "Starting kubelet main sync loop" Jan 14 01:25:08.129224 kubelet[3571]: E0114 01:25:08.128036 3571 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 14 01:25:08.135234 kubelet[3571]: E0114 01:25:08.135197 3571 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 14 01:25:08.137590 kubelet[3571]: I0114 01:25:08.137352 3571 factory.go:223] Registration of the containerd container factory successfully Jan 14 01:25:08.195595 kubelet[3571]: I0114 01:25:08.195150 3571 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 14 01:25:08.195595 kubelet[3571]: I0114 01:25:08.195170 3571 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 14 01:25:08.195595 kubelet[3571]: I0114 01:25:08.195191 3571 state_mem.go:36] "Initialized new in-memory state store" Jan 14 01:25:08.195595 kubelet[3571]: I0114 01:25:08.195347 3571 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 14 01:25:08.195595 kubelet[3571]: I0114 01:25:08.195356 3571 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 14 01:25:08.195595 kubelet[3571]: I0114 01:25:08.195379 3571 policy_none.go:49] "None policy: Start" Jan 14 01:25:08.195595 kubelet[3571]: I0114 01:25:08.195387 3571 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 14 01:25:08.195595 kubelet[3571]: I0114 01:25:08.195397 3571 state_mem.go:35] "Initializing new in-memory state store" Jan 14 01:25:08.195595 kubelet[3571]: I0114 01:25:08.195494 3571 state_mem.go:75] "Updated machine memory state" Jan 14 01:25:08.202855 kubelet[3571]: E0114 01:25:08.202043 3571 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 14 01:25:08.202855 kubelet[3571]: I0114 01:25:08.202201 3571 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 14 01:25:08.202855 kubelet[3571]: I0114 01:25:08.202211 3571 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 14 01:25:08.204123 kubelet[3571]: I0114 01:25:08.204102 3571 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 14 01:25:08.209821 kubelet[3571]: E0114 01:25:08.209793 3571 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 14 01:25:08.235329 kubelet[3571]: I0114 01:25:08.235289 3571 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-18-46" Jan 14 01:25:08.235579 kubelet[3571]: I0114 01:25:08.235540 3571 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-18-46" Jan 14 01:25:08.238920 kubelet[3571]: I0114 01:25:08.238893 3571 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-18-46" Jan 14 01:25:08.248279 kubelet[3571]: E0114 01:25:08.248236 3571 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-18-46\" already exists" pod="kube-system/kube-scheduler-ip-172-31-18-46" Jan 14 01:25:08.250584 kubelet[3571]: E0114 01:25:08.250557 3571 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-18-46\" already exists" pod="kube-system/kube-apiserver-ip-172-31-18-46" Jan 14 01:25:08.251831 kubelet[3571]: E0114 01:25:08.251758 3571 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-18-46\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-18-46" Jan 14 01:25:08.312205 kubelet[3571]: I0114 01:25:08.312170 3571 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-18-46" Jan 14 01:25:08.323875 kubelet[3571]: I0114 01:25:08.323543 3571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/055be3ba9eb4e32da466e5559de314cb-ca-certs\") pod \"kube-apiserver-ip-172-31-18-46\" (UID: \"055be3ba9eb4e32da466e5559de314cb\") " pod="kube-system/kube-apiserver-ip-172-31-18-46" Jan 14 01:25:08.323875 kubelet[3571]: I0114 01:25:08.323674 3571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/055be3ba9eb4e32da466e5559de314cb-k8s-certs\") pod \"kube-apiserver-ip-172-31-18-46\" (UID: \"055be3ba9eb4e32da466e5559de314cb\") " pod="kube-system/kube-apiserver-ip-172-31-18-46" Jan 14 01:25:08.323875 kubelet[3571]: I0114 01:25:08.323734 3571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/055be3ba9eb4e32da466e5559de314cb-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-18-46\" (UID: \"055be3ba9eb4e32da466e5559de314cb\") " pod="kube-system/kube-apiserver-ip-172-31-18-46" Jan 14 01:25:08.323875 kubelet[3571]: I0114 01:25:08.323785 3571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1684c3f4c2dc400d672baeb5ddebfc7e-ca-certs\") pod \"kube-controller-manager-ip-172-31-18-46\" (UID: \"1684c3f4c2dc400d672baeb5ddebfc7e\") " pod="kube-system/kube-controller-manager-ip-172-31-18-46" Jan 14 01:25:08.323875 kubelet[3571]: I0114 01:25:08.323811 3571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1684c3f4c2dc400d672baeb5ddebfc7e-kubeconfig\") pod \"kube-controller-manager-ip-172-31-18-46\" (UID: \"1684c3f4c2dc400d672baeb5ddebfc7e\") " pod="kube-system/kube-controller-manager-ip-172-31-18-46" Jan 14 01:25:08.324219 kubelet[3571]: I0114 01:25:08.323942 3571 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a589bbbdd7f65b3ba62c83422cd1e37e-kubeconfig\") pod \"kube-scheduler-ip-172-31-18-46\" (UID: \"a589bbbdd7f65b3ba62c83422cd1e37e\") " pod="kube-system/kube-scheduler-ip-172-31-18-46" Jan 14 01:25:08.324219 kubelet[3571]: I0114 01:25:08.323966 3571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1684c3f4c2dc400d672baeb5ddebfc7e-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-18-46\" (UID: \"1684c3f4c2dc400d672baeb5ddebfc7e\") " pod="kube-system/kube-controller-manager-ip-172-31-18-46" Jan 14 01:25:08.324219 kubelet[3571]: I0114 01:25:08.324015 3571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1684c3f4c2dc400d672baeb5ddebfc7e-k8s-certs\") pod \"kube-controller-manager-ip-172-31-18-46\" (UID: \"1684c3f4c2dc400d672baeb5ddebfc7e\") " pod="kube-system/kube-controller-manager-ip-172-31-18-46" Jan 14 01:25:08.324219 kubelet[3571]: I0114 01:25:08.324038 3571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1684c3f4c2dc400d672baeb5ddebfc7e-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-18-46\" (UID: \"1684c3f4c2dc400d672baeb5ddebfc7e\") " pod="kube-system/kube-controller-manager-ip-172-31-18-46" Jan 14 01:25:08.325663 kubelet[3571]: I0114 01:25:08.325634 3571 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-18-46" Jan 14 01:25:08.326075 kubelet[3571]: I0114 01:25:08.325775 3571 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-18-46" Jan 14 01:25:09.054342 kubelet[3571]: I0114 01:25:09.054293 3571 apiserver.go:52] "Watching apiserver" Jan 14 01:25:09.112335 kubelet[3571]: I0114 01:25:09.112279 3571 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 14 01:25:09.117258 kubelet[3571]: I0114 01:25:09.117181 3571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-18-46" podStartSLOduration=3.117163751 podStartE2EDuration="3.117163751s" podCreationTimestamp="2026-01-14 01:25:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-14 01:25:09.116908017 +0000 UTC m=+1.205196183" watchObservedRunningTime="2026-01-14 01:25:09.117163751 +0000 UTC m=+1.205451899" Jan 14 01:25:09.138830 kubelet[3571]: I0114 01:25:09.138780 3571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-18-46" podStartSLOduration=3.138766411 podStartE2EDuration="3.138766411s" podCreationTimestamp="2026-01-14 01:25:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-14 01:25:09.128452342 +0000 UTC m=+1.216740532" watchObservedRunningTime="2026-01-14 01:25:09.138766411 +0000 UTC m=+1.227054544" Jan 14 01:25:09.139832 kubelet[3571]: I0114 01:25:09.139763 3571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-18-46" podStartSLOduration=3.139750323 podStartE2EDuration="3.139750323s" 
podCreationTimestamp="2026-01-14 01:25:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-14 01:25:09.138978575 +0000 UTC m=+1.227266709" watchObservedRunningTime="2026-01-14 01:25:09.139750323 +0000 UTC m=+1.228038475" Jan 14 01:25:09.165646 kubelet[3571]: I0114 01:25:09.165540 3571 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-18-46" Jan 14 01:25:09.182064 kubelet[3571]: E0114 01:25:09.181670 3571 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-18-46\" already exists" pod="kube-system/kube-scheduler-ip-172-31-18-46" Jan 14 01:25:11.120567 kubelet[3571]: I0114 01:25:11.120488 3571 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 14 01:25:11.158564 containerd[1962]: time="2026-01-14T01:25:11.158481372Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 14 01:25:11.159304 kubelet[3571]: I0114 01:25:11.159273 3571 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 14 01:25:16.600432 systemd[1]: Created slice kubepods-besteffort-pod9ff6e30a_a36e_47d4_86a4_56ddc3783493.slice - libcontainer container kubepods-besteffort-pod9ff6e30a_a36e_47d4_86a4_56ddc3783493.slice. Jan 14 01:25:16.629838 systemd[1]: Created slice kubepods-besteffort-podf92c0dc2_1a91_471c_9cce_159e02ced74e.slice - libcontainer container kubepods-besteffort-podf92c0dc2_1a91_471c_9cce_159e02ced74e.slice. Jan 14 01:25:16.682393 kubelet[3571]: I0114 01:25:16.682347 3571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x947z\" (UniqueName: \"kubernetes.io/projected/f92c0dc2-1a91-471c-9cce-159e02ced74e-kube-api-access-x947z\") pod \"tigera-operator-7dcd859c48-27qbd\" (UID: \"f92c0dc2-1a91-471c-9cce-159e02ced74e\") " pod="tigera-operator/tigera-operator-7dcd859c48-27qbd" Jan 14 01:25:16.682393 kubelet[3571]: I0114 01:25:16.682398 3571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/9ff6e30a-a36e-47d4-86a4-56ddc3783493-kube-proxy\") pod \"kube-proxy-tb885\" (UID: \"9ff6e30a-a36e-47d4-86a4-56ddc3783493\") " pod="kube-system/kube-proxy-tb885" Jan 14 01:25:16.682393 kubelet[3571]: I0114 01:25:16.682418 3571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9ff6e30a-a36e-47d4-86a4-56ddc3783493-xtables-lock\") pod \"kube-proxy-tb885\" (UID: \"9ff6e30a-a36e-47d4-86a4-56ddc3783493\") " pod="kube-system/kube-proxy-tb885" Jan 14 01:25:16.682936 kubelet[3571]: I0114 01:25:16.682464 3571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6q4gv\" (UniqueName: \"kubernetes.io/projected/9ff6e30a-a36e-47d4-86a4-56ddc3783493-kube-api-access-6q4gv\") pod \"kube-proxy-tb885\" (UID: \"9ff6e30a-a36e-47d4-86a4-56ddc3783493\") " pod="kube-system/kube-proxy-tb885" Jan 14 01:25:16.682936 kubelet[3571]: I0114 01:25:16.682487 3571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/f92c0dc2-1a91-471c-9cce-159e02ced74e-var-lib-calico\") pod \"tigera-operator-7dcd859c48-27qbd\" (UID: 
\"f92c0dc2-1a91-471c-9cce-159e02ced74e\") " pod="tigera-operator/tigera-operator-7dcd859c48-27qbd" Jan 14 01:25:16.682936 kubelet[3571]: I0114 01:25:16.682505 3571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9ff6e30a-a36e-47d4-86a4-56ddc3783493-lib-modules\") pod \"kube-proxy-tb885\" (UID: \"9ff6e30a-a36e-47d4-86a4-56ddc3783493\") " pod="kube-system/kube-proxy-tb885" Jan 14 01:25:16.911260 containerd[1962]: time="2026-01-14T01:25:16.911128378Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tb885,Uid:9ff6e30a-a36e-47d4-86a4-56ddc3783493,Namespace:kube-system,Attempt:0,}" Jan 14 01:25:16.938207 containerd[1962]: time="2026-01-14T01:25:16.937964931Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-27qbd,Uid:f92c0dc2-1a91-471c-9cce-159e02ced74e,Namespace:tigera-operator,Attempt:0,}" Jan 14 01:25:16.942783 containerd[1962]: time="2026-01-14T01:25:16.942601333Z" level=info msg="connecting to shim 718071988213e220a3890dbb36e14778b30084597eb1abc38ff5ef97a9eefb06" address="unix:///run/containerd/s/96fb784a155d240fc2c4e6640ae25f4d3f5b2e33a3f60d38e03c75b4e4acc769" namespace=k8s.io protocol=ttrpc version=3 Jan 14 01:25:16.982216 containerd[1962]: time="2026-01-14T01:25:16.982170330Z" level=info msg="connecting to shim c5810db3e706df4f748cd1e85159a282084b0ddf4c5a05434964fe519d47e3e1" address="unix:///run/containerd/s/4af328783471a9f1a24e3b69147c03bc999fed72a7b8880bdf97e0404f5b67f3" namespace=k8s.io protocol=ttrpc version=3 Jan 14 01:25:16.982859 systemd[1]: Started cri-containerd-718071988213e220a3890dbb36e14778b30084597eb1abc38ff5ef97a9eefb06.scope - libcontainer container 718071988213e220a3890dbb36e14778b30084597eb1abc38ff5ef97a9eefb06. Jan 14 01:25:17.022826 systemd[1]: Started cri-containerd-c5810db3e706df4f748cd1e85159a282084b0ddf4c5a05434964fe519d47e3e1.scope - libcontainer container c5810db3e706df4f748cd1e85159a282084b0ddf4c5a05434964fe519d47e3e1. 
Jan 14 01:25:17.028064 kernel: kauditd_printk_skb: 32 callbacks suppressed Jan 14 01:25:17.028183 kernel: audit: type=1334 audit(1768353917.023:451): prog-id=140 op=LOAD Jan 14 01:25:17.023000 audit: BPF prog-id=140 op=LOAD Jan 14 01:25:17.027000 audit: BPF prog-id=141 op=LOAD Jan 14 01:25:17.030564 kernel: audit: type=1334 audit(1768353917.027:452): prog-id=141 op=LOAD Jan 14 01:25:17.027000 audit[3641]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=3629 pid=3641 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:25:17.035679 kernel: audit: type=1300 audit(1768353917.027:452): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=3629 pid=3641 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:25:17.027000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3731383037313938383231336532323061333839306462623336653134 Jan 14 01:25:17.047280 kernel: audit: type=1327 audit(1768353917.027:452): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3731383037313938383231336532323061333839306462623336653134 Jan 14 01:25:17.047355 kernel: audit: type=1334 audit(1768353917.027:453): prog-id=141 op=UNLOAD Jan 14 01:25:17.027000 audit: BPF prog-id=141 op=UNLOAD Jan 14 01:25:17.027000 audit[3641]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3629 pid=3641 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:25:17.058678 kernel: audit: type=1300 audit(1768353917.027:453): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3629 pid=3641 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:25:17.058776 kernel: audit: type=1327 audit(1768353917.027:453): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3731383037313938383231336532323061333839306462623336653134 Jan 14 01:25:17.027000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3731383037313938383231336532323061333839306462623336653134 Jan 14 01:25:17.059997 kernel: audit: type=1334 audit(1768353917.027:454): prog-id=142 op=LOAD Jan 14 01:25:17.027000 audit: BPF prog-id=142 op=LOAD Jan 14 01:25:17.065012 kernel: audit: type=1300 audit(1768353917.027:454): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=3629 pid=3641 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:25:17.027000 audit[3641]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=3629 pid=3641 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:25:17.070059 kernel: audit: type=1327 audit(1768353917.027:454): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3731383037313938383231336532323061333839306462623336653134 Jan 14 01:25:17.027000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3731383037313938383231336532323061333839306462623336653134 Jan 14 01:25:17.027000 audit: BPF prog-id=143 op=LOAD Jan 14 01:25:17.027000 audit[3641]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=3629 pid=3641 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:25:17.027000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3731383037313938383231336532323061333839306462623336653134 Jan 14 01:25:17.027000 audit: BPF prog-id=143 op=UNLOAD Jan 14 01:25:17.027000 audit[3641]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3629 pid=3641 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:25:17.027000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3731383037313938383231336532323061333839306462623336653134 Jan 14 01:25:17.027000 audit: BPF prog-id=142 op=UNLOAD Jan 14 01:25:17.027000 audit[3641]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3629 pid=3641 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:25:17.027000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3731383037313938383231336532323061333839306462623336653134 Jan 14 01:25:17.027000 audit: BPF prog-id=144 op=LOAD Jan 14 01:25:17.027000 audit[3641]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=3629 pid=3641 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:25:17.027000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3731383037313938383231336532323061333839306462623336653134 Jan 14 01:25:17.041000 audit: BPF prog-id=145 op=LOAD Jan 14 01:25:17.042000 audit: BPF prog-id=146 op=LOAD Jan 14 01:25:17.042000 audit[3680]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=3660 pid=3680 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:25:17.042000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6335383130646233653730366466346637343863643165383531353961 Jan 14 01:25:17.042000 audit: BPF prog-id=146 op=UNLOAD Jan 14 01:25:17.042000 audit[3680]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3660 pid=3680 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:25:17.042000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6335383130646233653730366466346637343863643165383531353961 Jan 14 01:25:17.042000 audit: BPF prog-id=147 op=LOAD Jan 14 01:25:17.042000 audit[3680]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3660 pid=3680 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:25:17.042000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6335383130646233653730366466346637343863643165383531353961 Jan 14 01:25:17.042000 audit: BPF prog-id=148 op=LOAD Jan 14 01:25:17.042000 audit[3680]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=3660 pid=3680 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:25:17.042000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6335383130646233653730366466346637343863643165383531353961 Jan 14 01:25:17.042000 audit: BPF prog-id=148 op=UNLOAD Jan 14 01:25:17.042000 audit[3680]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3660 pid=3680 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:25:17.042000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6335383130646233653730366466346637343863643165383531353961 Jan 14 01:25:17.042000 audit: BPF prog-id=147 op=UNLOAD Jan 14 01:25:17.042000 audit[3680]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3660 pid=3680 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:25:17.042000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6335383130646233653730366466346637343863643165383531353961 Jan 14 01:25:17.042000 audit: BPF prog-id=149 op=LOAD Jan 14 01:25:17.042000 audit[3680]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=3660 pid=3680 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:25:17.042000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6335383130646233653730366466346637343863643165383531353961 Jan 14 01:25:17.081286 containerd[1962]: time="2026-01-14T01:25:17.081097771Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tb885,Uid:9ff6e30a-a36e-47d4-86a4-56ddc3783493,Namespace:kube-system,Attempt:0,} returns sandbox id \"718071988213e220a3890dbb36e14778b30084597eb1abc38ff5ef97a9eefb06\"" Jan 14 01:25:17.094577 containerd[1962]: time="2026-01-14T01:25:17.094317847Z" level=info msg="CreateContainer within sandbox \"718071988213e220a3890dbb36e14778b30084597eb1abc38ff5ef97a9eefb06\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 14 01:25:17.119069 containerd[1962]: time="2026-01-14T01:25:17.119024914Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-27qbd,Uid:f92c0dc2-1a91-471c-9cce-159e02ced74e,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"c5810db3e706df4f748cd1e85159a282084b0ddf4c5a05434964fe519d47e3e1\"" Jan 14 01:25:17.121250 containerd[1962]: time="2026-01-14T01:25:17.121186020Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Jan 14 01:25:17.131754 containerd[1962]: time="2026-01-14T01:25:17.131698190Z" level=info msg="Container 03ba474373aeb4adae122ec4ebbf8cf3447c873e0c873c10118b2e72d8d202c1: CDI devices from CRI Config.CDIDevices: []" Jan 14 01:25:17.201023 containerd[1962]: time="2026-01-14T01:25:17.200217719Z" level=info msg="CreateContainer within sandbox \"718071988213e220a3890dbb36e14778b30084597eb1abc38ff5ef97a9eefb06\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"03ba474373aeb4adae122ec4ebbf8cf3447c873e0c873c10118b2e72d8d202c1\"" Jan 14 01:25:17.206074 containerd[1962]: time="2026-01-14T01:25:17.205385268Z" level=info msg="StartContainer for \"03ba474373aeb4adae122ec4ebbf8cf3447c873e0c873c10118b2e72d8d202c1\"" Jan 14 01:25:17.208283 containerd[1962]: time="2026-01-14T01:25:17.208252360Z" level=info msg="connecting to shim 
03ba474373aeb4adae122ec4ebbf8cf3447c873e0c873c10118b2e72d8d202c1" address="unix:///run/containerd/s/96fb784a155d240fc2c4e6640ae25f4d3f5b2e33a3f60d38e03c75b4e4acc769" protocol=ttrpc version=3 Jan 14 01:25:17.231805 systemd[1]: Started cri-containerd-03ba474373aeb4adae122ec4ebbf8cf3447c873e0c873c10118b2e72d8d202c1.scope - libcontainer container 03ba474373aeb4adae122ec4ebbf8cf3447c873e0c873c10118b2e72d8d202c1. Jan 14 01:25:17.282000 audit: BPF prog-id=150 op=LOAD Jan 14 01:25:17.282000 audit[3712]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=3629 pid=3712 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:25:17.282000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3033626134373433373361656234616461653132326563346562626638 Jan 14 01:25:17.282000 audit: BPF prog-id=151 op=LOAD Jan 14 01:25:17.282000 audit[3712]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=3629 pid=3712 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:25:17.282000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3033626134373433373361656234616461653132326563346562626638 Jan 14 01:25:17.282000 audit: BPF prog-id=151 op=UNLOAD Jan 14 01:25:17.282000 audit[3712]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3629 pid=3712 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:25:17.282000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3033626134373433373361656234616461653132326563346562626638 Jan 14 01:25:17.282000 audit: BPF prog-id=150 op=UNLOAD Jan 14 01:25:17.282000 audit[3712]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3629 pid=3712 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:25:17.282000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3033626134373433373361656234616461653132326563346562626638 Jan 14 01:25:17.282000 audit: BPF prog-id=152 op=LOAD Jan 14 01:25:17.282000 audit[3712]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 ppid=3629 pid=3712 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:25:17.282000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3033626134373433373361656234616461653132326563346562626638 Jan 14 01:25:17.309610 containerd[1962]: time="2026-01-14T01:25:17.309530461Z" level=info msg="StartContainer for \"03ba474373aeb4adae122ec4ebbf8cf3447c873e0c873c10118b2e72d8d202c1\" returns successfully" Jan 14 01:25:18.243529 kubelet[3571]: I0114 01:25:18.243291 3571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-tb885" podStartSLOduration=7.243274906 podStartE2EDuration="7.243274906s" podCreationTimestamp="2026-01-14 01:25:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-14 01:25:18.243020495 +0000 UTC m=+10.331308649" watchObservedRunningTime="2026-01-14 01:25:18.243274906 +0000 UTC m=+10.331563060" Jan 14 01:25:18.783846 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2123544357.mount: Deactivated successfully. Jan 14 01:25:19.626161 containerd[1962]: time="2026-01-14T01:25:19.626106767Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:25:19.628373 containerd[1962]: time="2026-01-14T01:25:19.628226568Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=23558205" Jan 14 01:25:19.630579 containerd[1962]: time="2026-01-14T01:25:19.630513564Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:25:19.634608 containerd[1962]: time="2026-01-14T01:25:19.634543090Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:25:19.635262 containerd[1962]: time="2026-01-14T01:25:19.635232622Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 2.514010343s" Jan 14 01:25:19.635349 containerd[1962]: time="2026-01-14T01:25:19.635336548Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Jan 14 01:25:19.642207 containerd[1962]: time="2026-01-14T01:25:19.642158342Z" level=info msg="CreateContainer within sandbox \"c5810db3e706df4f748cd1e85159a282084b0ddf4c5a05434964fe519d47e3e1\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 14 01:25:19.659098 containerd[1962]: time="2026-01-14T01:25:19.658246922Z" level=info msg="Container a3d024f4a41a21e584aca1946ba177f36e12ea53a2ea9eb2cfa56704ad6c8ed0: CDI devices from CRI Config.CDIDevices: []" Jan 14 01:25:19.675104 containerd[1962]: time="2026-01-14T01:25:19.675022745Z" level=info msg="CreateContainer within sandbox \"c5810db3e706df4f748cd1e85159a282084b0ddf4c5a05434964fe519d47e3e1\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id 
\"a3d024f4a41a21e584aca1946ba177f36e12ea53a2ea9eb2cfa56704ad6c8ed0\"" Jan 14 01:25:19.676314 containerd[1962]: time="2026-01-14T01:25:19.676034665Z" level=info msg="StartContainer for \"a3d024f4a41a21e584aca1946ba177f36e12ea53a2ea9eb2cfa56704ad6c8ed0\"" Jan 14 01:25:19.677474 containerd[1962]: time="2026-01-14T01:25:19.677439707Z" level=info msg="connecting to shim a3d024f4a41a21e584aca1946ba177f36e12ea53a2ea9eb2cfa56704ad6c8ed0" address="unix:///run/containerd/s/4af328783471a9f1a24e3b69147c03bc999fed72a7b8880bdf97e0404f5b67f3" protocol=ttrpc version=3 Jan 14 01:25:19.706864 systemd[1]: Started cri-containerd-a3d024f4a41a21e584aca1946ba177f36e12ea53a2ea9eb2cfa56704ad6c8ed0.scope - libcontainer container a3d024f4a41a21e584aca1946ba177f36e12ea53a2ea9eb2cfa56704ad6c8ed0. Jan 14 01:25:19.721000 audit: BPF prog-id=153 op=LOAD Jan 14 01:25:19.721000 audit: BPF prog-id=154 op=LOAD Jan 14 01:25:19.721000 audit[3751]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8238 a2=98 a3=0 items=0 ppid=3660 pid=3751 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:25:19.721000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6133643032346634613431613231653538346163613139343662613137 Jan 14 01:25:19.721000 audit: BPF prog-id=154 op=UNLOAD Jan 14 01:25:19.721000 audit[3751]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3660 pid=3751 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:25:19.721000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6133643032346634613431613231653538346163613139343662613137 Jan 14 01:25:19.721000 audit: BPF prog-id=155 op=LOAD Jan 14 01:25:19.721000 audit[3751]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=3660 pid=3751 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:25:19.721000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6133643032346634613431613231653538346163613139343662613137 Jan 14 01:25:19.721000 audit: BPF prog-id=156 op=LOAD Jan 14 01:25:19.721000 audit[3751]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=3660 pid=3751 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:25:19.721000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6133643032346634613431613231653538346163613139343662613137 Jan 14 01:25:19.722000 audit: BPF prog-id=156 op=UNLOAD Jan 14 01:25:19.722000 audit[3751]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3660 pid=3751 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:25:19.722000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6133643032346634613431613231653538346163613139343662613137 Jan 14 01:25:19.722000 audit: BPF prog-id=155 op=UNLOAD Jan 14 01:25:19.722000 audit[3751]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3660 pid=3751 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:25:19.722000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6133643032346634613431613231653538346163613139343662613137 Jan 14 01:25:19.722000 audit: BPF prog-id=157 op=LOAD Jan 14 01:25:19.722000 audit[3751]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 ppid=3660 pid=3751 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:25:19.722000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6133643032346634613431613231653538346163613139343662613137 Jan 14 01:25:19.752476 containerd[1962]: time="2026-01-14T01:25:19.752005340Z" level=info msg="StartContainer for \"a3d024f4a41a21e584aca1946ba177f36e12ea53a2ea9eb2cfa56704ad6c8ed0\" returns successfully" Jan 14 01:25:21.856000 audit[3817]: NETFILTER_CFG table=mangle:54 family=2 entries=1 op=nft_register_chain pid=3817 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:25:21.856000 audit[3817]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffce14d50e0 a2=0 a3=7ffce14d50cc items=0 ppid=3724 pid=3817 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:25:21.856000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 14 01:25:21.858000 audit[3819]: NETFILTER_CFG table=nat:55 family=2 entries=1 op=nft_register_chain pid=3819 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:25:21.858000 audit[3819]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc453f3890 a2=0 a3=7ffc453f387c items=0 ppid=3724 pid=3819 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:25:21.858000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 14 01:25:21.859000 audit[3820]: NETFILTER_CFG table=filter:56 family=2 entries=1 op=nft_register_chain pid=3820 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:25:21.859000 audit[3820]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff15ae7e10 a2=0 a3=7fff15ae7dfc items=0 ppid=3724 pid=3820 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:25:21.859000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 14 01:25:21.861000 audit[3821]: NETFILTER_CFG table=mangle:57 family=10 entries=1 op=nft_register_chain pid=3821 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:25:21.861000 audit[3821]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffcc36aa880 a2=0 a3=7ffcc36aa86c items=0 ppid=3724 pid=3821 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:25:21.861000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 14 01:25:21.862000 audit[3822]: NETFILTER_CFG table=nat:58 family=10 entries=1 op=nft_register_chain pid=3822 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:25:21.862000 audit[3822]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc3e9c54a0 a2=0 a3=7ffc3e9c548c items=0 ppid=3724 pid=3822 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:25:21.862000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 14 01:25:21.864000 audit[3823]: NETFILTER_CFG table=filter:59 family=10 entries=1 op=nft_register_chain pid=3823 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:25:21.864000 audit[3823]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe7d69cf30 a2=0 a3=7ffe7d69cf1c items=0 ppid=3724 pid=3823 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:25:21.864000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 14 01:25:21.996000 audit[3826]: NETFILTER_CFG table=filter:60 family=2 entries=1 op=nft_register_chain pid=3826 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:25:21.996000 audit[3826]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffcb1020560 a2=0 a3=7ffcb102054c items=0 ppid=3724 pid=3826 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 
01:25:21.996000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 14 01:25:22.004000 audit[3828]: NETFILTER_CFG table=filter:61 family=2 entries=1 op=nft_register_rule pid=3828 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:25:22.004000 audit[3828]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7fffa40c3cf0 a2=0 a3=7fffa40c3cdc items=0 ppid=3724 pid=3828 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:25:22.004000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Jan 14 01:25:22.010000 audit[3831]: NETFILTER_CFG table=filter:62 family=2 entries=1 op=nft_register_rule pid=3831 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:25:22.010000 audit[3831]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7fff123e2ff0 a2=0 a3=7fff123e2fdc items=0 ppid=3724 pid=3831 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:25:22.010000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Jan 14 01:25:22.011000 audit[3832]: NETFILTER_CFG table=filter:63 family=2 entries=1 op=nft_register_chain pid=3832 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:25:22.011000 audit[3832]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffeea592a60 a2=0 a3=7ffeea592a4c items=0 ppid=3724 pid=3832 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:25:22.011000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 14 01:25:22.015000 audit[3834]: NETFILTER_CFG table=filter:64 family=2 entries=1 op=nft_register_rule pid=3834 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:25:22.015000 audit[3834]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffc5a5ef550 a2=0 a3=7ffc5a5ef53c items=0 ppid=3724 pid=3834 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:25:22.015000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 14 01:25:22.016000 audit[3835]: NETFILTER_CFG table=filter:65 family=2 entries=1 op=nft_register_chain pid=3835 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:25:22.016000 audit[3835]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 
a0=3 a1=7ffd2ef67740 a2=0 a3=7ffd2ef6772c items=0 ppid=3724 pid=3835 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:25:22.016000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jan 14 01:25:22.020000 audit[3837]: NETFILTER_CFG table=filter:66 family=2 entries=1 op=nft_register_rule pid=3837 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:25:22.020000 audit[3837]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffd8d014d10 a2=0 a3=7ffd8d014cfc items=0 ppid=3724 pid=3837 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:25:22.020000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jan 14 01:25:22.030524 kernel: kauditd_printk_skb: 110 callbacks suppressed Jan 14 01:25:22.030706 kernel: audit: type=1325 audit(1768353922.025:493): table=filter:67 family=2 entries=1 op=nft_register_rule pid=3840 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:25:22.025000 audit[3840]: NETFILTER_CFG table=filter:67 family=2 entries=1 op=nft_register_rule pid=3840 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:25:22.025000 audit[3840]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7fff1c9b3170 a2=0 a3=7fff1c9b315c items=0 ppid=3724 pid=3840 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:25:22.037583 kernel: audit: type=1300 audit(1768353922.025:493): arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7fff1c9b3170 a2=0 a3=7fff1c9b315c items=0 ppid=3724 pid=3840 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:25:22.025000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Jan 14 01:25:22.047308 kernel: audit: type=1327 audit(1768353922.025:493): proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Jan 14 01:25:22.047429 kernel: audit: type=1325 audit(1768353922.030:494): table=filter:68 family=2 entries=1 op=nft_register_chain pid=3841 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:25:22.030000 audit[3841]: NETFILTER_CFG table=filter:68 family=2 entries=1 op=nft_register_chain pid=3841 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:25:22.030000 audit[3841]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff23230790 a2=0 a3=7fff2323077c 
items=0 ppid=3724 pid=3841 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:25:22.057911 kernel: audit: type=1300 audit(1768353922.030:494): arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff23230790 a2=0 a3=7fff2323077c items=0 ppid=3724 pid=3841 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:25:22.058012 kernel: audit: type=1327 audit(1768353922.030:494): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jan 14 01:25:22.030000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jan 14 01:25:22.044000 audit[3843]: NETFILTER_CFG table=filter:69 family=2 entries=1 op=nft_register_rule pid=3843 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:25:22.067941 kernel: audit: type=1325 audit(1768353922.044:495): table=filter:69 family=2 entries=1 op=nft_register_rule pid=3843 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:25:22.068058 kernel: audit: type=1300 audit(1768353922.044:495): arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fff83f00020 a2=0 a3=7fff83f0000c items=0 ppid=3724 pid=3843 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:25:22.044000 audit[3843]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fff83f00020 a2=0 a3=7fff83f0000c items=0 ppid=3724 pid=3843 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:25:22.073368 kernel: audit: type=1327 audit(1768353922.044:495): proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 14 01:25:22.044000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 14 01:25:22.046000 audit[3844]: NETFILTER_CFG table=filter:70 family=2 entries=1 op=nft_register_chain pid=3844 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:25:22.046000 audit[3844]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd0f370fe0 a2=0 a3=7ffd0f370fcc items=0 ppid=3724 pid=3844 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:25:22.046000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 14 01:25:22.049000 audit[3846]: NETFILTER_CFG table=filter:71 family=2 entries=1 op=nft_register_rule pid=3846 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:25:22.049000 audit[3846]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 
a1=7ffeaac453b0 a2=0 a3=7ffeaac4539c items=0 ppid=3724 pid=3846 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:25:22.049000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 14 01:25:22.060000 audit[3849]: NETFILTER_CFG table=filter:72 family=2 entries=1 op=nft_register_rule pid=3849 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:25:22.060000 audit[3849]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffc850bd470 a2=0 a3=7ffc850bd45c items=0 ppid=3724 pid=3849 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:25:22.077632 kernel: audit: type=1325 audit(1768353922.046:496): table=filter:70 family=2 entries=1 op=nft_register_chain pid=3844 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:25:22.060000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 14 01:25:22.067000 audit[3852]: NETFILTER_CFG table=filter:73 family=2 entries=1 op=nft_register_rule pid=3852 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:25:22.067000 audit[3852]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe650e2090 a2=0 a3=7ffe650e207c items=0 ppid=3724 pid=3852 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:25:22.067000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jan 14 01:25:22.069000 audit[3853]: NETFILTER_CFG table=nat:74 family=2 entries=1 op=nft_register_chain pid=3853 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:25:22.069000 audit[3853]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fffeb2a67d0 a2=0 a3=7fffeb2a67bc items=0 ppid=3724 pid=3853 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:25:22.069000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jan 14 01:25:22.074000 audit[3855]: NETFILTER_CFG table=nat:75 family=2 entries=1 op=nft_register_rule pid=3855 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:25:22.074000 audit[3855]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7fffaa30e160 a2=0 a3=7fffaa30e14c items=0 ppid=3724 pid=3855 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:25:22.074000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 14 01:25:22.079000 audit[3858]: NETFILTER_CFG table=nat:76 family=2 entries=1 op=nft_register_rule pid=3858 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:25:22.079000 audit[3858]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffeba8ef6d0 a2=0 a3=7ffeba8ef6bc items=0 ppid=3724 pid=3858 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:25:22.079000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 14 01:25:22.083000 audit[3859]: NETFILTER_CFG table=nat:77 family=2 entries=1 op=nft_register_chain pid=3859 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:25:22.083000 audit[3859]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe7c7fc720 a2=0 a3=7ffe7c7fc70c items=0 ppid=3724 pid=3859 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:25:22.083000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 14 01:25:22.088000 audit[3861]: NETFILTER_CFG table=nat:78 family=2 entries=1 op=nft_register_rule pid=3861 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:25:22.088000 audit[3861]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7ffc2b1e7a40 a2=0 a3=7ffc2b1e7a2c items=0 ppid=3724 pid=3861 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:25:22.088000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 14 01:25:22.116000 audit[3867]: NETFILTER_CFG table=filter:79 family=2 entries=8 op=nft_register_rule pid=3867 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:25:22.116000 audit[3867]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffecc6a1030 a2=0 a3=7ffecc6a101c items=0 ppid=3724 pid=3867 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:25:22.116000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:25:22.124000 audit[3867]: NETFILTER_CFG table=nat:80 family=2 entries=14 op=nft_register_chain pid=3867 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:25:22.124000 audit[3867]: SYSCALL arch=c000003e syscall=46 success=yes exit=5508 a0=3 a1=7ffecc6a1030 a2=0 a3=7ffecc6a101c 
items=0 ppid=3724 pid=3867 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:25:22.124000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:25:22.125000 audit[3872]: NETFILTER_CFG table=filter:81 family=10 entries=1 op=nft_register_chain pid=3872 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:25:22.125000 audit[3872]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffcf8b80e00 a2=0 a3=7ffcf8b80dec items=0 ppid=3724 pid=3872 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:25:22.125000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 14 01:25:22.129000 audit[3874]: NETFILTER_CFG table=filter:82 family=10 entries=2 op=nft_register_chain pid=3874 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:25:22.129000 audit[3874]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffd68996730 a2=0 a3=7ffd6899671c items=0 ppid=3724 pid=3874 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:25:22.129000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Jan 14 01:25:22.135000 audit[3877]: NETFILTER_CFG table=filter:83 family=10 entries=1 op=nft_register_rule pid=3877 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:25:22.135000 audit[3877]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7fff1fafac50 a2=0 a3=7fff1fafac3c items=0 ppid=3724 pid=3877 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:25:22.135000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Jan 14 01:25:22.136000 audit[3878]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=3878 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:25:22.136000 audit[3878]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc72e6c8d0 a2=0 a3=7ffc72e6c8bc items=0 ppid=3724 pid=3878 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:25:22.136000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 14 01:25:22.139000 audit[3880]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=3880 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:25:22.139000 audit[3880]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffd04b66000 a2=0 a3=7ffd04b65fec items=0 ppid=3724 pid=3880 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:25:22.139000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 14 01:25:22.141000 audit[3881]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_chain pid=3881 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:25:22.141000 audit[3881]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcb6ddea00 a2=0 a3=7ffcb6dde9ec items=0 ppid=3724 pid=3881 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:25:22.141000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jan 14 01:25:22.144000 audit[3883]: NETFILTER_CFG table=filter:87 family=10 entries=1 op=nft_register_rule pid=3883 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:25:22.144000 audit[3883]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffc3dd8bd40 a2=0 a3=7ffc3dd8bd2c items=0 ppid=3724 pid=3883 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:25:22.144000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Jan 14 01:25:22.150000 audit[3886]: NETFILTER_CFG table=filter:88 family=10 entries=2 op=nft_register_chain pid=3886 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:25:22.150000 audit[3886]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7ffd8a7015d0 a2=0 a3=7ffd8a7015bc items=0 ppid=3724 pid=3886 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:25:22.150000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jan 14 01:25:22.151000 audit[3887]: NETFILTER_CFG table=filter:89 family=10 entries=1 op=nft_register_chain pid=3887 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:25:22.151000 audit[3887]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcbf35cfe0 a2=0 a3=7ffcbf35cfcc items=0 ppid=3724 pid=3887 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) 
Jan 14 01:25:22.151000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jan 14 01:25:22.154000 audit[3889]: NETFILTER_CFG table=filter:90 family=10 entries=1 op=nft_register_rule pid=3889 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:25:22.154000 audit[3889]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffdd35679d0 a2=0 a3=7ffdd35679bc items=0 ppid=3724 pid=3889 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:25:22.154000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 14 01:25:22.155000 audit[3890]: NETFILTER_CFG table=filter:91 family=10 entries=1 op=nft_register_chain pid=3890 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:25:22.155000 audit[3890]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffead68a1a0 a2=0 a3=7ffead68a18c items=0 ppid=3724 pid=3890 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:25:22.155000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 14 01:25:22.158000 audit[3892]: NETFILTER_CFG table=filter:92 family=10 entries=1 op=nft_register_rule pid=3892 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:25:22.158000 audit[3892]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffc8fef2e10 a2=0 a3=7ffc8fef2dfc items=0 ppid=3724 pid=3892 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:25:22.158000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 14 01:25:22.164000 audit[3895]: NETFILTER_CFG table=filter:93 family=10 entries=1 op=nft_register_rule pid=3895 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:25:22.164000 audit[3895]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe760d89d0 a2=0 a3=7ffe760d89bc items=0 ppid=3724 pid=3895 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:25:22.164000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jan 14 01:25:22.169000 audit[3898]: NETFILTER_CFG table=filter:94 family=10 entries=1 op=nft_register_rule pid=3898 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:25:22.169000 audit[3898]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 
a1=7ffedae4aa60 a2=0 a3=7ffedae4aa4c items=0 ppid=3724 pid=3898 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:25:22.169000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Jan 14 01:25:22.170000 audit[3899]: NETFILTER_CFG table=nat:95 family=10 entries=1 op=nft_register_chain pid=3899 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:25:22.170000 audit[3899]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffda0314770 a2=0 a3=7ffda031475c items=0 ppid=3724 pid=3899 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:25:22.170000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jan 14 01:25:22.173000 audit[3901]: NETFILTER_CFG table=nat:96 family=10 entries=1 op=nft_register_rule pid=3901 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:25:22.173000 audit[3901]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffc611e27f0 a2=0 a3=7ffc611e27dc items=0 ppid=3724 pid=3901 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:25:22.173000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 14 01:25:22.180000 audit[3904]: NETFILTER_CFG table=nat:97 family=10 entries=1 op=nft_register_rule pid=3904 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:25:22.180000 audit[3904]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffe7fbab1e0 a2=0 a3=7ffe7fbab1cc items=0 ppid=3724 pid=3904 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:25:22.180000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 14 01:25:22.182000 audit[3905]: NETFILTER_CFG table=nat:98 family=10 entries=1 op=nft_register_chain pid=3905 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:25:22.182000 audit[3905]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe17fdc100 a2=0 a3=7ffe17fdc0ec items=0 ppid=3724 pid=3905 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:25:22.182000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 14 01:25:22.185000 audit[3907]: NETFILTER_CFG table=nat:99 family=10 
entries=2 op=nft_register_chain pid=3907 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:25:22.185000 audit[3907]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7fff31f7c460 a2=0 a3=7fff31f7c44c items=0 ppid=3724 pid=3907 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:25:22.185000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 14 01:25:22.186000 audit[3908]: NETFILTER_CFG table=filter:100 family=10 entries=1 op=nft_register_chain pid=3908 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:25:22.186000 audit[3908]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff5bfc6230 a2=0 a3=7fff5bfc621c items=0 ppid=3724 pid=3908 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:25:22.186000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 14 01:25:22.189000 audit[3910]: NETFILTER_CFG table=filter:101 family=10 entries=1 op=nft_register_rule pid=3910 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:25:22.189000 audit[3910]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffca3554440 a2=0 a3=7ffca355442c items=0 ppid=3724 pid=3910 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:25:22.189000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 14 01:25:22.194000 audit[3913]: NETFILTER_CFG table=filter:102 family=10 entries=1 op=nft_register_rule pid=3913 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:25:22.194000 audit[3913]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffe12328a50 a2=0 a3=7ffe12328a3c items=0 ppid=3724 pid=3913 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:25:22.194000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 14 01:25:22.198000 audit[3915]: NETFILTER_CFG table=filter:103 family=10 entries=3 op=nft_register_rule pid=3915 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 14 01:25:22.198000 audit[3915]: SYSCALL arch=c000003e syscall=46 success=yes exit=2088 a0=3 a1=7ffca3f60bd0 a2=0 a3=7ffca3f60bbc items=0 ppid=3724 pid=3915 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:25:22.198000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:25:22.199000 audit[3915]: NETFILTER_CFG table=nat:104 
family=10 entries=7 op=nft_register_chain pid=3915 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 14 01:25:22.199000 audit[3915]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7ffca3f60bd0 a2=0 a3=7ffca3f60bbc items=0 ppid=3724 pid=3915 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:25:22.199000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:25:56.048762 kernel: kauditd_printk_skb: 104 callbacks suppressed Jan 14 01:25:56.048954 kernel: audit: type=1325 audit(1768353956.042:531): table=filter:105 family=2 entries=15 op=nft_register_rule pid=3952 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:25:56.042000 audit[3952]: NETFILTER_CFG table=filter:105 family=2 entries=15 op=nft_register_rule pid=3952 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:25:56.042000 audit[3952]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffe29b95c60 a2=0 a3=7ffe29b95c4c items=0 ppid=3724 pid=3952 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:25:56.069698 kernel: audit: type=1300 audit(1768353956.042:531): arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffe29b95c60 a2=0 a3=7ffe29b95c4c items=0 ppid=3724 pid=3952 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:25:56.042000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:25:56.075654 kernel: audit: type=1327 audit(1768353956.042:531): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:25:56.061000 audit[3952]: NETFILTER_CFG table=nat:106 family=2 entries=12 op=nft_register_rule pid=3952 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:25:56.081591 kernel: audit: type=1325 audit(1768353956.061:532): table=nat:106 family=2 entries=12 op=nft_register_rule pid=3952 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:25:56.061000 audit[3952]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe29b95c60 a2=0 a3=0 items=0 ppid=3724 pid=3952 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:25:56.097246 kernel: audit: type=1300 audit(1768353956.061:532): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe29b95c60 a2=0 a3=0 items=0 ppid=3724 pid=3952 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:25:56.097362 kernel: audit: type=1327 audit(1768353956.061:532): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:25:56.061000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:25:56.117000 audit[3954]: NETFILTER_CFG table=filter:107 family=2 entries=16 op=nft_register_rule pid=3954 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:25:56.129753 kernel: audit: type=1325 audit(1768353956.117:533): table=filter:107 family=2 entries=16 op=nft_register_rule pid=3954 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:25:56.129909 kernel: audit: type=1300 audit(1768353956.117:533): arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffc734ea6f0 a2=0 a3=7ffc734ea6dc items=0 ppid=3724 pid=3954 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:25:56.117000 audit[3954]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffc734ea6f0 a2=0 a3=7ffc734ea6dc items=0 ppid=3724 pid=3954 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:25:56.117000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:25:56.142600 kernel: audit: type=1327 audit(1768353956.117:533): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:25:56.130000 audit[3954]: NETFILTER_CFG table=nat:108 family=2 entries=12 op=nft_register_rule pid=3954 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:25:56.147649 kernel: audit: type=1325 audit(1768353956.130:534): table=nat:108 family=2 entries=12 op=nft_register_rule pid=3954 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:25:56.130000 audit[3954]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc734ea6f0 a2=0 a3=0 items=0 ppid=3724 pid=3954 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:25:56.130000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:25:56.953000 audit[2327]: USER_END pid=2327 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 01:25:56.953000 audit[2327]: CRED_DISP pid=2327 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jan 14 01:25:56.954759 sudo[2327]: pam_unix(sudo:session): session closed for user root Jan 14 01:25:57.032944 sshd[2326]: Connection closed by 4.153.228.146 port 48940 Jan 14 01:25:57.033386 sshd-session[2322]: pam_unix(sshd:session): session closed for user core Jan 14 01:25:57.035000 audit[2322]: USER_END pid=2322 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:25:57.035000 audit[2322]: CRED_DISP pid=2322 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:25:57.042490 systemd-logind[1928]: Session 8 logged out. Waiting for processes to exit. Jan 14 01:25:57.045570 systemd[1]: sshd@6-172.31.18.46:22-4.153.228.146:48940.service: Deactivated successfully. Jan 14 01:25:57.046000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-172.31.18.46:22-4.153.228.146:48940 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:25:57.051523 systemd[1]: session-8.scope: Deactivated successfully. Jan 14 01:25:57.052113 systemd[1]: session-8.scope: Consumed 6.118s CPU time, 155.5M memory peak. Jan 14 01:25:57.057442 systemd-logind[1928]: Removed session 8. Jan 14 01:26:02.372161 kernel: kauditd_printk_skb: 7 callbacks suppressed Jan 14 01:26:02.372332 kernel: audit: type=1325 audit(1768353962.346:540): table=filter:109 family=2 entries=17 op=nft_register_rule pid=3973 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:26:02.346000 audit[3973]: NETFILTER_CFG table=filter:109 family=2 entries=17 op=nft_register_rule pid=3973 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:26:02.346000 audit[3973]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffdd9935580 a2=0 a3=7ffdd993556c items=0 ppid=3724 pid=3973 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:02.393885 kernel: audit: type=1300 audit(1768353962.346:540): arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffdd9935580 a2=0 a3=7ffdd993556c items=0 ppid=3724 pid=3973 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:02.346000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:26:02.416614 kernel: audit: type=1327 audit(1768353962.346:540): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:26:02.415000 audit[3973]: NETFILTER_CFG table=nat:110 family=2 entries=12 op=nft_register_rule pid=3973 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:26:02.432601 kernel: audit: type=1325 audit(1768353962.415:541): table=nat:110 family=2 entries=12 op=nft_register_rule pid=3973 subj=system_u:system_r:kernel_t:s0 
comm="iptables-restor" Jan 14 01:26:02.415000 audit[3973]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffdd9935580 a2=0 a3=0 items=0 ppid=3724 pid=3973 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:02.460584 kernel: audit: type=1300 audit(1768353962.415:541): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffdd9935580 a2=0 a3=0 items=0 ppid=3724 pid=3973 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:02.415000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:26:02.477581 kernel: audit: type=1327 audit(1768353962.415:541): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:26:02.721000 audit[3975]: NETFILTER_CFG table=filter:111 family=2 entries=19 op=nft_register_rule pid=3975 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:26:02.737587 kernel: audit: type=1325 audit(1768353962.721:542): table=filter:111 family=2 entries=19 op=nft_register_rule pid=3975 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:26:02.721000 audit[3975]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffd69b432e0 a2=0 a3=7ffd69b432cc items=0 ppid=3724 pid=3975 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:02.816612 kernel: audit: type=1300 audit(1768353962.721:542): arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffd69b432e0 a2=0 a3=7ffd69b432cc items=0 ppid=3724 pid=3975 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:02.721000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:26:02.831647 kernel: audit: type=1327 audit(1768353962.721:542): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:26:02.816000 audit[3975]: NETFILTER_CFG table=nat:112 family=2 entries=12 op=nft_register_rule pid=3975 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:26:02.846593 kernel: audit: type=1325 audit(1768353962.816:543): table=nat:112 family=2 entries=12 op=nft_register_rule pid=3975 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:26:02.816000 audit[3975]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd69b432e0 a2=0 a3=0 items=0 ppid=3724 pid=3975 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:02.816000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:26:05.281000 audit[3977]: NETFILTER_CFG table=filter:113 family=2 entries=21 op=nft_register_rule 
pid=3977 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:26:05.281000 audit[3977]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffc6f1c3590 a2=0 a3=7ffc6f1c357c items=0 ppid=3724 pid=3977 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:05.281000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:26:05.293000 audit[3977]: NETFILTER_CFG table=nat:114 family=2 entries=12 op=nft_register_rule pid=3977 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:26:05.293000 audit[3977]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc6f1c3590 a2=0 a3=0 items=0 ppid=3724 pid=3977 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:05.293000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:26:05.318000 audit[3979]: NETFILTER_CFG table=filter:115 family=2 entries=22 op=nft_register_rule pid=3979 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:26:05.318000 audit[3979]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffdb735f480 a2=0 a3=7ffdb735f46c items=0 ppid=3724 pid=3979 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:05.318000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:26:05.324000 audit[3979]: NETFILTER_CFG table=nat:116 family=2 entries=12 op=nft_register_rule pid=3979 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:26:05.324000 audit[3979]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffdb735f480 a2=0 a3=0 items=0 ppid=3724 pid=3979 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:05.324000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:26:05.340660 kubelet[3571]: I0114 01:26:05.340535 3571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-27qbd" podStartSLOduration=50.825096872 podStartE2EDuration="53.34049973s" podCreationTimestamp="2026-01-14 01:25:12 +0000 UTC" firstStartedPulling="2026-01-14 01:25:17.120737221 +0000 UTC m=+9.209025355" lastFinishedPulling="2026-01-14 01:25:19.636140079 +0000 UTC m=+11.724428213" observedRunningTime="2026-01-14 01:25:20.27440741 +0000 UTC m=+12.362695565" watchObservedRunningTime="2026-01-14 01:26:05.34049973 +0000 UTC m=+57.428787886" Jan 14 01:26:05.357681 systemd[1]: Created slice kubepods-besteffort-pod4d7cefb9_236e_41c3_adec_e26b34c5ccf7.slice - libcontainer container kubepods-besteffort-pod4d7cefb9_236e_41c3_adec_e26b34c5ccf7.slice. 
Jan 14 01:26:05.464843 kubelet[3571]: I0114 01:26:05.464781 3571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/4d7cefb9-236e-41c3-adec-e26b34c5ccf7-typha-certs\") pod \"calico-typha-6c8df8b558-smwgv\" (UID: \"4d7cefb9-236e-41c3-adec-e26b34c5ccf7\") " pod="calico-system/calico-typha-6c8df8b558-smwgv" Jan 14 01:26:05.465195 kubelet[3571]: I0114 01:26:05.464857 3571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-797wk\" (UniqueName: \"kubernetes.io/projected/4d7cefb9-236e-41c3-adec-e26b34c5ccf7-kube-api-access-797wk\") pod \"calico-typha-6c8df8b558-smwgv\" (UID: \"4d7cefb9-236e-41c3-adec-e26b34c5ccf7\") " pod="calico-system/calico-typha-6c8df8b558-smwgv" Jan 14 01:26:05.465195 kubelet[3571]: I0114 01:26:05.464941 3571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4d7cefb9-236e-41c3-adec-e26b34c5ccf7-tigera-ca-bundle\") pod \"calico-typha-6c8df8b558-smwgv\" (UID: \"4d7cefb9-236e-41c3-adec-e26b34c5ccf7\") " pod="calico-system/calico-typha-6c8df8b558-smwgv" Jan 14 01:26:05.531811 systemd[1]: Created slice kubepods-besteffort-poda303613c_a175_4eb1_8138_5b4f62695ac3.slice - libcontainer container kubepods-besteffort-poda303613c_a175_4eb1_8138_5b4f62695ac3.slice. Jan 14 01:26:05.567064 kubelet[3571]: I0114 01:26:05.566702 3571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/a303613c-a175-4eb1-8138-5b4f62695ac3-cni-log-dir\") pod \"calico-node-6rztb\" (UID: \"a303613c-a175-4eb1-8138-5b4f62695ac3\") " pod="calico-system/calico-node-6rztb" Jan 14 01:26:05.567064 kubelet[3571]: I0114 01:26:05.566775 3571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/a303613c-a175-4eb1-8138-5b4f62695ac3-flexvol-driver-host\") pod \"calico-node-6rztb\" (UID: \"a303613c-a175-4eb1-8138-5b4f62695ac3\") " pod="calico-system/calico-node-6rztb" Jan 14 01:26:05.567064 kubelet[3571]: I0114 01:26:05.566804 3571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a303613c-a175-4eb1-8138-5b4f62695ac3-tigera-ca-bundle\") pod \"calico-node-6rztb\" (UID: \"a303613c-a175-4eb1-8138-5b4f62695ac3\") " pod="calico-system/calico-node-6rztb" Jan 14 01:26:05.567064 kubelet[3571]: I0114 01:26:05.566828 3571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/a303613c-a175-4eb1-8138-5b4f62695ac3-var-lib-calico\") pod \"calico-node-6rztb\" (UID: \"a303613c-a175-4eb1-8138-5b4f62695ac3\") " pod="calico-system/calico-node-6rztb" Jan 14 01:26:05.567064 kubelet[3571]: I0114 01:26:05.566850 3571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/a303613c-a175-4eb1-8138-5b4f62695ac3-cni-net-dir\") pod \"calico-node-6rztb\" (UID: \"a303613c-a175-4eb1-8138-5b4f62695ac3\") " pod="calico-system/calico-node-6rztb" Jan 14 01:26:05.569475 kubelet[3571]: I0114 01:26:05.566871 3571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a303613c-a175-4eb1-8138-5b4f62695ac3-xtables-lock\") pod \"calico-node-6rztb\" (UID: \"a303613c-a175-4eb1-8138-5b4f62695ac3\") " pod="calico-system/calico-node-6rztb" Jan 14 01:26:05.569475 kubelet[3571]: I0114 01:26:05.566894 3571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/a303613c-a175-4eb1-8138-5b4f62695ac3-cni-bin-dir\") pod \"calico-node-6rztb\" (UID: \"a303613c-a175-4eb1-8138-5b4f62695ac3\") " pod="calico-system/calico-node-6rztb" Jan 14 01:26:05.569475 kubelet[3571]: I0114 01:26:05.566933 3571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a303613c-a175-4eb1-8138-5b4f62695ac3-lib-modules\") pod \"calico-node-6rztb\" (UID: \"a303613c-a175-4eb1-8138-5b4f62695ac3\") " pod="calico-system/calico-node-6rztb" Jan 14 01:26:05.569475 kubelet[3571]: I0114 01:26:05.566958 3571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/a303613c-a175-4eb1-8138-5b4f62695ac3-var-run-calico\") pod \"calico-node-6rztb\" (UID: \"a303613c-a175-4eb1-8138-5b4f62695ac3\") " pod="calico-system/calico-node-6rztb" Jan 14 01:26:05.569475 kubelet[3571]: I0114 01:26:05.566984 3571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxlg8\" (UniqueName: \"kubernetes.io/projected/a303613c-a175-4eb1-8138-5b4f62695ac3-kube-api-access-gxlg8\") pod \"calico-node-6rztb\" (UID: \"a303613c-a175-4eb1-8138-5b4f62695ac3\") " pod="calico-system/calico-node-6rztb" Jan 14 01:26:05.569710 kubelet[3571]: I0114 01:26:05.567014 3571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/a303613c-a175-4eb1-8138-5b4f62695ac3-node-certs\") pod \"calico-node-6rztb\" (UID: \"a303613c-a175-4eb1-8138-5b4f62695ac3\") " pod="calico-system/calico-node-6rztb" Jan 14 01:26:05.569710 kubelet[3571]: I0114 01:26:05.567052 3571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/a303613c-a175-4eb1-8138-5b4f62695ac3-policysync\") pod \"calico-node-6rztb\" (UID: \"a303613c-a175-4eb1-8138-5b4f62695ac3\") " pod="calico-system/calico-node-6rztb" Jan 14 01:26:05.664423 containerd[1962]: time="2026-01-14T01:26:05.664376262Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6c8df8b558-smwgv,Uid:4d7cefb9-236e-41c3-adec-e26b34c5ccf7,Namespace:calico-system,Attempt:0,}" Jan 14 01:26:05.687762 kubelet[3571]: E0114 01:26:05.687723 3571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:26:05.687969 kubelet[3571]: W0114 01:26:05.687913 3571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:26:05.687969 kubelet[3571]: E0114 01:26:05.687949 3571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:26:05.688489 kubelet[3571]: E0114 01:26:05.688468 3571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:26:05.688679 kubelet[3571]: W0114 01:26:05.688602 3571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:26:05.688679 kubelet[3571]: E0114 01:26:05.688621 3571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:26:05.708462 kubelet[3571]: E0114 01:26:05.708363 3571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:26:05.708462 kubelet[3571]: W0114 01:26:05.708388 3571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:26:05.708462 kubelet[3571]: E0114 01:26:05.708411 3571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:26:05.732980 containerd[1962]: time="2026-01-14T01:26:05.732814307Z" level=info msg="connecting to shim c212f758116c3c2a37bbb6f82e1ee23dbcddc2747fcdc0b0b82d6ef1cf57d22f" address="unix:///run/containerd/s/4037d2c0af8ec45a32de94a1044ee1c1136d4165e6c9b81e1353b1a08a539403" namespace=k8s.io protocol=ttrpc version=3 Jan 14 01:26:05.793411 systemd[1]: Started cri-containerd-c212f758116c3c2a37bbb6f82e1ee23dbcddc2747fcdc0b0b82d6ef1cf57d22f.scope - libcontainer container c212f758116c3c2a37bbb6f82e1ee23dbcddc2747fcdc0b0b82d6ef1cf57d22f. 
Jan 14 01:26:05.842674 kubelet[3571]: E0114 01:26:05.842272 3571 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zrh2z" podUID="74b84cdc-323d-4b42-b95a-ceec7dfaa40f" Jan 14 01:26:05.852582 containerd[1962]: time="2026-01-14T01:26:05.852497181Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-6rztb,Uid:a303613c-a175-4eb1-8138-5b4f62695ac3,Namespace:calico-system,Attempt:0,}" Jan 14 01:26:05.892000 audit: BPF prog-id=158 op=LOAD Jan 14 01:26:05.894000 audit: BPF prog-id=159 op=LOAD Jan 14 01:26:05.894000 audit[4009]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=3997 pid=4009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:05.894000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6332313266373538313136633363326133376262623666383265316565 Jan 14 01:26:05.894000 audit: BPF prog-id=159 op=UNLOAD Jan 14 01:26:05.894000 audit[4009]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3997 pid=4009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:05.894000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6332313266373538313136633363326133376262623666383265316565 Jan 14 01:26:05.894000 audit: BPF prog-id=160 op=LOAD Jan 14 01:26:05.894000 audit[4009]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3997 pid=4009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:05.894000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6332313266373538313136633363326133376262623666383265316565 Jan 14 01:26:05.895000 audit: BPF prog-id=161 op=LOAD Jan 14 01:26:05.895000 audit[4009]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3997 pid=4009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:05.895000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6332313266373538313136633363326133376262623666383265316565 Jan 14 01:26:05.895000 audit: BPF prog-id=161 op=UNLOAD Jan 14 01:26:05.895000 audit[4009]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 
a3=0 items=0 ppid=3997 pid=4009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:05.895000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6332313266373538313136633363326133376262623666383265316565 Jan 14 01:26:05.895000 audit: BPF prog-id=160 op=UNLOAD Jan 14 01:26:05.895000 audit[4009]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3997 pid=4009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:05.895000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6332313266373538313136633363326133376262623666383265316565 Jan 14 01:26:05.895000 audit: BPF prog-id=162 op=LOAD Jan 14 01:26:05.895000 audit[4009]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3997 pid=4009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:05.895000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6332313266373538313136633363326133376262623666383265316565 Jan 14 01:26:05.929931 kubelet[3571]: E0114 01:26:05.929452 3571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:26:05.929931 kubelet[3571]: W0114 01:26:05.929589 3571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:26:05.929931 kubelet[3571]: E0114 01:26:05.929619 3571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:26:05.930797 kubelet[3571]: E0114 01:26:05.930080 3571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:26:05.930797 kubelet[3571]: W0114 01:26:05.930093 3571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:26:05.930797 kubelet[3571]: E0114 01:26:05.930109 3571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:26:05.930797 kubelet[3571]: E0114 01:26:05.930332 3571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:26:05.930797 kubelet[3571]: W0114 01:26:05.930343 3571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:26:05.930797 kubelet[3571]: E0114 01:26:05.930356 3571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:26:05.930797 kubelet[3571]: E0114 01:26:05.930620 3571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:26:05.930797 kubelet[3571]: W0114 01:26:05.930630 3571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:26:05.930797 kubelet[3571]: E0114 01:26:05.930642 3571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:26:05.931185 kubelet[3571]: E0114 01:26:05.930841 3571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:26:05.931185 kubelet[3571]: W0114 01:26:05.930850 3571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:26:05.931185 kubelet[3571]: E0114 01:26:05.930862 3571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:26:05.931185 kubelet[3571]: E0114 01:26:05.931047 3571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:26:05.931185 kubelet[3571]: W0114 01:26:05.931055 3571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:26:05.931185 kubelet[3571]: E0114 01:26:05.931066 3571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:26:05.931514 kubelet[3571]: E0114 01:26:05.931262 3571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:26:05.931514 kubelet[3571]: W0114 01:26:05.931273 3571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:26:05.931514 kubelet[3571]: E0114 01:26:05.931283 3571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:26:05.931514 kubelet[3571]: E0114 01:26:05.931474 3571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:26:05.931514 kubelet[3571]: W0114 01:26:05.931483 3571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:26:05.931514 kubelet[3571]: E0114 01:26:05.931494 3571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:26:05.935398 kubelet[3571]: E0114 01:26:05.931714 3571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:26:05.935398 kubelet[3571]: W0114 01:26:05.931724 3571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:26:05.935398 kubelet[3571]: E0114 01:26:05.931735 3571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:26:05.935398 kubelet[3571]: E0114 01:26:05.931914 3571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:26:05.935398 kubelet[3571]: W0114 01:26:05.931924 3571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:26:05.935398 kubelet[3571]: E0114 01:26:05.931936 3571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:26:05.935398 kubelet[3571]: E0114 01:26:05.932137 3571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:26:05.935398 kubelet[3571]: W0114 01:26:05.932146 3571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:26:05.935398 kubelet[3571]: E0114 01:26:05.932159 3571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:26:05.935398 kubelet[3571]: E0114 01:26:05.932350 3571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:26:05.936066 kubelet[3571]: W0114 01:26:05.932361 3571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:26:05.936066 kubelet[3571]: E0114 01:26:05.932373 3571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:26:05.936066 kubelet[3571]: E0114 01:26:05.932589 3571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:26:05.936066 kubelet[3571]: W0114 01:26:05.932598 3571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:26:05.936066 kubelet[3571]: E0114 01:26:05.932610 3571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:26:05.936066 kubelet[3571]: E0114 01:26:05.932787 3571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:26:05.936066 kubelet[3571]: W0114 01:26:05.932795 3571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:26:05.936066 kubelet[3571]: E0114 01:26:05.932806 3571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:26:05.936066 kubelet[3571]: E0114 01:26:05.932985 3571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:26:05.936066 kubelet[3571]: W0114 01:26:05.932994 3571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:26:05.936462 kubelet[3571]: E0114 01:26:05.933005 3571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:26:05.936462 kubelet[3571]: E0114 01:26:05.933261 3571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:26:05.936462 kubelet[3571]: W0114 01:26:05.933272 3571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:26:05.936462 kubelet[3571]: E0114 01:26:05.933283 3571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:26:05.936462 kubelet[3571]: E0114 01:26:05.933482 3571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:26:05.936462 kubelet[3571]: W0114 01:26:05.933491 3571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:26:05.936462 kubelet[3571]: E0114 01:26:05.933502 3571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:26:05.936462 kubelet[3571]: E0114 01:26:05.933745 3571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:26:05.936462 kubelet[3571]: W0114 01:26:05.933754 3571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:26:05.936462 kubelet[3571]: E0114 01:26:05.933765 3571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:26:05.937502 kubelet[3571]: E0114 01:26:05.933944 3571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:26:05.937502 kubelet[3571]: W0114 01:26:05.933953 3571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:26:05.937502 kubelet[3571]: E0114 01:26:05.933963 3571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:26:05.937502 kubelet[3571]: E0114 01:26:05.934146 3571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:26:05.937502 kubelet[3571]: W0114 01:26:05.934157 3571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:26:05.937502 kubelet[3571]: E0114 01:26:05.934167 3571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:26:05.971681 kubelet[3571]: E0114 01:26:05.971636 3571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:26:05.971920 kubelet[3571]: W0114 01:26:05.971852 3571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:26:05.971920 kubelet[3571]: E0114 01:26:05.971886 3571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:26:05.972243 kubelet[3571]: I0114 01:26:05.972118 3571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qs6c7\" (UniqueName: \"kubernetes.io/projected/74b84cdc-323d-4b42-b95a-ceec7dfaa40f-kube-api-access-qs6c7\") pod \"csi-node-driver-zrh2z\" (UID: \"74b84cdc-323d-4b42-b95a-ceec7dfaa40f\") " pod="calico-system/csi-node-driver-zrh2z" Jan 14 01:26:05.972653 kubelet[3571]: E0114 01:26:05.972583 3571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:26:05.972653 kubelet[3571]: W0114 01:26:05.972614 3571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:26:05.972653 kubelet[3571]: E0114 01:26:05.972630 3571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:26:05.972953 kubelet[3571]: I0114 01:26:05.972827 3571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/74b84cdc-323d-4b42-b95a-ceec7dfaa40f-socket-dir\") pod \"csi-node-driver-zrh2z\" (UID: \"74b84cdc-323d-4b42-b95a-ceec7dfaa40f\") " pod="calico-system/csi-node-driver-zrh2z" Jan 14 01:26:05.973226 kubelet[3571]: E0114 01:26:05.973210 3571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:26:05.973226 kubelet[3571]: W0114 01:26:05.973242 3571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:26:05.973226 kubelet[3571]: E0114 01:26:05.973256 3571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:26:05.973226 kubelet[3571]: I0114 01:26:05.973278 3571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/74b84cdc-323d-4b42-b95a-ceec7dfaa40f-kubelet-dir\") pod \"csi-node-driver-zrh2z\" (UID: \"74b84cdc-323d-4b42-b95a-ceec7dfaa40f\") " pod="calico-system/csi-node-driver-zrh2z" Jan 14 01:26:05.974887 kubelet[3571]: E0114 01:26:05.974737 3571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:26:05.974887 kubelet[3571]: W0114 01:26:05.974755 3571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:26:05.974887 kubelet[3571]: E0114 01:26:05.974771 3571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:26:05.974887 kubelet[3571]: I0114 01:26:05.974853 3571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/74b84cdc-323d-4b42-b95a-ceec7dfaa40f-registration-dir\") pod \"csi-node-driver-zrh2z\" (UID: \"74b84cdc-323d-4b42-b95a-ceec7dfaa40f\") " pod="calico-system/csi-node-driver-zrh2z" Jan 14 01:26:05.977759 kubelet[3571]: E0114 01:26:05.977734 3571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:26:05.978089 kubelet[3571]: W0114 01:26:05.977891 3571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:26:05.978089 kubelet[3571]: E0114 01:26:05.977923 3571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:26:05.978225 kubelet[3571]: E0114 01:26:05.978212 3571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:26:05.978277 kubelet[3571]: W0114 01:26:05.978252 3571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:26:05.978277 kubelet[3571]: E0114 01:26:05.978271 3571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:26:05.978509 kubelet[3571]: E0114 01:26:05.978489 3571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:26:05.978509 kubelet[3571]: W0114 01:26:05.978505 3571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:26:05.978627 kubelet[3571]: E0114 01:26:05.978520 3571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:26:05.979829 kubelet[3571]: E0114 01:26:05.979808 3571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:26:05.979829 kubelet[3571]: W0114 01:26:05.979823 3571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:26:05.979962 kubelet[3571]: E0114 01:26:05.979840 3571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:26:05.980006 kubelet[3571]: I0114 01:26:05.979970 3571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/74b84cdc-323d-4b42-b95a-ceec7dfaa40f-varrun\") pod \"csi-node-driver-zrh2z\" (UID: \"74b84cdc-323d-4b42-b95a-ceec7dfaa40f\") " pod="calico-system/csi-node-driver-zrh2z" Jan 14 01:26:05.980641 kubelet[3571]: E0114 01:26:05.980622 3571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:26:05.980641 kubelet[3571]: W0114 01:26:05.980637 3571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:26:05.980751 kubelet[3571]: E0114 01:26:05.980652 3571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:26:05.981837 kubelet[3571]: E0114 01:26:05.981817 3571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:26:05.981837 kubelet[3571]: W0114 01:26:05.981833 3571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:26:05.981963 kubelet[3571]: E0114 01:26:05.981848 3571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:26:05.982977 kubelet[3571]: E0114 01:26:05.982955 3571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:26:05.982977 kubelet[3571]: W0114 01:26:05.982975 3571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:26:05.983214 kubelet[3571]: E0114 01:26:05.982990 3571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:26:05.985643 kubelet[3571]: E0114 01:26:05.985595 3571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:26:05.985643 kubelet[3571]: W0114 01:26:05.985617 3571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:26:05.985643 kubelet[3571]: E0114 01:26:05.985641 3571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:26:05.986626 kubelet[3571]: E0114 01:26:05.986604 3571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:26:05.986626 kubelet[3571]: W0114 01:26:05.986626 3571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:26:05.987311 kubelet[3571]: E0114 01:26:05.986641 3571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:26:05.987498 kubelet[3571]: E0114 01:26:05.987479 3571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:26:05.987600 kubelet[3571]: W0114 01:26:05.987498 3571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:26:05.987600 kubelet[3571]: E0114 01:26:05.987513 3571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:26:05.987808 kubelet[3571]: E0114 01:26:05.987753 3571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:26:05.987808 kubelet[3571]: W0114 01:26:05.987763 3571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:26:05.987808 kubelet[3571]: E0114 01:26:05.987776 3571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:26:05.988187 containerd[1962]: time="2026-01-14T01:26:05.988137076Z" level=info msg="connecting to shim 06b03833ff7af60b9c8415347e7379f69e68889c8b9f529059fffcc6b64627c5" address="unix:///run/containerd/s/21f67572a10b776ac80f2d677cbaeceef64d94a6ba9c84d7610fae711862653f" namespace=k8s.io protocol=ttrpc version=3 Jan 14 01:26:06.083325 kubelet[3571]: E0114 01:26:06.082925 3571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:26:06.083325 kubelet[3571]: W0114 01:26:06.082952 3571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:26:06.083325 kubelet[3571]: E0114 01:26:06.082997 3571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:26:06.083685 kubelet[3571]: E0114 01:26:06.083356 3571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:26:06.083685 kubelet[3571]: W0114 01:26:06.083367 3571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:26:06.083685 kubelet[3571]: E0114 01:26:06.083381 3571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:26:06.083846 kubelet[3571]: E0114 01:26:06.083719 3571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:26:06.083846 kubelet[3571]: W0114 01:26:06.083731 3571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:26:06.083846 kubelet[3571]: E0114 01:26:06.083748 3571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:26:06.084755 kubelet[3571]: E0114 01:26:06.084103 3571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:26:06.084755 kubelet[3571]: W0114 01:26:06.084118 3571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:26:06.084755 kubelet[3571]: E0114 01:26:06.084130 3571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:26:06.084755 kubelet[3571]: E0114 01:26:06.084389 3571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:26:06.084755 kubelet[3571]: W0114 01:26:06.084399 3571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:26:06.084755 kubelet[3571]: E0114 01:26:06.084420 3571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:26:06.084755 kubelet[3571]: E0114 01:26:06.084725 3571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:26:06.084755 kubelet[3571]: W0114 01:26:06.084734 3571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:26:06.084755 kubelet[3571]: E0114 01:26:06.084755 3571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:26:06.087527 kubelet[3571]: E0114 01:26:06.085764 3571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:26:06.087527 kubelet[3571]: W0114 01:26:06.085776 3571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:26:06.087527 kubelet[3571]: E0114 01:26:06.085790 3571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:26:06.087527 kubelet[3571]: E0114 01:26:06.086061 3571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:26:06.087527 kubelet[3571]: W0114 01:26:06.086069 3571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:26:06.087527 kubelet[3571]: E0114 01:26:06.086080 3571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:26:06.089024 kubelet[3571]: E0114 01:26:06.087669 3571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:26:06.089024 kubelet[3571]: W0114 01:26:06.087682 3571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:26:06.089024 kubelet[3571]: E0114 01:26:06.087697 3571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:26:06.089024 kubelet[3571]: E0114 01:26:06.087963 3571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:26:06.089024 kubelet[3571]: W0114 01:26:06.087972 3571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:26:06.089024 kubelet[3571]: E0114 01:26:06.087985 3571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:26:06.089843 kubelet[3571]: E0114 01:26:06.089821 3571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:26:06.089843 kubelet[3571]: W0114 01:26:06.089838 3571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:26:06.089843 kubelet[3571]: E0114 01:26:06.089856 3571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:26:06.091105 kubelet[3571]: E0114 01:26:06.091072 3571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:26:06.091105 kubelet[3571]: W0114 01:26:06.091095 3571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:26:06.091286 kubelet[3571]: E0114 01:26:06.091112 3571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:26:06.092979 kubelet[3571]: E0114 01:26:06.092882 3571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:26:06.092979 kubelet[3571]: W0114 01:26:06.092901 3571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:26:06.092979 kubelet[3571]: E0114 01:26:06.092917 3571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:26:06.094867 kubelet[3571]: E0114 01:26:06.094844 3571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:26:06.094867 kubelet[3571]: W0114 01:26:06.094861 3571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:26:06.095039 kubelet[3571]: E0114 01:26:06.094882 3571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:26:06.096924 kubelet[3571]: E0114 01:26:06.096899 3571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:26:06.096924 kubelet[3571]: W0114 01:26:06.096920 3571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:26:06.097067 kubelet[3571]: E0114 01:26:06.096939 3571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:26:06.100693 kubelet[3571]: E0114 01:26:06.100655 3571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:26:06.100693 kubelet[3571]: W0114 01:26:06.100677 3571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:26:06.100693 kubelet[3571]: E0114 01:26:06.100700 3571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:26:06.101015 kubelet[3571]: E0114 01:26:06.100999 3571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:26:06.101015 kubelet[3571]: W0114 01:26:06.101014 3571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:26:06.101266 kubelet[3571]: E0114 01:26:06.101030 3571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:26:06.103852 kubelet[3571]: E0114 01:26:06.103355 3571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:26:06.103852 kubelet[3571]: W0114 01:26:06.103375 3571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:26:06.103852 kubelet[3571]: E0114 01:26:06.103394 3571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:26:06.104391 kubelet[3571]: E0114 01:26:06.104367 3571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:26:06.104391 kubelet[3571]: W0114 01:26:06.104389 3571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:26:06.104599 kubelet[3571]: E0114 01:26:06.104408 3571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:26:06.104939 containerd[1962]: time="2026-01-14T01:26:06.104838761Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6c8df8b558-smwgv,Uid:4d7cefb9-236e-41c3-adec-e26b34c5ccf7,Namespace:calico-system,Attempt:0,} returns sandbox id \"c212f758116c3c2a37bbb6f82e1ee23dbcddc2747fcdc0b0b82d6ef1cf57d22f\"" Jan 14 01:26:06.105817 kubelet[3571]: E0114 01:26:06.105794 3571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:26:06.105817 kubelet[3571]: W0114 01:26:06.105812 3571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:26:06.106091 kubelet[3571]: E0114 01:26:06.105829 3571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:26:06.107718 kubelet[3571]: E0114 01:26:06.107474 3571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:26:06.107718 kubelet[3571]: W0114 01:26:06.107491 3571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:26:06.107718 kubelet[3571]: E0114 01:26:06.107509 3571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:26:06.108830 kubelet[3571]: E0114 01:26:06.108489 3571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:26:06.108830 kubelet[3571]: W0114 01:26:06.108508 3571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:26:06.108830 kubelet[3571]: E0114 01:26:06.108525 3571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:26:06.111243 containerd[1962]: time="2026-01-14T01:26:06.109961535Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Jan 14 01:26:06.111782 kubelet[3571]: E0114 01:26:06.111759 3571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:26:06.111860 kubelet[3571]: W0114 01:26:06.111784 3571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:26:06.111860 kubelet[3571]: E0114 01:26:06.111803 3571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:26:06.112537 kubelet[3571]: E0114 01:26:06.112506 3571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:26:06.112537 kubelet[3571]: W0114 01:26:06.112526 3571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:26:06.112698 kubelet[3571]: E0114 01:26:06.112542 3571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:26:06.113610 kubelet[3571]: E0114 01:26:06.113521 3571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:26:06.113794 kubelet[3571]: W0114 01:26:06.113542 3571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:26:06.113882 kubelet[3571]: E0114 01:26:06.113811 3571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:26:06.123834 systemd[1]: Started cri-containerd-06b03833ff7af60b9c8415347e7379f69e68889c8b9f529059fffcc6b64627c5.scope - libcontainer container 06b03833ff7af60b9c8415347e7379f69e68889c8b9f529059fffcc6b64627c5. Jan 14 01:26:06.149409 kubelet[3571]: E0114 01:26:06.149371 3571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:26:06.149587 kubelet[3571]: W0114 01:26:06.149513 3571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:26:06.149670 kubelet[3571]: E0114 01:26:06.149543 3571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:26:06.159000 audit: BPF prog-id=163 op=LOAD Jan 14 01:26:06.162000 audit: BPF prog-id=164 op=LOAD Jan 14 01:26:06.162000 audit[4095]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=4068 pid=4095 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:06.162000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3036623033383333666637616636306239633834313533343765373337 Jan 14 01:26:06.163000 audit: BPF prog-id=164 op=UNLOAD Jan 14 01:26:06.163000 audit[4095]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4068 pid=4095 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:06.163000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3036623033383333666637616636306239633834313533343765373337 Jan 14 01:26:06.165000 audit: BPF prog-id=165 op=LOAD Jan 14 01:26:06.165000 audit[4095]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=4068 pid=4095 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:06.165000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3036623033383333666637616636306239633834313533343765373337 Jan 14 01:26:06.165000 audit: BPF prog-id=166 op=LOAD Jan 14 01:26:06.165000 audit[4095]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=4068 pid=4095 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:06.165000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3036623033383333666637616636306239633834313533343765373337 Jan 14 01:26:06.166000 audit: BPF prog-id=166 op=UNLOAD Jan 14 01:26:06.166000 audit[4095]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4068 pid=4095 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:06.166000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3036623033383333666637616636306239633834313533343765373337 Jan 14 01:26:06.166000 audit: BPF prog-id=165 op=UNLOAD Jan 14 01:26:06.166000 audit[4095]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4068 pid=4095 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:06.166000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3036623033383333666637616636306239633834313533343765373337 Jan 14 01:26:06.166000 audit: BPF prog-id=167 op=LOAD Jan 14 01:26:06.166000 audit[4095]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=4068 pid=4095 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:06.166000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3036623033383333666637616636306239633834313533343765373337 Jan 14 01:26:06.222948 containerd[1962]: time="2026-01-14T01:26:06.222907308Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-6rztb,Uid:a303613c-a175-4eb1-8138-5b4f62695ac3,Namespace:calico-system,Attempt:0,} returns sandbox id \"06b03833ff7af60b9c8415347e7379f69e68889c8b9f529059fffcc6b64627c5\"" Jan 14 01:26:06.363000 audit[4156]: NETFILTER_CFG table=filter:117 family=2 entries=22 op=nft_register_rule pid=4156 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:26:06.363000 audit[4156]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7fffaef5a340 a2=0 a3=7fffaef5a32c items=0 ppid=3724 pid=4156 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:06.363000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:26:06.368000 audit[4156]: NETFILTER_CFG table=nat:118 family=2 entries=12 op=nft_register_rule pid=4156 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:26:06.368000 audit[4156]: SYSCALL arch=c000003e syscall=46 
success=yes exit=2700 a0=3 a1=7fffaef5a340 a2=0 a3=0 items=0 ppid=3724 pid=4156 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:06.368000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:26:07.504416 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1028631752.mount: Deactivated successfully. Jan 14 01:26:08.130259 kubelet[3571]: E0114 01:26:08.130208 3571 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zrh2z" podUID="74b84cdc-323d-4b42-b95a-ceec7dfaa40f" Jan 14 01:26:09.423712 containerd[1962]: time="2026-01-14T01:26:09.423654007Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:26:09.425962 containerd[1962]: time="2026-01-14T01:26:09.425913835Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33735893" Jan 14 01:26:09.428333 containerd[1962]: time="2026-01-14T01:26:09.428272748Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:26:09.431528 containerd[1962]: time="2026-01-14T01:26:09.431313364Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:26:09.431977 containerd[1962]: time="2026-01-14T01:26:09.431934118Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 3.320556193s" Jan 14 01:26:09.431977 containerd[1962]: time="2026-01-14T01:26:09.431966413Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Jan 14 01:26:09.433767 containerd[1962]: time="2026-01-14T01:26:09.433431001Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Jan 14 01:26:09.466654 containerd[1962]: time="2026-01-14T01:26:09.466607068Z" level=info msg="CreateContainer within sandbox \"c212f758116c3c2a37bbb6f82e1ee23dbcddc2747fcdc0b0b82d6ef1cf57d22f\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 14 01:26:09.506569 containerd[1962]: time="2026-01-14T01:26:09.506123350Z" level=info msg="Container 13634d82a9180fbbbf85d752863ef16075af31657f162d6abe71192eae1f203b: CDI devices from CRI Config.CDIDevices: []" Jan 14 01:26:09.520901 containerd[1962]: time="2026-01-14T01:26:09.520853056Z" level=info msg="CreateContainer within sandbox \"c212f758116c3c2a37bbb6f82e1ee23dbcddc2747fcdc0b0b82d6ef1cf57d22f\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id 
\"13634d82a9180fbbbf85d752863ef16075af31657f162d6abe71192eae1f203b\"" Jan 14 01:26:09.521847 containerd[1962]: time="2026-01-14T01:26:09.521804226Z" level=info msg="StartContainer for \"13634d82a9180fbbbf85d752863ef16075af31657f162d6abe71192eae1f203b\"" Jan 14 01:26:09.523571 containerd[1962]: time="2026-01-14T01:26:09.523500611Z" level=info msg="connecting to shim 13634d82a9180fbbbf85d752863ef16075af31657f162d6abe71192eae1f203b" address="unix:///run/containerd/s/4037d2c0af8ec45a32de94a1044ee1c1136d4165e6c9b81e1353b1a08a539403" protocol=ttrpc version=3 Jan 14 01:26:09.558042 systemd[1]: Started cri-containerd-13634d82a9180fbbbf85d752863ef16075af31657f162d6abe71192eae1f203b.scope - libcontainer container 13634d82a9180fbbbf85d752863ef16075af31657f162d6abe71192eae1f203b. Jan 14 01:26:09.579378 kernel: kauditd_printk_skb: 64 callbacks suppressed Jan 14 01:26:09.579513 kernel: audit: type=1334 audit(1768353969.574:566): prog-id=168 op=LOAD Jan 14 01:26:09.574000 audit: BPF prog-id=168 op=LOAD Jan 14 01:26:09.575000 audit: BPF prog-id=169 op=LOAD Jan 14 01:26:09.587535 kernel: audit: type=1334 audit(1768353969.575:567): prog-id=169 op=LOAD Jan 14 01:26:09.587699 kernel: audit: type=1300 audit(1768353969.575:567): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8238 a2=98 a3=0 items=0 ppid=3997 pid=4169 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:09.575000 audit[4169]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8238 a2=98 a3=0 items=0 ppid=3997 pid=4169 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:09.575000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3133363334643832613931383066626262663835643735323836336566 Jan 14 01:26:09.595650 kernel: audit: type=1327 audit(1768353969.575:567): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3133363334643832613931383066626262663835643735323836336566 Jan 14 01:26:09.595787 kernel: audit: type=1334 audit(1768353969.575:568): prog-id=169 op=UNLOAD Jan 14 01:26:09.575000 audit: BPF prog-id=169 op=UNLOAD Jan 14 01:26:09.602114 kernel: audit: type=1300 audit(1768353969.575:568): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3997 pid=4169 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:09.575000 audit[4169]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3997 pid=4169 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:09.575000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3133363334643832613931383066626262663835643735323836336566 Jan 14 01:26:09.612289 kernel: audit: type=1327 audit(1768353969.575:568): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3133363334643832613931383066626262663835643735323836336566 Jan 14 01:26:09.612420 kernel: audit: type=1334 audit(1768353969.576:569): prog-id=170 op=LOAD Jan 14 01:26:09.576000 audit: BPF prog-id=170 op=LOAD Jan 14 01:26:09.576000 audit[4169]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=3997 pid=4169 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:09.625862 kernel: audit: type=1300 audit(1768353969.576:569): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=3997 pid=4169 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:09.625984 kernel: audit: type=1327 audit(1768353969.576:569): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3133363334643832613931383066626262663835643735323836336566 Jan 14 01:26:09.576000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3133363334643832613931383066626262663835643735323836336566 Jan 14 01:26:09.576000 audit: BPF prog-id=171 op=LOAD Jan 14 01:26:09.576000 audit[4169]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=3997 pid=4169 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:09.576000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3133363334643832613931383066626262663835643735323836336566 Jan 14 01:26:09.576000 audit: BPF prog-id=171 op=UNLOAD Jan 14 01:26:09.576000 audit[4169]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3997 pid=4169 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:09.576000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3133363334643832613931383066626262663835643735323836336566 Jan 14 01:26:09.576000 audit: BPF prog-id=170 op=UNLOAD Jan 14 01:26:09.576000 audit[4169]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3997 pid=4169 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:09.576000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3133363334643832613931383066626262663835643735323836336566 Jan 14 01:26:09.576000 audit: BPF prog-id=172 op=LOAD Jan 14 01:26:09.576000 audit[4169]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 ppid=3997 pid=4169 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:09.576000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3133363334643832613931383066626262663835643735323836336566 Jan 14 01:26:09.648759 containerd[1962]: time="2026-01-14T01:26:09.648715805Z" level=info msg="StartContainer for \"13634d82a9180fbbbf85d752863ef16075af31657f162d6abe71192eae1f203b\" returns successfully" Jan 14 01:26:10.128483 kubelet[3571]: E0114 01:26:10.128410 3571 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zrh2z" podUID="74b84cdc-323d-4b42-b95a-ceec7dfaa40f" Jan 14 01:26:10.443105 kubelet[3571]: I0114 01:26:10.442516 3571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6c8df8b558-smwgv" podStartSLOduration=2.119281526 podStartE2EDuration="5.442496697s" podCreationTimestamp="2026-01-14 01:26:05 +0000 UTC" firstStartedPulling="2026-01-14 01:26:06.109602738 +0000 UTC m=+58.197890876" lastFinishedPulling="2026-01-14 01:26:09.432817893 +0000 UTC m=+61.521106047" observedRunningTime="2026-01-14 01:26:10.427575103 +0000 UTC m=+62.515863258" watchObservedRunningTime="2026-01-14 01:26:10.442496697 +0000 UTC m=+62.530784851" Jan 14 01:26:10.467000 audit[4213]: NETFILTER_CFG table=filter:119 family=2 entries=21 op=nft_register_rule pid=4213 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:26:10.467000 audit[4213]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffe44e6f0c0 a2=0 a3=7ffe44e6f0ac items=0 ppid=3724 pid=4213 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:10.467000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:26:10.469895 kubelet[3571]: E0114 01:26:10.469854 3571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:26:10.469895 kubelet[3571]: W0114 01:26:10.469880 3571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], 
error: executable file not found in $PATH, output: "" Jan 14 01:26:10.470150 kubelet[3571]: E0114 01:26:10.469907 3571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:26:10.470150 kubelet[3571]: E0114 01:26:10.470141 3571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:26:10.470390 kubelet[3571]: W0114 01:26:10.470152 3571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:26:10.470390 kubelet[3571]: E0114 01:26:10.470167 3571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:26:10.470390 kubelet[3571]: E0114 01:26:10.470362 3571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:26:10.470390 kubelet[3571]: W0114 01:26:10.470373 3571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:26:10.470605 kubelet[3571]: E0114 01:26:10.470394 3571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:26:10.470934 kubelet[3571]: E0114 01:26:10.470910 3571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:26:10.470934 kubelet[3571]: W0114 01:26:10.470930 3571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:26:10.471263 kubelet[3571]: E0114 01:26:10.470945 3571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:26:10.471310 kubelet[3571]: E0114 01:26:10.471299 3571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:26:10.471356 kubelet[3571]: W0114 01:26:10.471311 3571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:26:10.471356 kubelet[3571]: E0114 01:26:10.471324 3571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:26:10.471639 kubelet[3571]: E0114 01:26:10.471618 3571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:26:10.471715 kubelet[3571]: W0114 01:26:10.471657 3571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:26:10.471715 kubelet[3571]: E0114 01:26:10.471671 3571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:26:10.472581 kubelet[3571]: E0114 01:26:10.471979 3571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:26:10.472581 kubelet[3571]: W0114 01:26:10.471993 3571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:26:10.472581 kubelet[3571]: E0114 01:26:10.472005 3571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:26:10.472581 kubelet[3571]: E0114 01:26:10.472305 3571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:26:10.472581 kubelet[3571]: W0114 01:26:10.472316 3571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:26:10.472581 kubelet[3571]: E0114 01:26:10.472331 3571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:26:10.472894 kubelet[3571]: E0114 01:26:10.472641 3571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:26:10.472894 kubelet[3571]: W0114 01:26:10.472667 3571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:26:10.472894 kubelet[3571]: E0114 01:26:10.472681 3571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:26:10.472894 kubelet[3571]: E0114 01:26:10.472889 3571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:26:10.473074 kubelet[3571]: W0114 01:26:10.472899 3571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:26:10.473074 kubelet[3571]: E0114 01:26:10.472939 3571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:26:10.473649 kubelet[3571]: E0114 01:26:10.473201 3571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:26:10.473649 kubelet[3571]: W0114 01:26:10.473215 3571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:26:10.473649 kubelet[3571]: E0114 01:26:10.473227 3571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:26:10.473649 kubelet[3571]: E0114 01:26:10.473476 3571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:26:10.473649 kubelet[3571]: W0114 01:26:10.473589 3571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:26:10.473649 kubelet[3571]: E0114 01:26:10.473602 3571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:26:10.473959 kubelet[3571]: E0114 01:26:10.473904 3571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:26:10.473959 kubelet[3571]: W0114 01:26:10.473914 3571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:26:10.473959 kubelet[3571]: E0114 01:26:10.473926 3571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:26:10.474243 kubelet[3571]: E0114 01:26:10.474225 3571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:26:10.474243 kubelet[3571]: W0114 01:26:10.474243 3571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:26:10.474351 kubelet[3571]: E0114 01:26:10.474275 3571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:26:10.474648 kubelet[3571]: E0114 01:26:10.474607 3571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:26:10.474648 kubelet[3571]: W0114 01:26:10.474621 3571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:26:10.474648 kubelet[3571]: E0114 01:26:10.474632 3571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:26:10.474000 audit[4213]: NETFILTER_CFG table=nat:120 family=2 entries=19 op=nft_register_chain pid=4213 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:26:10.474000 audit[4213]: SYSCALL arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffe44e6f0c0 a2=0 a3=7ffe44e6f0ac items=0 ppid=3724 pid=4213 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:10.474000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:26:10.517964 kubelet[3571]: E0114 01:26:10.517923 3571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:26:10.517964 kubelet[3571]: W0114 01:26:10.517950 3571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:26:10.517964 kubelet[3571]: E0114 01:26:10.517973 3571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:26:10.518357 kubelet[3571]: E0114 01:26:10.518193 3571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:26:10.518357 kubelet[3571]: W0114 01:26:10.518200 3571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:26:10.518357 kubelet[3571]: E0114 01:26:10.518208 3571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:26:10.518675 kubelet[3571]: E0114 01:26:10.518512 3571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:26:10.518675 kubelet[3571]: W0114 01:26:10.518527 3571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:26:10.518675 kubelet[3571]: E0114 01:26:10.518539 3571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:26:10.518968 kubelet[3571]: E0114 01:26:10.518947 3571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:26:10.518968 kubelet[3571]: W0114 01:26:10.518962 3571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:26:10.518968 kubelet[3571]: E0114 01:26:10.518974 3571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:26:10.519188 kubelet[3571]: E0114 01:26:10.519154 3571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:26:10.519188 kubelet[3571]: W0114 01:26:10.519167 3571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:26:10.519339 kubelet[3571]: E0114 01:26:10.519210 3571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:26:10.519468 kubelet[3571]: E0114 01:26:10.519445 3571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:26:10.519468 kubelet[3571]: W0114 01:26:10.519460 3571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:26:10.519583 kubelet[3571]: E0114 01:26:10.519472 3571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:26:10.519865 kubelet[3571]: E0114 01:26:10.519851 3571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:26:10.519865 kubelet[3571]: W0114 01:26:10.519862 3571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:26:10.519940 kubelet[3571]: E0114 01:26:10.519878 3571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:26:10.520113 kubelet[3571]: E0114 01:26:10.520100 3571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:26:10.520113 kubelet[3571]: W0114 01:26:10.520109 3571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:26:10.520214 kubelet[3571]: E0114 01:26:10.520117 3571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:26:10.520323 kubelet[3571]: E0114 01:26:10.520310 3571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:26:10.520323 kubelet[3571]: W0114 01:26:10.520319 3571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:26:10.520390 kubelet[3571]: E0114 01:26:10.520327 3571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:26:10.520614 kubelet[3571]: E0114 01:26:10.520594 3571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:26:10.520614 kubelet[3571]: W0114 01:26:10.520611 3571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:26:10.520686 kubelet[3571]: E0114 01:26:10.520623 3571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:26:10.520915 kubelet[3571]: E0114 01:26:10.520899 3571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:26:10.520915 kubelet[3571]: W0114 01:26:10.520912 3571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:26:10.520980 kubelet[3571]: E0114 01:26:10.520920 3571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:26:10.521124 kubelet[3571]: E0114 01:26:10.521109 3571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:26:10.521124 kubelet[3571]: W0114 01:26:10.521120 3571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:26:10.521188 kubelet[3571]: E0114 01:26:10.521129 3571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:26:10.521689 kubelet[3571]: E0114 01:26:10.521475 3571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:26:10.521689 kubelet[3571]: W0114 01:26:10.521652 3571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:26:10.521689 kubelet[3571]: E0114 01:26:10.521669 3571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:26:10.522140 kubelet[3571]: E0114 01:26:10.522118 3571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:26:10.522140 kubelet[3571]: W0114 01:26:10.522132 3571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:26:10.522256 kubelet[3571]: E0114 01:26:10.522145 3571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:26:10.522411 kubelet[3571]: E0114 01:26:10.522395 3571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:26:10.522411 kubelet[3571]: W0114 01:26:10.522408 3571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:26:10.522531 kubelet[3571]: E0114 01:26:10.522421 3571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:26:10.522720 kubelet[3571]: E0114 01:26:10.522703 3571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:26:10.522720 kubelet[3571]: W0114 01:26:10.522717 3571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:26:10.522822 kubelet[3571]: E0114 01:26:10.522730 3571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:26:10.523028 kubelet[3571]: E0114 01:26:10.523008 3571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:26:10.523028 kubelet[3571]: W0114 01:26:10.523022 3571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:26:10.523132 kubelet[3571]: E0114 01:26:10.523034 3571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:26:10.523637 kubelet[3571]: E0114 01:26:10.523611 3571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:26:10.523637 kubelet[3571]: W0114 01:26:10.523626 3571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:26:10.523725 kubelet[3571]: E0114 01:26:10.523639 3571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:26:10.882274 containerd[1962]: time="2026-01-14T01:26:10.882220923Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:26:10.883410 containerd[1962]: time="2026-01-14T01:26:10.883276212Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=0" Jan 14 01:26:10.884584 containerd[1962]: time="2026-01-14T01:26:10.884431176Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:26:10.887215 containerd[1962]: time="2026-01-14T01:26:10.887181731Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:26:10.887808 containerd[1962]: time="2026-01-14T01:26:10.887780046Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.454050646s" Jan 14 01:26:10.887908 containerd[1962]: time="2026-01-14T01:26:10.887884759Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Jan 14 01:26:10.895177 containerd[1962]: time="2026-01-14T01:26:10.895136195Z" level=info msg="CreateContainer within sandbox \"06b03833ff7af60b9c8415347e7379f69e68889c8b9f529059fffcc6b64627c5\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 14 01:26:10.910574 containerd[1962]: time="2026-01-14T01:26:10.908875697Z" level=info msg="Container dbbf2f53ce236188ee70ed9710b041cf774e76c7ca0f5aa961a2df60b8dc9425: CDI devices from CRI Config.CDIDevices: []" Jan 14 01:26:10.922901 containerd[1962]: time="2026-01-14T01:26:10.922832483Z" level=info msg="CreateContainer within sandbox \"06b03833ff7af60b9c8415347e7379f69e68889c8b9f529059fffcc6b64627c5\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"dbbf2f53ce236188ee70ed9710b041cf774e76c7ca0f5aa961a2df60b8dc9425\"" Jan 14 01:26:10.923698 containerd[1962]: time="2026-01-14T01:26:10.923628609Z" level=info msg="StartContainer for \"dbbf2f53ce236188ee70ed9710b041cf774e76c7ca0f5aa961a2df60b8dc9425\"" Jan 14 01:26:10.925326 containerd[1962]: time="2026-01-14T01:26:10.925290535Z" level=info msg="connecting to shim dbbf2f53ce236188ee70ed9710b041cf774e76c7ca0f5aa961a2df60b8dc9425" address="unix:///run/containerd/s/21f67572a10b776ac80f2d677cbaeceef64d94a6ba9c84d7610fae711862653f" protocol=ttrpc version=3 Jan 14 01:26:10.954849 systemd[1]: Started cri-containerd-dbbf2f53ce236188ee70ed9710b041cf774e76c7ca0f5aa961a2df60b8dc9425.scope - libcontainer container dbbf2f53ce236188ee70ed9710b041cf774e76c7ca0f5aa961a2df60b8dc9425. 
Jan 14 01:26:11.010000 audit: BPF prog-id=173 op=LOAD Jan 14 01:26:11.010000 audit[4251]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000206488 a2=98 a3=0 items=0 ppid=4068 pid=4251 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:11.010000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6462626632663533636532333631383865653730656439373130623034 Jan 14 01:26:11.010000 audit: BPF prog-id=174 op=LOAD Jan 14 01:26:11.010000 audit[4251]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000206218 a2=98 a3=0 items=0 ppid=4068 pid=4251 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:11.010000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6462626632663533636532333631383865653730656439373130623034 Jan 14 01:26:11.010000 audit: BPF prog-id=174 op=UNLOAD Jan 14 01:26:11.010000 audit[4251]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4068 pid=4251 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:11.010000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6462626632663533636532333631383865653730656439373130623034 Jan 14 01:26:11.010000 audit: BPF prog-id=173 op=UNLOAD Jan 14 01:26:11.010000 audit[4251]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4068 pid=4251 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:11.010000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6462626632663533636532333631383865653730656439373130623034 Jan 14 01:26:11.010000 audit: BPF prog-id=175 op=LOAD Jan 14 01:26:11.010000 audit[4251]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0002066e8 a2=98 a3=0 items=0 ppid=4068 pid=4251 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:11.010000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6462626632663533636532333631383865653730656439373130623034 Jan 14 01:26:11.048071 containerd[1962]: time="2026-01-14T01:26:11.048024245Z" level=info msg="StartContainer for 
\"dbbf2f53ce236188ee70ed9710b041cf774e76c7ca0f5aa961a2df60b8dc9425\" returns successfully" Jan 14 01:26:11.054589 systemd[1]: cri-containerd-dbbf2f53ce236188ee70ed9710b041cf774e76c7ca0f5aa961a2df60b8dc9425.scope: Deactivated successfully. Jan 14 01:26:11.057000 audit: BPF prog-id=175 op=UNLOAD Jan 14 01:26:11.076216 containerd[1962]: time="2026-01-14T01:26:11.076160657Z" level=info msg="received container exit event container_id:\"dbbf2f53ce236188ee70ed9710b041cf774e76c7ca0f5aa961a2df60b8dc9425\" id:\"dbbf2f53ce236188ee70ed9710b041cf774e76c7ca0f5aa961a2df60b8dc9425\" pid:4264 exited_at:{seconds:1768353971 nanos:59118649}" Jan 14 01:26:11.108019 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dbbf2f53ce236188ee70ed9710b041cf774e76c7ca0f5aa961a2df60b8dc9425-rootfs.mount: Deactivated successfully. Jan 14 01:26:11.415753 containerd[1962]: time="2026-01-14T01:26:11.415704427Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Jan 14 01:26:12.128805 kubelet[3571]: E0114 01:26:12.128746 3571 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zrh2z" podUID="74b84cdc-323d-4b42-b95a-ceec7dfaa40f" Jan 14 01:26:14.128575 kubelet[3571]: E0114 01:26:14.128484 3571 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zrh2z" podUID="74b84cdc-323d-4b42-b95a-ceec7dfaa40f" Jan 14 01:26:16.130262 kubelet[3571]: E0114 01:26:16.130161 3571 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zrh2z" podUID="74b84cdc-323d-4b42-b95a-ceec7dfaa40f" Jan 14 01:26:17.512880 containerd[1962]: time="2026-01-14T01:26:17.512775078Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:26:17.514174 containerd[1962]: time="2026-01-14T01:26:17.514015823Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70442291" Jan 14 01:26:17.515262 containerd[1962]: time="2026-01-14T01:26:17.515223822Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:26:17.518589 containerd[1962]: time="2026-01-14T01:26:17.517828118Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:26:17.518589 containerd[1962]: time="2026-01-14T01:26:17.518416233Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 6.102669739s" Jan 14 01:26:17.518589 containerd[1962]: 
time="2026-01-14T01:26:17.518449763Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Jan 14 01:26:17.539062 containerd[1962]: time="2026-01-14T01:26:17.539007118Z" level=info msg="CreateContainer within sandbox \"06b03833ff7af60b9c8415347e7379f69e68889c8b9f529059fffcc6b64627c5\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 14 01:26:17.552597 containerd[1962]: time="2026-01-14T01:26:17.551802416Z" level=info msg="Container 80f037a5c21706b0a98a0002d35310ed7a2b1724fa31f7c8ea304d791b9729ce: CDI devices from CRI Config.CDIDevices: []" Jan 14 01:26:17.564747 containerd[1962]: time="2026-01-14T01:26:17.564686502Z" level=info msg="CreateContainer within sandbox \"06b03833ff7af60b9c8415347e7379f69e68889c8b9f529059fffcc6b64627c5\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"80f037a5c21706b0a98a0002d35310ed7a2b1724fa31f7c8ea304d791b9729ce\"" Jan 14 01:26:17.565699 containerd[1962]: time="2026-01-14T01:26:17.565667500Z" level=info msg="StartContainer for \"80f037a5c21706b0a98a0002d35310ed7a2b1724fa31f7c8ea304d791b9729ce\"" Jan 14 01:26:17.577211 containerd[1962]: time="2026-01-14T01:26:17.577154416Z" level=info msg="connecting to shim 80f037a5c21706b0a98a0002d35310ed7a2b1724fa31f7c8ea304d791b9729ce" address="unix:///run/containerd/s/21f67572a10b776ac80f2d677cbaeceef64d94a6ba9c84d7610fae711862653f" protocol=ttrpc version=3 Jan 14 01:26:17.635832 systemd[1]: Started cri-containerd-80f037a5c21706b0a98a0002d35310ed7a2b1724fa31f7c8ea304d791b9729ce.scope - libcontainer container 80f037a5c21706b0a98a0002d35310ed7a2b1724fa31f7c8ea304d791b9729ce. Jan 14 01:26:17.728667 kernel: kauditd_printk_skb: 34 callbacks suppressed Jan 14 01:26:17.728803 kernel: audit: type=1334 audit(1768353977.725:582): prog-id=176 op=LOAD Jan 14 01:26:17.729057 kernel: audit: type=1300 audit(1768353977.725:582): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4068 pid=4307 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:17.725000 audit: BPF prog-id=176 op=LOAD Jan 14 01:26:17.725000 audit[4307]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4068 pid=4307 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:17.725000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3830663033376135633231373036623061393861303030326433353331 Jan 14 01:26:17.742591 kernel: audit: type=1327 audit(1768353977.725:582): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3830663033376135633231373036623061393861303030326433353331 Jan 14 01:26:17.725000 audit: BPF prog-id=177 op=LOAD Jan 14 01:26:17.725000 audit[4307]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=4068 pid=4307 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:17.747048 kernel: audit: type=1334 audit(1768353977.725:583): prog-id=177 op=LOAD Jan 14 01:26:17.747290 kernel: audit: type=1300 audit(1768353977.725:583): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=4068 pid=4307 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:17.725000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3830663033376135633231373036623061393861303030326433353331 Jan 14 01:26:17.752996 kernel: audit: type=1327 audit(1768353977.725:583): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3830663033376135633231373036623061393861303030326433353331 Jan 14 01:26:17.725000 audit: BPF prog-id=177 op=UNLOAD Jan 14 01:26:17.758373 kernel: audit: type=1334 audit(1768353977.725:584): prog-id=177 op=UNLOAD Jan 14 01:26:17.758464 kernel: audit: type=1300 audit(1768353977.725:584): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4068 pid=4307 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:17.725000 audit[4307]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4068 pid=4307 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:17.725000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3830663033376135633231373036623061393861303030326433353331 Jan 14 01:26:17.766347 kernel: audit: type=1327 audit(1768353977.725:584): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3830663033376135633231373036623061393861303030326433353331 Jan 14 01:26:17.725000 audit: BPF prog-id=176 op=UNLOAD Jan 14 01:26:17.770726 kernel: audit: type=1334 audit(1768353977.725:585): prog-id=176 op=UNLOAD Jan 14 01:26:17.725000 audit[4307]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4068 pid=4307 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:17.725000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3830663033376135633231373036623061393861303030326433353331 Jan 14 01:26:17.725000 audit: BPF prog-id=178 op=LOAD Jan 14 01:26:17.725000 audit[4307]: SYSCALL arch=c000003e syscall=321 
success=yes exit=20 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=4068 pid=4307 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:17.725000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3830663033376135633231373036623061393861303030326433353331 Jan 14 01:26:17.807171 containerd[1962]: time="2026-01-14T01:26:17.803234875Z" level=info msg="StartContainer for \"80f037a5c21706b0a98a0002d35310ed7a2b1724fa31f7c8ea304d791b9729ce\" returns successfully" Jan 14 01:26:18.131571 kubelet[3571]: E0114 01:26:18.129666 3571 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zrh2z" podUID="74b84cdc-323d-4b42-b95a-ceec7dfaa40f" Jan 14 01:26:18.709768 systemd[1]: cri-containerd-80f037a5c21706b0a98a0002d35310ed7a2b1724fa31f7c8ea304d791b9729ce.scope: Deactivated successfully. Jan 14 01:26:18.710092 systemd[1]: cri-containerd-80f037a5c21706b0a98a0002d35310ed7a2b1724fa31f7c8ea304d791b9729ce.scope: Consumed 609ms CPU time, 155.3M memory peak, 8.4M read from disk, 171.3M written to disk. Jan 14 01:26:18.713000 audit: BPF prog-id=178 op=UNLOAD Jan 14 01:26:18.741851 containerd[1962]: time="2026-01-14T01:26:18.719414213Z" level=info msg="received container exit event container_id:\"80f037a5c21706b0a98a0002d35310ed7a2b1724fa31f7c8ea304d791b9729ce\" id:\"80f037a5c21706b0a98a0002d35310ed7a2b1724fa31f7c8ea304d791b9729ce\" pid:4320 exited_at:{seconds:1768353978 nanos:716737947}" Jan 14 01:26:18.775893 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-80f037a5c21706b0a98a0002d35310ed7a2b1724fa31f7c8ea304d791b9729ce-rootfs.mount: Deactivated successfully. Jan 14 01:26:18.796390 kubelet[3571]: I0114 01:26:18.796075 3571 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 14 01:26:19.059587 systemd[1]: Created slice kubepods-burstable-pod257e1b40_fca0_492c_87be_45f394e92bdc.slice - libcontainer container kubepods-burstable-pod257e1b40_fca0_492c_87be_45f394e92bdc.slice. Jan 14 01:26:19.072238 systemd[1]: Created slice kubepods-besteffort-pod525cee0b_846b_43ba_9b3e_e192ccc373a0.slice - libcontainer container kubepods-besteffort-pod525cee0b_846b_43ba_9b3e_e192ccc373a0.slice. Jan 14 01:26:19.110938 systemd[1]: Created slice kubepods-besteffort-pod4263d4be_fc9d_471e_8df9_42f06716a4f0.slice - libcontainer container kubepods-besteffort-pod4263d4be_fc9d_471e_8df9_42f06716a4f0.slice. Jan 14 01:26:19.121919 systemd[1]: Created slice kubepods-besteffort-pod89c07bdd_9dad_4c41_8dfe_3de894f6f743.slice - libcontainer container kubepods-besteffort-pod89c07bdd_9dad_4c41_8dfe_3de894f6f743.slice. Jan 14 01:26:19.130516 systemd[1]: Created slice kubepods-besteffort-pod993f578d_b707_42bd_b6e9_14c5aa23a03f.slice - libcontainer container kubepods-besteffort-pod993f578d_b707_42bd_b6e9_14c5aa23a03f.slice. Jan 14 01:26:19.140949 systemd[1]: Created slice kubepods-besteffort-pod4fe5960b_c32f_4a52_8f07_a5dac29b6214.slice - libcontainer container kubepods-besteffort-pod4fe5960b_c32f_4a52_8f07_a5dac29b6214.slice. 
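
Note (annotation, not part of the journal): the long PROCTITLE fields in the audit records above are the runc command line, hex-encoded with NUL-separated arguments. A minimal decoding sketch in Python, assuming only that the blob is plain hexadecimal; the helper name is hypothetical:

    # Decode an audit PROCTITLE blob: hex-encoded argv joined by NUL bytes.
    def decode_proctitle(hex_blob: str) -> str:
        return bytes.fromhex(hex_blob).replace(b"\x00", b" ").decode()

    # Prefix of the PROCTITLE value logged above for the install-cni task.
    print(decode_proctitle(
        "72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F"
    ))
    # -> runc --root /run/containerd/runc/k8s.io
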
Jan 14 01:26:19.156304 systemd[1]: Created slice kubepods-burstable-pod2a03d66f_dd24_454c_84df_6cece7cb808c.slice - libcontainer container kubepods-burstable-pod2a03d66f_dd24_454c_84df_6cece7cb808c.slice. Jan 14 01:26:19.163414 systemd[1]: Created slice kubepods-besteffort-pod56394477_d28d_42eb_bee5_a9a20263c11f.slice - libcontainer container kubepods-besteffort-pod56394477_d28d_42eb_bee5_a9a20263c11f.slice. Jan 14 01:26:19.189019 kubelet[3571]: I0114 01:26:19.188859 3571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/993f578d-b707-42bd-b6e9-14c5aa23a03f-goldmane-ca-bundle\") pod \"goldmane-666569f655-5mdck\" (UID: \"993f578d-b707-42bd-b6e9-14c5aa23a03f\") " pod="calico-system/goldmane-666569f655-5mdck" Jan 14 01:26:19.190734 kubelet[3571]: I0114 01:26:19.190666 3571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zcshs\" (UniqueName: \"kubernetes.io/projected/4263d4be-fc9d-471e-8df9-42f06716a4f0-kube-api-access-zcshs\") pod \"calico-apiserver-574d5c8798-6q7jw\" (UID: \"4263d4be-fc9d-471e-8df9-42f06716a4f0\") " pod="calico-apiserver/calico-apiserver-574d5c8798-6q7jw" Jan 14 01:26:19.190848 kubelet[3571]: I0114 01:26:19.190801 3571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/4263d4be-fc9d-471e-8df9-42f06716a4f0-calico-apiserver-certs\") pod \"calico-apiserver-574d5c8798-6q7jw\" (UID: \"4263d4be-fc9d-471e-8df9-42f06716a4f0\") " pod="calico-apiserver/calico-apiserver-574d5c8798-6q7jw" Jan 14 01:26:19.190901 kubelet[3571]: I0114 01:26:19.190837 3571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/56394477-d28d-42eb-bee5-a9a20263c11f-calico-apiserver-certs\") pod \"calico-apiserver-65b9745b8-fxksz\" (UID: \"56394477-d28d-42eb-bee5-a9a20263c11f\") " pod="calico-apiserver/calico-apiserver-65b9745b8-fxksz" Jan 14 01:26:19.190950 kubelet[3571]: I0114 01:26:19.190915 3571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4r5x\" (UniqueName: \"kubernetes.io/projected/56394477-d28d-42eb-bee5-a9a20263c11f-kube-api-access-h4r5x\") pod \"calico-apiserver-65b9745b8-fxksz\" (UID: \"56394477-d28d-42eb-bee5-a9a20263c11f\") " pod="calico-apiserver/calico-apiserver-65b9745b8-fxksz" Jan 14 01:26:19.190999 kubelet[3571]: I0114 01:26:19.190987 3571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9bkhl\" (UniqueName: \"kubernetes.io/projected/2a03d66f-dd24-454c-84df-6cece7cb808c-kube-api-access-9bkhl\") pod \"coredns-674b8bbfcf-k7zqh\" (UID: \"2a03d66f-dd24-454c-84df-6cece7cb808c\") " pod="kube-system/coredns-674b8bbfcf-k7zqh" Jan 14 01:26:19.191083 kubelet[3571]: I0114 01:26:19.191066 3571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/4fe5960b-c32f-4a52-8f07-a5dac29b6214-whisker-backend-key-pair\") pod \"whisker-84845f84c5-f7vsl\" (UID: \"4fe5960b-c32f-4a52-8f07-a5dac29b6214\") " pod="calico-system/whisker-84845f84c5-f7vsl" Jan 14 01:26:19.191173 kubelet[3571]: I0114 01:26:19.191145 3571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4fe5960b-c32f-4a52-8f07-a5dac29b6214-whisker-ca-bundle\") pod \"whisker-84845f84c5-f7vsl\" (UID: \"4fe5960b-c32f-4a52-8f07-a5dac29b6214\") " pod="calico-system/whisker-84845f84c5-f7vsl" Jan 14 01:26:19.191238 kubelet[3571]: I0114 01:26:19.191221 3571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2a03d66f-dd24-454c-84df-6cece7cb808c-config-volume\") pod \"coredns-674b8bbfcf-k7zqh\" (UID: \"2a03d66f-dd24-454c-84df-6cece7cb808c\") " pod="kube-system/coredns-674b8bbfcf-k7zqh" Jan 14 01:26:19.191325 kubelet[3571]: I0114 01:26:19.191309 3571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/993f578d-b707-42bd-b6e9-14c5aa23a03f-config\") pod \"goldmane-666569f655-5mdck\" (UID: \"993f578d-b707-42bd-b6e9-14c5aa23a03f\") " pod="calico-system/goldmane-666569f655-5mdck" Jan 14 01:26:19.191399 kubelet[3571]: I0114 01:26:19.191384 3571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zsbb\" (UniqueName: \"kubernetes.io/projected/4fe5960b-c32f-4a52-8f07-a5dac29b6214-kube-api-access-5zsbb\") pod \"whisker-84845f84c5-f7vsl\" (UID: \"4fe5960b-c32f-4a52-8f07-a5dac29b6214\") " pod="calico-system/whisker-84845f84c5-f7vsl" Jan 14 01:26:19.191480 kubelet[3571]: I0114 01:26:19.191464 3571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bnbz2\" (UniqueName: \"kubernetes.io/projected/89c07bdd-9dad-4c41-8dfe-3de894f6f743-kube-api-access-bnbz2\") pod \"calico-kube-controllers-5799fb7c8b-6l2xc\" (UID: \"89c07bdd-9dad-4c41-8dfe-3de894f6f743\") " pod="calico-system/calico-kube-controllers-5799fb7c8b-6l2xc" Jan 14 01:26:19.191592 kubelet[3571]: I0114 01:26:19.191494 3571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/993f578d-b707-42bd-b6e9-14c5aa23a03f-goldmane-key-pair\") pod \"goldmane-666569f655-5mdck\" (UID: \"993f578d-b707-42bd-b6e9-14c5aa23a03f\") " pod="calico-system/goldmane-666569f655-5mdck" Jan 14 01:26:19.191646 kubelet[3571]: I0114 01:26:19.191617 3571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qf4gh\" (UniqueName: \"kubernetes.io/projected/257e1b40-fca0-492c-87be-45f394e92bdc-kube-api-access-qf4gh\") pod \"coredns-674b8bbfcf-2rswm\" (UID: \"257e1b40-fca0-492c-87be-45f394e92bdc\") " pod="kube-system/coredns-674b8bbfcf-2rswm" Jan 14 01:26:19.191723 kubelet[3571]: I0114 01:26:19.191707 3571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/89c07bdd-9dad-4c41-8dfe-3de894f6f743-tigera-ca-bundle\") pod \"calico-kube-controllers-5799fb7c8b-6l2xc\" (UID: \"89c07bdd-9dad-4c41-8dfe-3de894f6f743\") " pod="calico-system/calico-kube-controllers-5799fb7c8b-6l2xc" Jan 14 01:26:19.191782 kubelet[3571]: I0114 01:26:19.191769 3571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zblph\" (UniqueName: \"kubernetes.io/projected/993f578d-b707-42bd-b6e9-14c5aa23a03f-kube-api-access-zblph\") pod \"goldmane-666569f655-5mdck\" (UID: \"993f578d-b707-42bd-b6e9-14c5aa23a03f\") " 
pod="calico-system/goldmane-666569f655-5mdck" Jan 14 01:26:19.191837 kubelet[3571]: I0114 01:26:19.191796 3571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/525cee0b-846b-43ba-9b3e-e192ccc373a0-calico-apiserver-certs\") pod \"calico-apiserver-574d5c8798-9hhk4\" (UID: \"525cee0b-846b-43ba-9b3e-e192ccc373a0\") " pod="calico-apiserver/calico-apiserver-574d5c8798-9hhk4" Jan 14 01:26:19.191909 kubelet[3571]: I0114 01:26:19.191852 3571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/257e1b40-fca0-492c-87be-45f394e92bdc-config-volume\") pod \"coredns-674b8bbfcf-2rswm\" (UID: \"257e1b40-fca0-492c-87be-45f394e92bdc\") " pod="kube-system/coredns-674b8bbfcf-2rswm" Jan 14 01:26:19.191960 kubelet[3571]: I0114 01:26:19.191929 3571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdvfp\" (UniqueName: \"kubernetes.io/projected/525cee0b-846b-43ba-9b3e-e192ccc373a0-kube-api-access-jdvfp\") pod \"calico-apiserver-574d5c8798-9hhk4\" (UID: \"525cee0b-846b-43ba-9b3e-e192ccc373a0\") " pod="calico-apiserver/calico-apiserver-574d5c8798-9hhk4" Jan 14 01:26:19.366714 containerd[1962]: time="2026-01-14T01:26:19.366658402Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-2rswm,Uid:257e1b40-fca0-492c-87be-45f394e92bdc,Namespace:kube-system,Attempt:0,}" Jan 14 01:26:19.389397 containerd[1962]: time="2026-01-14T01:26:19.388828472Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-574d5c8798-9hhk4,Uid:525cee0b-846b-43ba-9b3e-e192ccc373a0,Namespace:calico-apiserver,Attempt:0,}" Jan 14 01:26:19.420478 containerd[1962]: time="2026-01-14T01:26:19.420249956Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-574d5c8798-6q7jw,Uid:4263d4be-fc9d-471e-8df9-42f06716a4f0,Namespace:calico-apiserver,Attempt:0,}" Jan 14 01:26:19.439867 containerd[1962]: time="2026-01-14T01:26:19.439819320Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5799fb7c8b-6l2xc,Uid:89c07bdd-9dad-4c41-8dfe-3de894f6f743,Namespace:calico-system,Attempt:0,}" Jan 14 01:26:19.460190 containerd[1962]: time="2026-01-14T01:26:19.460146206Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-84845f84c5-f7vsl,Uid:4fe5960b-c32f-4a52-8f07-a5dac29b6214,Namespace:calico-system,Attempt:0,}" Jan 14 01:26:19.469890 containerd[1962]: time="2026-01-14T01:26:19.469830158Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-5mdck,Uid:993f578d-b707-42bd-b6e9-14c5aa23a03f,Namespace:calico-system,Attempt:0,}" Jan 14 01:26:19.472997 containerd[1962]: time="2026-01-14T01:26:19.472930789Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-65b9745b8-fxksz,Uid:56394477-d28d-42eb-bee5-a9a20263c11f,Namespace:calico-apiserver,Attempt:0,}" Jan 14 01:26:19.474725 containerd[1962]: time="2026-01-14T01:26:19.474522105Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-k7zqh,Uid:2a03d66f-dd24-454c-84df-6cece7cb808c,Namespace:kube-system,Attempt:0,}" Jan 14 01:26:19.506244 containerd[1962]: time="2026-01-14T01:26:19.506193035Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 14 01:26:19.909219 containerd[1962]: time="2026-01-14T01:26:19.909161765Z" level=error msg="Failed to destroy 
network for sandbox \"9f0b2cfda4c2347956ef62ae8587fd634831aad471aa31ff04cc3e9c8ce4cc99\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:26:19.915895 systemd[1]: run-netns-cni\x2d5534e352\x2d176d\x2d1176\x2d030f\x2d1366337dcd1d.mount: Deactivated successfully. Jan 14 01:26:19.921884 containerd[1962]: time="2026-01-14T01:26:19.921542026Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5799fb7c8b-6l2xc,Uid:89c07bdd-9dad-4c41-8dfe-3de894f6f743,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9f0b2cfda4c2347956ef62ae8587fd634831aad471aa31ff04cc3e9c8ce4cc99\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:26:19.936213 kubelet[3571]: E0114 01:26:19.936070 3571 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9f0b2cfda4c2347956ef62ae8587fd634831aad471aa31ff04cc3e9c8ce4cc99\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:26:19.937459 kubelet[3571]: E0114 01:26:19.936626 3571 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9f0b2cfda4c2347956ef62ae8587fd634831aad471aa31ff04cc3e9c8ce4cc99\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5799fb7c8b-6l2xc" Jan 14 01:26:19.937459 kubelet[3571]: E0114 01:26:19.936667 3571 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9f0b2cfda4c2347956ef62ae8587fd634831aad471aa31ff04cc3e9c8ce4cc99\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5799fb7c8b-6l2xc" Jan 14 01:26:19.937459 kubelet[3571]: E0114 01:26:19.936741 3571 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5799fb7c8b-6l2xc_calico-system(89c07bdd-9dad-4c41-8dfe-3de894f6f743)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5799fb7c8b-6l2xc_calico-system(89c07bdd-9dad-4c41-8dfe-3de894f6f743)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9f0b2cfda4c2347956ef62ae8587fd634831aad471aa31ff04cc3e9c8ce4cc99\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5799fb7c8b-6l2xc" podUID="89c07bdd-9dad-4c41-8dfe-3de894f6f743" Jan 14 01:26:19.984101 containerd[1962]: time="2026-01-14T01:26:19.984042562Z" level=error msg="Failed to destroy network for sandbox \"7783bb6d806db9e2a2d310d68e543e715a21951b8a9cf07612d43dd519f3f5fb\"" error="plugin type=\"calico\" 
failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:26:19.988918 containerd[1962]: time="2026-01-14T01:26:19.987070432Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-574d5c8798-6q7jw,Uid:4263d4be-fc9d-471e-8df9-42f06716a4f0,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7783bb6d806db9e2a2d310d68e543e715a21951b8a9cf07612d43dd519f3f5fb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:26:19.989598 kubelet[3571]: E0114 01:26:19.989189 3571 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7783bb6d806db9e2a2d310d68e543e715a21951b8a9cf07612d43dd519f3f5fb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:26:19.989598 kubelet[3571]: E0114 01:26:19.989263 3571 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7783bb6d806db9e2a2d310d68e543e715a21951b8a9cf07612d43dd519f3f5fb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-574d5c8798-6q7jw" Jan 14 01:26:19.989598 kubelet[3571]: E0114 01:26:19.989290 3571 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7783bb6d806db9e2a2d310d68e543e715a21951b8a9cf07612d43dd519f3f5fb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-574d5c8798-6q7jw" Jan 14 01:26:19.989787 kubelet[3571]: E0114 01:26:19.989354 3571 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-574d5c8798-6q7jw_calico-apiserver(4263d4be-fc9d-471e-8df9-42f06716a4f0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-574d5c8798-6q7jw_calico-apiserver(4263d4be-fc9d-471e-8df9-42f06716a4f0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7783bb6d806db9e2a2d310d68e543e715a21951b8a9cf07612d43dd519f3f5fb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-574d5c8798-6q7jw" podUID="4263d4be-fc9d-471e-8df9-42f06716a4f0" Jan 14 01:26:19.991844 systemd[1]: run-netns-cni\x2d479c7244\x2d0587\x2d6579\x2d475a\x2d59a58f730456.mount: Deactivated successfully. 
Jan 14 01:26:20.010466 containerd[1962]: time="2026-01-14T01:26:20.010410061Z" level=error msg="Failed to destroy network for sandbox \"c39d56913494a3bc904c1e8ce042247f7a5af1257eaa13fa19c0034bf4afc39c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:26:20.013493 containerd[1962]: time="2026-01-14T01:26:20.013392713Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-65b9745b8-fxksz,Uid:56394477-d28d-42eb-bee5-a9a20263c11f,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c39d56913494a3bc904c1e8ce042247f7a5af1257eaa13fa19c0034bf4afc39c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:26:20.014224 kubelet[3571]: E0114 01:26:20.014150 3571 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c39d56913494a3bc904c1e8ce042247f7a5af1257eaa13fa19c0034bf4afc39c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:26:20.014420 kubelet[3571]: E0114 01:26:20.014311 3571 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c39d56913494a3bc904c1e8ce042247f7a5af1257eaa13fa19c0034bf4afc39c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-65b9745b8-fxksz" Jan 14 01:26:20.015022 kubelet[3571]: E0114 01:26:20.014501 3571 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c39d56913494a3bc904c1e8ce042247f7a5af1257eaa13fa19c0034bf4afc39c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-65b9745b8-fxksz" Jan 14 01:26:20.015302 kubelet[3571]: E0114 01:26:20.015164 3571 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-65b9745b8-fxksz_calico-apiserver(56394477-d28d-42eb-bee5-a9a20263c11f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-65b9745b8-fxksz_calico-apiserver(56394477-d28d-42eb-bee5-a9a20263c11f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c39d56913494a3bc904c1e8ce042247f7a5af1257eaa13fa19c0034bf4afc39c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-65b9745b8-fxksz" podUID="56394477-d28d-42eb-bee5-a9a20263c11f" Jan 14 01:26:20.016252 systemd[1]: run-netns-cni\x2da29af670\x2dc04b\x2d383b\x2d01ec\x2dff02e0dcbbe2.mount: Deactivated successfully. 
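
Note (annotation, not part of the journal): the \x2d sequences in the run-netns-cni... mount unit names above are not corruption; they are systemd's path escaping, in which "/" separators become "-" and literal hyphens are written as \x2d. A simplified sketch that handles only the characters appearing in these unit names:

    # Simplified systemd path escaping (alphanumerics and "-" only, which is
    # all that appears in the /run/netns/cni-<id> paths cleaned up above).
    def systemd_escape_path(path: str) -> str:
        parts = [p.replace("-", "\\x2d") for p in path.strip("/").split("/") if p]
        return "-".join(parts)

    print(systemd_escape_path("/run/netns/cni-a29af670-c04b-383b-01ec-ff02e0dcbbe2") + ".mount")
    # -> run-netns-cni\x2da29af670\x2dc04b\x2d383b\x2d01ec\x2dff02e0dcbbe2.mount
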
Jan 14 01:26:20.018622 containerd[1962]: time="2026-01-14T01:26:20.018580164Z" level=error msg="Failed to destroy network for sandbox \"f5d06789e11bc924ab4dc804c788b246235535d14d4d1982561a63a99c087fd1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:26:20.020184 containerd[1962]: time="2026-01-14T01:26:20.019704967Z" level=error msg="Failed to destroy network for sandbox \"a809e57f33aa523451949deb6561358e30b32c82f28be63af2890d98b2009531\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:26:20.029110 systemd[1]: run-netns-cni\x2d7c7e8a15\x2dba41\x2d1557\x2d888b\x2db8092d47d906.mount: Deactivated successfully. Jan 14 01:26:20.029268 systemd[1]: run-netns-cni\x2d9e65834b\x2d4dc6\x2d8860\x2d05cd\x2d6f2e53685fa3.mount: Deactivated successfully. Jan 14 01:26:20.034145 containerd[1962]: time="2026-01-14T01:26:20.034086497Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-84845f84c5-f7vsl,Uid:4fe5960b-c32f-4a52-8f07-a5dac29b6214,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f5d06789e11bc924ab4dc804c788b246235535d14d4d1982561a63a99c087fd1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:26:20.034783 kubelet[3571]: E0114 01:26:20.034741 3571 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f5d06789e11bc924ab4dc804c788b246235535d14d4d1982561a63a99c087fd1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:26:20.034891 kubelet[3571]: E0114 01:26:20.034812 3571 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f5d06789e11bc924ab4dc804c788b246235535d14d4d1982561a63a99c087fd1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-84845f84c5-f7vsl" Jan 14 01:26:20.034891 kubelet[3571]: E0114 01:26:20.034843 3571 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f5d06789e11bc924ab4dc804c788b246235535d14d4d1982561a63a99c087fd1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-84845f84c5-f7vsl" Jan 14 01:26:20.034986 kubelet[3571]: E0114 01:26:20.034907 3571 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-84845f84c5-f7vsl_calico-system(4fe5960b-c32f-4a52-8f07-a5dac29b6214)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-84845f84c5-f7vsl_calico-system(4fe5960b-c32f-4a52-8f07-a5dac29b6214)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"f5d06789e11bc924ab4dc804c788b246235535d14d4d1982561a63a99c087fd1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-84845f84c5-f7vsl" podUID="4fe5960b-c32f-4a52-8f07-a5dac29b6214" Jan 14 01:26:20.038817 containerd[1962]: time="2026-01-14T01:26:20.038687917Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-5mdck,Uid:993f578d-b707-42bd-b6e9-14c5aa23a03f,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a809e57f33aa523451949deb6561358e30b32c82f28be63af2890d98b2009531\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:26:20.039114 kubelet[3571]: E0114 01:26:20.038953 3571 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a809e57f33aa523451949deb6561358e30b32c82f28be63af2890d98b2009531\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:26:20.039114 kubelet[3571]: E0114 01:26:20.039014 3571 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a809e57f33aa523451949deb6561358e30b32c82f28be63af2890d98b2009531\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-5mdck" Jan 14 01:26:20.039114 kubelet[3571]: E0114 01:26:20.039041 3571 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a809e57f33aa523451949deb6561358e30b32c82f28be63af2890d98b2009531\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-5mdck" Jan 14 01:26:20.039271 kubelet[3571]: E0114 01:26:20.039103 3571 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-5mdck_calico-system(993f578d-b707-42bd-b6e9-14c5aa23a03f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-5mdck_calico-system(993f578d-b707-42bd-b6e9-14c5aa23a03f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a809e57f33aa523451949deb6561358e30b32c82f28be63af2890d98b2009531\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-5mdck" podUID="993f578d-b707-42bd-b6e9-14c5aa23a03f" Jan 14 01:26:20.051740 containerd[1962]: time="2026-01-14T01:26:20.051682855Z" level=error msg="Failed to destroy network for sandbox \"6feb8f28ddd2796cfe5b68e4012dc15f7c13f25d54a379d146b5b776a9004b9e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
Jan 14 01:26:20.054839 containerd[1962]: time="2026-01-14T01:26:20.054765505Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-k7zqh,Uid:2a03d66f-dd24-454c-84df-6cece7cb808c,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6feb8f28ddd2796cfe5b68e4012dc15f7c13f25d54a379d146b5b776a9004b9e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:26:20.055413 kubelet[3571]: E0114 01:26:20.055369 3571 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6feb8f28ddd2796cfe5b68e4012dc15f7c13f25d54a379d146b5b776a9004b9e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:26:20.055592 kubelet[3571]: E0114 01:26:20.055443 3571 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6feb8f28ddd2796cfe5b68e4012dc15f7c13f25d54a379d146b5b776a9004b9e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-k7zqh" Jan 14 01:26:20.055592 kubelet[3571]: E0114 01:26:20.055468 3571 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6feb8f28ddd2796cfe5b68e4012dc15f7c13f25d54a379d146b5b776a9004b9e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-k7zqh" Jan 14 01:26:20.056130 kubelet[3571]: E0114 01:26:20.055904 3571 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-k7zqh_kube-system(2a03d66f-dd24-454c-84df-6cece7cb808c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-k7zqh_kube-system(2a03d66f-dd24-454c-84df-6cece7cb808c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6feb8f28ddd2796cfe5b68e4012dc15f7c13f25d54a379d146b5b776a9004b9e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-k7zqh" podUID="2a03d66f-dd24-454c-84df-6cece7cb808c" Jan 14 01:26:20.059708 containerd[1962]: time="2026-01-14T01:26:20.059577235Z" level=error msg="Failed to destroy network for sandbox \"5248431fb7f7427b318f5849f62d33689006140c729ec5377d1552ddacabe5a6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:26:20.059708 containerd[1962]: time="2026-01-14T01:26:20.059630933Z" level=error msg="Failed to destroy network for sandbox \"ccbeeb7da5ae723b71099edee2ec1b5c74164bb70691ae383fc414542a60d553\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" Jan 14 01:26:20.062606 containerd[1962]: time="2026-01-14T01:26:20.062498504Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-2rswm,Uid:257e1b40-fca0-492c-87be-45f394e92bdc,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5248431fb7f7427b318f5849f62d33689006140c729ec5377d1552ddacabe5a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:26:20.063061 kubelet[3571]: E0114 01:26:20.063023 3571 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5248431fb7f7427b318f5849f62d33689006140c729ec5377d1552ddacabe5a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:26:20.063246 kubelet[3571]: E0114 01:26:20.063097 3571 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5248431fb7f7427b318f5849f62d33689006140c729ec5377d1552ddacabe5a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-2rswm" Jan 14 01:26:20.063246 kubelet[3571]: E0114 01:26:20.063119 3571 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5248431fb7f7427b318f5849f62d33689006140c729ec5377d1552ddacabe5a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-2rswm" Jan 14 01:26:20.063246 kubelet[3571]: E0114 01:26:20.063191 3571 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-2rswm_kube-system(257e1b40-fca0-492c-87be-45f394e92bdc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-2rswm_kube-system(257e1b40-fca0-492c-87be-45f394e92bdc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5248431fb7f7427b318f5849f62d33689006140c729ec5377d1552ddacabe5a6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-2rswm" podUID="257e1b40-fca0-492c-87be-45f394e92bdc" Jan 14 01:26:20.064742 containerd[1962]: time="2026-01-14T01:26:20.064684814Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-574d5c8798-9hhk4,Uid:525cee0b-846b-43ba-9b3e-e192ccc373a0,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ccbeeb7da5ae723b71099edee2ec1b5c74164bb70691ae383fc414542a60d553\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:26:20.065209 kubelet[3571]: E0114 01:26:20.065082 3571 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: 
code = Unknown desc = failed to setup network for sandbox \"ccbeeb7da5ae723b71099edee2ec1b5c74164bb70691ae383fc414542a60d553\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:26:20.065403 kubelet[3571]: E0114 01:26:20.065343 3571 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ccbeeb7da5ae723b71099edee2ec1b5c74164bb70691ae383fc414542a60d553\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-574d5c8798-9hhk4" Jan 14 01:26:20.066074 kubelet[3571]: E0114 01:26:20.065655 3571 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ccbeeb7da5ae723b71099edee2ec1b5c74164bb70691ae383fc414542a60d553\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-574d5c8798-9hhk4" Jan 14 01:26:20.066074 kubelet[3571]: E0114 01:26:20.065762 3571 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-574d5c8798-9hhk4_calico-apiserver(525cee0b-846b-43ba-9b3e-e192ccc373a0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-574d5c8798-9hhk4_calico-apiserver(525cee0b-846b-43ba-9b3e-e192ccc373a0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ccbeeb7da5ae723b71099edee2ec1b5c74164bb70691ae383fc414542a60d553\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-574d5c8798-9hhk4" podUID="525cee0b-846b-43ba-9b3e-e192ccc373a0" Jan 14 01:26:20.142697 systemd[1]: Created slice kubepods-besteffort-pod74b84cdc_323d_4b42_b95a_ceec7dfaa40f.slice - libcontainer container kubepods-besteffort-pod74b84cdc_323d_4b42_b95a_ceec7dfaa40f.slice. 
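
Note (annotation, not part of the journal): every sandbox failure above reduces to the same precondition: the Calico CNI plugin cannot find /var/lib/calico/nodename, a file the calico/node container writes once it is running, so pod networking stays down until the calico-node pod (whose image is still being pulled at this point) becomes ready. A minimal Python sketch of the check the plugin is effectively performing; the helper name is hypothetical and this is not Calico source code:

    # Hypothetical helper mirroring the failing precondition in the errors above.
    import os

    NODENAME_FILE = "/var/lib/calico/nodename"

    def read_calico_nodename() -> str:
        if not os.path.exists(NODENAME_FILE):
            raise RuntimeError(
                f"stat {NODENAME_FILE}: no such file or directory: "
                "check that the calico/node container is running and has "
                "mounted /var/lib/calico/"
            )
        with open(NODENAME_FILE) as f:
            return f.read().strip()
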
Jan 14 01:26:20.145521 containerd[1962]: time="2026-01-14T01:26:20.145428781Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zrh2z,Uid:74b84cdc-323d-4b42-b95a-ceec7dfaa40f,Namespace:calico-system,Attempt:0,}" Jan 14 01:26:20.227259 containerd[1962]: time="2026-01-14T01:26:20.227125321Z" level=error msg="Failed to destroy network for sandbox \"d8ea8fa6df105dd0ecc143e8d64caf694095fe85eebd0f604c1aa9714ce83afe\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:26:20.231098 containerd[1962]: time="2026-01-14T01:26:20.231035871Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zrh2z,Uid:74b84cdc-323d-4b42-b95a-ceec7dfaa40f,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d8ea8fa6df105dd0ecc143e8d64caf694095fe85eebd0f604c1aa9714ce83afe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:26:20.231730 kubelet[3571]: E0114 01:26:20.231324 3571 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d8ea8fa6df105dd0ecc143e8d64caf694095fe85eebd0f604c1aa9714ce83afe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:26:20.231730 kubelet[3571]: E0114 01:26:20.231383 3571 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d8ea8fa6df105dd0ecc143e8d64caf694095fe85eebd0f604c1aa9714ce83afe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zrh2z" Jan 14 01:26:20.231730 kubelet[3571]: E0114 01:26:20.231413 3571 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d8ea8fa6df105dd0ecc143e8d64caf694095fe85eebd0f604c1aa9714ce83afe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zrh2z" Jan 14 01:26:20.232136 kubelet[3571]: E0114 01:26:20.231480 3571 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-zrh2z_calico-system(74b84cdc-323d-4b42-b95a-ceec7dfaa40f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-zrh2z_calico-system(74b84cdc-323d-4b42-b95a-ceec7dfaa40f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d8ea8fa6df105dd0ecc143e8d64caf694095fe85eebd0f604c1aa9714ce83afe\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-zrh2z" podUID="74b84cdc-323d-4b42-b95a-ceec7dfaa40f" Jan 14 01:26:20.775161 systemd[1]: run-netns-cni\x2d395986b4\x2d0ff3\x2d05e5\x2d0e3f\x2d558b64bb6bef.mount: Deactivated successfully. 
Jan 14 01:26:20.775334 systemd[1]: run-netns-cni\x2d2e469ca6\x2d6faa\x2d6564\x2d5e7d\x2d7ef30c7d7fca.mount: Deactivated successfully. Jan 14 01:26:20.775588 systemd[1]: run-netns-cni\x2d38267a24\x2d1609\x2df665\x2da7e3\x2d519dd9023968.mount: Deactivated successfully. Jan 14 01:26:28.759686 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3210436951.mount: Deactivated successfully. Jan 14 01:26:28.797607 containerd[1962]: time="2026-01-14T01:26:28.797374808Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:26:28.812770 containerd[1962]: time="2026-01-14T01:26:28.812698372Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156880025" Jan 14 01:26:28.835541 containerd[1962]: time="2026-01-14T01:26:28.835496336Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:26:28.838661 containerd[1962]: time="2026-01-14T01:26:28.838538566Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:26:28.839233 containerd[1962]: time="2026-01-14T01:26:28.839205704Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 9.332714269s" Jan 14 01:26:28.839337 containerd[1962]: time="2026-01-14T01:26:28.839324244Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Jan 14 01:26:28.904293 containerd[1962]: time="2026-01-14T01:26:28.904251106Z" level=info msg="CreateContainer within sandbox \"06b03833ff7af60b9c8415347e7379f69e68889c8b9f529059fffcc6b64627c5\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 14 01:26:29.026806 containerd[1962]: time="2026-01-14T01:26:29.024696780Z" level=info msg="Container bae4ae4438244016567a0ade4703ace7c60afc7bee1655182251993c7441fec4: CDI devices from CRI Config.CDIDevices: []" Jan 14 01:26:29.026334 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount948373137.mount: Deactivated successfully. 
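
Note (annotation, not part of the journal): the pull report above gives both a byte count and a wall-clock duration, so the effective transfer rate can be sanity-checked directly. A quick arithmetic sketch, assuming the reported size approximates the amount actually transferred:

    # Effective throughput of the calico/node v3.30.4 pull reported above.
    size_bytes = 156_883_537      # "size" in the Pulled image message
    duration_s = 9.332714269      # "in 9.332714269s"
    print(f"{size_bytes / duration_s / 1e6:.1f} MB/s")  # ~16.8 MB/s
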
Jan 14 01:26:29.065221 containerd[1962]: time="2026-01-14T01:26:29.065166275Z" level=info msg="CreateContainer within sandbox \"06b03833ff7af60b9c8415347e7379f69e68889c8b9f529059fffcc6b64627c5\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"bae4ae4438244016567a0ade4703ace7c60afc7bee1655182251993c7441fec4\"" Jan 14 01:26:29.066587 containerd[1962]: time="2026-01-14T01:26:29.066176488Z" level=info msg="StartContainer for \"bae4ae4438244016567a0ade4703ace7c60afc7bee1655182251993c7441fec4\"" Jan 14 01:26:29.072767 containerd[1962]: time="2026-01-14T01:26:29.072729533Z" level=info msg="connecting to shim bae4ae4438244016567a0ade4703ace7c60afc7bee1655182251993c7441fec4" address="unix:///run/containerd/s/21f67572a10b776ac80f2d677cbaeceef64d94a6ba9c84d7610fae711862653f" protocol=ttrpc version=3 Jan 14 01:26:29.229917 systemd[1]: Started cri-containerd-bae4ae4438244016567a0ade4703ace7c60afc7bee1655182251993c7441fec4.scope - libcontainer container bae4ae4438244016567a0ade4703ace7c60afc7bee1655182251993c7441fec4. Jan 14 01:26:29.318596 kernel: kauditd_printk_skb: 6 callbacks suppressed Jan 14 01:26:29.318739 kernel: audit: type=1334 audit(1768353989.314:588): prog-id=179 op=LOAD Jan 14 01:26:29.318771 kernel: audit: type=1300 audit(1768353989.314:588): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000214488 a2=98 a3=0 items=0 ppid=4068 pid=4617 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:29.314000 audit: BPF prog-id=179 op=LOAD Jan 14 01:26:29.383139 kernel: audit: type=1327 audit(1768353989.314:588): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6261653461653434333832343430313635363761306164653437303361 Jan 14 01:26:29.383340 kernel: audit: type=1334 audit(1768353989.314:589): prog-id=180 op=LOAD Jan 14 01:26:29.383393 kernel: audit: type=1300 audit(1768353989.314:589): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000214218 a2=98 a3=0 items=0 ppid=4068 pid=4617 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:29.383421 kernel: audit: type=1327 audit(1768353989.314:589): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6261653461653434333832343430313635363761306164653437303361 Jan 14 01:26:29.383468 kernel: audit: type=1334 audit(1768353989.314:590): prog-id=180 op=UNLOAD Jan 14 01:26:29.383492 kernel: audit: type=1300 audit(1768353989.314:590): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4068 pid=4617 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:29.385721 kernel: audit: type=1327 audit(1768353989.314:590): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6261653461653434333832343430313635363761306164653437303361 Jan 14 
01:26:29.385782 kernel: audit: type=1334 audit(1768353989.314:591): prog-id=179 op=UNLOAD Jan 14 01:26:29.314000 audit[4617]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000214488 a2=98 a3=0 items=0 ppid=4068 pid=4617 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:29.314000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6261653461653434333832343430313635363761306164653437303361 Jan 14 01:26:29.314000 audit: BPF prog-id=180 op=LOAD Jan 14 01:26:29.314000 audit[4617]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000214218 a2=98 a3=0 items=0 ppid=4068 pid=4617 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:29.314000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6261653461653434333832343430313635363761306164653437303361 Jan 14 01:26:29.314000 audit: BPF prog-id=180 op=UNLOAD Jan 14 01:26:29.314000 audit[4617]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4068 pid=4617 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:29.314000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6261653461653434333832343430313635363761306164653437303361 Jan 14 01:26:29.314000 audit: BPF prog-id=179 op=UNLOAD Jan 14 01:26:29.314000 audit[4617]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4068 pid=4617 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:29.314000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6261653461653434333832343430313635363761306164653437303361 Jan 14 01:26:29.314000 audit: BPF prog-id=181 op=LOAD Jan 14 01:26:29.314000 audit[4617]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0002146e8 a2=98 a3=0 items=0 ppid=4068 pid=4617 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:29.314000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6261653461653434333832343430313635363761306164653437303361 Jan 14 01:26:29.389068 containerd[1962]: time="2026-01-14T01:26:29.389021612Z" level=info msg="StartContainer for 
\"bae4ae4438244016567a0ade4703ace7c60afc7bee1655182251993c7441fec4\" returns successfully" Jan 14 01:26:29.680054 kubelet[3571]: I0114 01:26:29.675988 3571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-6rztb" podStartSLOduration=2.059115225 podStartE2EDuration="24.674729921s" podCreationTimestamp="2026-01-14 01:26:05 +0000 UTC" firstStartedPulling="2026-01-14 01:26:06.224587346 +0000 UTC m=+58.312875480" lastFinishedPulling="2026-01-14 01:26:28.840202043 +0000 UTC m=+80.928490176" observedRunningTime="2026-01-14 01:26:29.663276617 +0000 UTC m=+81.751564765" watchObservedRunningTime="2026-01-14 01:26:29.674729921 +0000 UTC m=+81.763018076" Jan 14 01:26:31.129939 containerd[1962]: time="2026-01-14T01:26:31.129867954Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-2rswm,Uid:257e1b40-fca0-492c-87be-45f394e92bdc,Namespace:kube-system,Attempt:0,}" Jan 14 01:26:31.130289 containerd[1962]: time="2026-01-14T01:26:31.129874519Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-5mdck,Uid:993f578d-b707-42bd-b6e9-14c5aa23a03f,Namespace:calico-system,Attempt:0,}" Jan 14 01:26:31.252845 containerd[1962]: time="2026-01-14T01:26:31.252694948Z" level=error msg="Failed to destroy network for sandbox \"1b4c8cd101de0b3195dc26238ad1f0f99e8a853633bb987f703e07da38691db4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:26:31.257298 systemd[1]: run-netns-cni\x2d013e4327\x2de6b6\x2df77b\x2ddf82\x2d0b1ca812d6dc.mount: Deactivated successfully. Jan 14 01:26:31.259362 containerd[1962]: time="2026-01-14T01:26:31.258150903Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-2rswm,Uid:257e1b40-fca0-492c-87be-45f394e92bdc,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1b4c8cd101de0b3195dc26238ad1f0f99e8a853633bb987f703e07da38691db4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:26:31.259543 kubelet[3571]: E0114 01:26:31.258422 3571 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1b4c8cd101de0b3195dc26238ad1f0f99e8a853633bb987f703e07da38691db4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:26:31.259543 kubelet[3571]: E0114 01:26:31.258510 3571 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1b4c8cd101de0b3195dc26238ad1f0f99e8a853633bb987f703e07da38691db4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-2rswm" Jan 14 01:26:31.259543 kubelet[3571]: E0114 01:26:31.258539 3571 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1b4c8cd101de0b3195dc26238ad1f0f99e8a853633bb987f703e07da38691db4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: 
no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-2rswm" Jan 14 01:26:31.260355 kubelet[3571]: E0114 01:26:31.258668 3571 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-2rswm_kube-system(257e1b40-fca0-492c-87be-45f394e92bdc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-2rswm_kube-system(257e1b40-fca0-492c-87be-45f394e92bdc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1b4c8cd101de0b3195dc26238ad1f0f99e8a853633bb987f703e07da38691db4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-2rswm" podUID="257e1b40-fca0-492c-87be-45f394e92bdc" Jan 14 01:26:31.268133 containerd[1962]: time="2026-01-14T01:26:31.268085421Z" level=error msg="Failed to destroy network for sandbox \"83b965bfc5b3b2e374693538524ec6982036859be4e5da80e2b76a64d40581cc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:26:31.271828 systemd[1]: run-netns-cni\x2d486b183c\x2d07e1\x2dee3d\x2d8c08\x2d035dfc1725d4.mount: Deactivated successfully. Jan 14 01:26:31.272544 containerd[1962]: time="2026-01-14T01:26:31.272499212Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-5mdck,Uid:993f578d-b707-42bd-b6e9-14c5aa23a03f,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"83b965bfc5b3b2e374693538524ec6982036859be4e5da80e2b76a64d40581cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:26:31.274324 kubelet[3571]: E0114 01:26:31.274269 3571 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"83b965bfc5b3b2e374693538524ec6982036859be4e5da80e2b76a64d40581cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:26:31.274431 kubelet[3571]: E0114 01:26:31.274334 3571 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"83b965bfc5b3b2e374693538524ec6982036859be4e5da80e2b76a64d40581cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-5mdck" Jan 14 01:26:31.274431 kubelet[3571]: E0114 01:26:31.274361 3571 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"83b965bfc5b3b2e374693538524ec6982036859be4e5da80e2b76a64d40581cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-5mdck" Jan 14 01:26:31.274517 kubelet[3571]: E0114 01:26:31.274427 3571 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-5mdck_calico-system(993f578d-b707-42bd-b6e9-14c5aa23a03f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-5mdck_calico-system(993f578d-b707-42bd-b6e9-14c5aa23a03f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"83b965bfc5b3b2e374693538524ec6982036859be4e5da80e2b76a64d40581cc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-5mdck" podUID="993f578d-b707-42bd-b6e9-14c5aa23a03f" Jan 14 01:26:31.604158 kubelet[3571]: I0114 01:26:31.604123 3571 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 14 01:26:32.130722 containerd[1962]: time="2026-01-14T01:26:32.130408035Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-574d5c8798-9hhk4,Uid:525cee0b-846b-43ba-9b3e-e192ccc373a0,Namespace:calico-apiserver,Attempt:0,}" Jan 14 01:26:32.131338 containerd[1962]: time="2026-01-14T01:26:32.131285427Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zrh2z,Uid:74b84cdc-323d-4b42-b95a-ceec7dfaa40f,Namespace:calico-system,Attempt:0,}" Jan 14 01:26:32.237151 containerd[1962]: time="2026-01-14T01:26:32.237094482Z" level=error msg="Failed to destroy network for sandbox \"73967eae553d7eecba06e46e4d87b6b1a206925f046a308418cfe2057460a31b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:26:32.239890 containerd[1962]: time="2026-01-14T01:26:32.239135524Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-574d5c8798-9hhk4,Uid:525cee0b-846b-43ba-9b3e-e192ccc373a0,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"73967eae553d7eecba06e46e4d87b6b1a206925f046a308418cfe2057460a31b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:26:32.240077 kubelet[3571]: E0114 01:26:32.239418 3571 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"73967eae553d7eecba06e46e4d87b6b1a206925f046a308418cfe2057460a31b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:26:32.240077 kubelet[3571]: E0114 01:26:32.239479 3571 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"73967eae553d7eecba06e46e4d87b6b1a206925f046a308418cfe2057460a31b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-574d5c8798-9hhk4" Jan 14 01:26:32.240077 kubelet[3571]: E0114 01:26:32.239511 3571 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"73967eae553d7eecba06e46e4d87b6b1a206925f046a308418cfe2057460a31b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-574d5c8798-9hhk4" Jan 14 01:26:32.243590 kubelet[3571]: E0114 01:26:32.239599 3571 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-574d5c8798-9hhk4_calico-apiserver(525cee0b-846b-43ba-9b3e-e192ccc373a0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-574d5c8798-9hhk4_calico-apiserver(525cee0b-846b-43ba-9b3e-e192ccc373a0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"73967eae553d7eecba06e46e4d87b6b1a206925f046a308418cfe2057460a31b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-574d5c8798-9hhk4" podUID="525cee0b-846b-43ba-9b3e-e192ccc373a0" Jan 14 01:26:32.246924 systemd[1]: run-netns-cni\x2dde43655d\x2d27e5\x2db942\x2d9179\x2df186f8c8c452.mount: Deactivated successfully. Jan 14 01:26:32.250412 containerd[1962]: time="2026-01-14T01:26:32.250345008Z" level=error msg="Failed to destroy network for sandbox \"115143b91b1902d5c54dad2fe750bcedc284ed4458876fe5a46dfa04d975c3ff\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:26:32.254686 containerd[1962]: time="2026-01-14T01:26:32.253156846Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zrh2z,Uid:74b84cdc-323d-4b42-b95a-ceec7dfaa40f,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"115143b91b1902d5c54dad2fe750bcedc284ed4458876fe5a46dfa04d975c3ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:26:32.254816 kubelet[3571]: E0114 01:26:32.253591 3571 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"115143b91b1902d5c54dad2fe750bcedc284ed4458876fe5a46dfa04d975c3ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:26:32.254816 kubelet[3571]: E0114 01:26:32.253669 3571 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"115143b91b1902d5c54dad2fe750bcedc284ed4458876fe5a46dfa04d975c3ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zrh2z" Jan 14 01:26:32.254816 kubelet[3571]: E0114 01:26:32.253751 3571 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"115143b91b1902d5c54dad2fe750bcedc284ed4458876fe5a46dfa04d975c3ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that 
the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zrh2z" Jan 14 01:26:32.254965 kubelet[3571]: E0114 01:26:32.253842 3571 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-zrh2z_calico-system(74b84cdc-323d-4b42-b95a-ceec7dfaa40f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-zrh2z_calico-system(74b84cdc-323d-4b42-b95a-ceec7dfaa40f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"115143b91b1902d5c54dad2fe750bcedc284ed4458876fe5a46dfa04d975c3ff\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-zrh2z" podUID="74b84cdc-323d-4b42-b95a-ceec7dfaa40f" Jan 14 01:26:32.255906 systemd[1]: run-netns-cni\x2d22cb8107\x2d2514\x2da8db\x2d558e\x2d1b0244c9adb7.mount: Deactivated successfully. Jan 14 01:26:33.129646 containerd[1962]: time="2026-01-14T01:26:33.129599957Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5799fb7c8b-6l2xc,Uid:89c07bdd-9dad-4c41-8dfe-3de894f6f743,Namespace:calico-system,Attempt:0,}" Jan 14 01:26:33.210705 containerd[1962]: time="2026-01-14T01:26:33.210647722Z" level=error msg="Failed to destroy network for sandbox \"09a2743f136a442d366108062234db7c719aa208c743ab840ec16a8173c4f881\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:26:33.215774 systemd[1]: run-netns-cni\x2d7e7ab45e\x2d0fde\x2d88a1\x2d9c52\x2db1edebe3893f.mount: Deactivated successfully. 
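The kubelet pod_startup_latency_tracker entry for calico-node-6rztb near the top of this stretch reports a podStartE2EDuration of roughly 24.67s but a podStartSLOduration of only about 2.06s; the SLO figure appears to be the end-to-end time with the image-pull window (firstStartedPulling to lastFinishedPulling) subtracted. A rough sketch of that arithmetic using the timestamps from the log entry (illustrative only, not kubelet's implementation; nanoseconds truncated to microseconds):

    from datetime import datetime

    created   = datetime(2026, 1, 14, 1, 26, 5)            # podCreationTimestamp
    pull_from = datetime(2026, 1, 14, 1, 26, 6, 224587)    # firstStartedPulling
    pull_to   = datetime(2026, 1, 14, 1, 26, 28, 840202)   # lastFinishedPulling
    observed  = datetime(2026, 1, 14, 1, 26, 29, 674729)   # watchObservedRunningTime

    e2e = (observed - created).total_seconds()              # ~24.675s, matches podStartE2EDuration
    slo = e2e - (pull_to - pull_from).total_seconds()       # ~2.059s,  matches podStartSLOduration
    print(f"e2e={e2e:.3f}s  slo={slo:.3f}s")
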
Jan 14 01:26:33.220354 containerd[1962]: time="2026-01-14T01:26:33.220294070Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5799fb7c8b-6l2xc,Uid:89c07bdd-9dad-4c41-8dfe-3de894f6f743,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"09a2743f136a442d366108062234db7c719aa208c743ab840ec16a8173c4f881\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:26:33.223802 kubelet[3571]: E0114 01:26:33.220568 3571 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"09a2743f136a442d366108062234db7c719aa208c743ab840ec16a8173c4f881\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:26:33.279902 kubelet[3571]: E0114 01:26:33.278053 3571 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"09a2743f136a442d366108062234db7c719aa208c743ab840ec16a8173c4f881\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5799fb7c8b-6l2xc" Jan 14 01:26:33.279902 kubelet[3571]: E0114 01:26:33.278138 3571 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"09a2743f136a442d366108062234db7c719aa208c743ab840ec16a8173c4f881\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5799fb7c8b-6l2xc" Jan 14 01:26:33.279902 kubelet[3571]: E0114 01:26:33.278230 3571 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5799fb7c8b-6l2xc_calico-system(89c07bdd-9dad-4c41-8dfe-3de894f6f743)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5799fb7c8b-6l2xc_calico-system(89c07bdd-9dad-4c41-8dfe-3de894f6f743)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"09a2743f136a442d366108062234db7c719aa208c743ab840ec16a8173c4f881\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5799fb7c8b-6l2xc" podUID="89c07bdd-9dad-4c41-8dfe-3de894f6f743" Jan 14 01:26:33.538215 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 14 01:26:33.538953 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
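Every RunPodSandbox failure in this stretch has the same root cause: the Calico CNI plugin stats /var/lib/calico/nodename, a file that calico/node writes only once it is running, and at this point the calico-node pod has only just come up (its image pull finished around 01:26:28). A minimal sketch of the precondition the plugin keeps reporting, with the path taken from the error text (illustrative, not the plugin's actual code):

    import sys

    NODENAME = "/var/lib/calico/nodename"   # written by calico/node, host-mounted for the CNI plugin

    try:
        with open(NODENAME) as f:
            print("calico/node has registered this host as:", f.read().strip())
    except FileNotFoundError:
        # The condition behind "stat /var/lib/calico/nodename: no such file or directory"
        sys.exit(f"{NODENAME} not present yet - check that the calico-node pod is Running")
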
Jan 14 01:26:34.133057 containerd[1962]: time="2026-01-14T01:26:34.132134273Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-k7zqh,Uid:2a03d66f-dd24-454c-84df-6cece7cb808c,Namespace:kube-system,Attempt:0,}" Jan 14 01:26:34.133690 containerd[1962]: time="2026-01-14T01:26:34.133644953Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-84845f84c5-f7vsl,Uid:4fe5960b-c32f-4a52-8f07-a5dac29b6214,Namespace:calico-system,Attempt:0,}" Jan 14 01:26:35.130700 containerd[1962]: time="2026-01-14T01:26:35.130622727Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-65b9745b8-fxksz,Uid:56394477-d28d-42eb-bee5-a9a20263c11f,Namespace:calico-apiserver,Attempt:0,}" Jan 14 01:26:35.135625 containerd[1962]: time="2026-01-14T01:26:35.134076892Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-574d5c8798-6q7jw,Uid:4263d4be-fc9d-471e-8df9-42f06716a4f0,Namespace:calico-apiserver,Attempt:0,}" Jan 14 01:26:36.322511 kernel: kauditd_printk_skb: 5 callbacks suppressed Jan 14 01:26:36.323493 kernel: audit: type=1334 audit(1768353996.318:593): prog-id=182 op=LOAD Jan 14 01:26:36.324345 kernel: audit: type=1300 audit(1768353996.318:593): arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe663869b0 a2=98 a3=1fffffffffffffff items=0 ppid=4950 pid=5040 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:36.318000 audit: BPF prog-id=182 op=LOAD Jan 14 01:26:36.353238 kernel: audit: type=1327 audit(1768353996.318:593): proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 14 01:26:36.353864 kernel: audit: type=1334 audit(1768353996.318:594): prog-id=182 op=UNLOAD Jan 14 01:26:36.353901 kernel: audit: type=1300 audit(1768353996.318:594): arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffe66386980 a3=0 items=0 ppid=4950 pid=5040 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:36.318000 audit[5040]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe663869b0 a2=98 a3=1fffffffffffffff items=0 ppid=4950 pid=5040 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:36.318000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 14 01:26:36.318000 audit: BPF prog-id=182 op=UNLOAD Jan 14 01:26:36.318000 audit[5040]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffe66386980 a3=0 items=0 ppid=4950 pid=5040 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:36.358433 kernel: audit: type=1327 audit(1768353996.318:594): 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 14 01:26:36.318000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 14 01:26:36.318000 audit: BPF prog-id=183 op=LOAD Jan 14 01:26:36.363126 kernel: audit: type=1334 audit(1768353996.318:595): prog-id=183 op=LOAD Jan 14 01:26:36.318000 audit[5040]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe66386890 a2=94 a3=3 items=0 ppid=4950 pid=5040 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:36.366191 kernel: audit: type=1300 audit(1768353996.318:595): arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe66386890 a2=94 a3=3 items=0 ppid=4950 pid=5040 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:36.318000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 14 01:26:36.372440 kernel: audit: type=1327 audit(1768353996.318:595): proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 14 01:26:36.318000 audit: BPF prog-id=183 op=UNLOAD Jan 14 01:26:36.376853 kernel: audit: type=1334 audit(1768353996.318:596): prog-id=183 op=UNLOAD Jan 14 01:26:36.318000 audit[5040]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffe66386890 a2=94 a3=3 items=0 ppid=4950 pid=5040 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:36.318000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 14 01:26:36.318000 audit: BPF prog-id=184 op=LOAD Jan 14 01:26:36.318000 audit[5040]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe663868d0 a2=94 a3=7ffe66386ab0 items=0 ppid=4950 pid=5040 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:36.318000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 14 01:26:36.318000 
audit: BPF prog-id=184 op=UNLOAD Jan 14 01:26:36.318000 audit[5040]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffe663868d0 a2=94 a3=7ffe66386ab0 items=0 ppid=4950 pid=5040 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:36.318000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 14 01:26:36.324000 audit: BPF prog-id=185 op=LOAD Jan 14 01:26:36.324000 audit[5041]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffda79bba30 a2=98 a3=3 items=0 ppid=4950 pid=5041 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:36.324000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:26:36.324000 audit: BPF prog-id=185 op=UNLOAD Jan 14 01:26:36.324000 audit[5041]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffda79bba00 a3=0 items=0 ppid=4950 pid=5041 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:36.324000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:26:36.326000 audit: BPF prog-id=186 op=LOAD Jan 14 01:26:36.326000 audit[5041]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffda79bb820 a2=94 a3=54428f items=0 ppid=4950 pid=5041 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:36.326000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:26:36.326000 audit: BPF prog-id=186 op=UNLOAD Jan 14 01:26:36.326000 audit[5041]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffda79bb820 a2=94 a3=54428f items=0 ppid=4950 pid=5041 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:36.326000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:26:36.326000 audit: BPF prog-id=187 op=LOAD Jan 14 01:26:36.326000 audit[5041]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffda79bb850 a2=94 a3=2 items=0 ppid=4950 pid=5041 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:36.326000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:26:36.326000 audit: BPF prog-id=187 op=UNLOAD Jan 14 01:26:36.326000 audit[5041]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffda79bb850 a2=0 a3=2 items=0 ppid=4950 pid=5041 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:36.326000 audit: PROCTITLE 
proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:26:36.519000 audit: BPF prog-id=188 op=LOAD Jan 14 01:26:36.519000 audit[5041]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffda79bb710 a2=94 a3=1 items=0 ppid=4950 pid=5041 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:36.519000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:26:36.520000 audit: BPF prog-id=188 op=UNLOAD Jan 14 01:26:36.520000 audit[5041]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffda79bb710 a2=94 a3=1 items=0 ppid=4950 pid=5041 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:36.520000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:26:36.532000 audit: BPF prog-id=189 op=LOAD Jan 14 01:26:36.532000 audit[5041]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffda79bb700 a2=94 a3=4 items=0 ppid=4950 pid=5041 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:36.532000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:26:36.532000 audit: BPF prog-id=189 op=UNLOAD Jan 14 01:26:36.532000 audit[5041]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffda79bb700 a2=0 a3=4 items=0 ppid=4950 pid=5041 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:36.532000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:26:36.532000 audit: BPF prog-id=190 op=LOAD Jan 14 01:26:36.532000 audit[5041]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffda79bb560 a2=94 a3=5 items=0 ppid=4950 pid=5041 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:36.532000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:26:36.532000 audit: BPF prog-id=190 op=UNLOAD Jan 14 01:26:36.532000 audit[5041]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffda79bb560 a2=0 a3=5 items=0 ppid=4950 pid=5041 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:36.532000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:26:36.533000 audit: BPF prog-id=191 op=LOAD Jan 14 01:26:36.533000 audit[5041]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffda79bb780 a2=94 a3=6 items=0 ppid=4950 pid=5041 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:36.533000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:26:36.533000 audit: BPF prog-id=191 op=UNLOAD Jan 14 01:26:36.533000 audit[5041]: SYSCALL arch=c000003e syscall=3 
success=yes exit=0 a0=5 a1=7ffda79bb780 a2=0 a3=6 items=0 ppid=4950 pid=5041 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:36.533000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:26:36.533000 audit: BPF prog-id=192 op=LOAD Jan 14 01:26:36.533000 audit[5041]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffda79baf30 a2=94 a3=88 items=0 ppid=4950 pid=5041 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:36.533000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:26:36.533000 audit: BPF prog-id=193 op=LOAD Jan 14 01:26:36.533000 audit[5041]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7ffda79badb0 a2=94 a3=2 items=0 ppid=4950 pid=5041 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:36.533000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:26:36.533000 audit: BPF prog-id=193 op=UNLOAD Jan 14 01:26:36.533000 audit[5041]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7ffda79bade0 a2=0 a3=7ffda79baee0 items=0 ppid=4950 pid=5041 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:36.533000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:26:36.534000 audit: BPF prog-id=192 op=UNLOAD Jan 14 01:26:36.534000 audit[5041]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=3828d10 a2=0 a3=ffc92f69a476c871 items=0 ppid=4950 pid=5041 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:36.534000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:26:36.584000 audit: BPF prog-id=194 op=LOAD Jan 14 01:26:36.584000 audit[5044]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffdab6d3030 a2=98 a3=1999999999999999 items=0 ppid=4950 pid=5044 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:36.584000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 14 01:26:36.586000 audit: BPF prog-id=194 op=UNLOAD Jan 14 01:26:36.586000 audit[5044]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffdab6d3000 a3=0 items=0 ppid=4950 pid=5044 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:36.586000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 14 01:26:36.586000 audit: BPF prog-id=195 op=LOAD Jan 14 01:26:36.586000 audit[5044]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffdab6d2f10 a2=94 a3=ffff items=0 ppid=4950 pid=5044 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:36.586000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 14 01:26:36.586000 audit: BPF prog-id=195 op=UNLOAD Jan 14 01:26:36.586000 audit[5044]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffdab6d2f10 a2=94 a3=ffff items=0 ppid=4950 pid=5044 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:36.586000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 14 01:26:36.586000 audit: BPF prog-id=196 op=LOAD Jan 14 01:26:36.586000 audit[5044]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffdab6d2f50 a2=94 a3=7ffdab6d3130 items=0 ppid=4950 pid=5044 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:36.586000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 14 01:26:36.586000 audit: BPF prog-id=196 op=UNLOAD Jan 14 01:26:36.586000 audit[5044]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffdab6d2f50 a2=94 a3=7ffdab6d3130 items=0 ppid=4950 pid=5044 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:36.586000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 14 01:26:36.693514 (udev-worker)[5057]: Network interface NamePolicy= disabled on kernel command line. 
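The PROCTITLE fields in the audit records above are the process argv, hex-encoded with NUL bytes between arguments; decoding one shows the exact bpftool invocation that calico-node (felix) made while setting up its BPF maps. A small sketch, with the sample string copied from the first bpftool audit entry above:

    def decode_proctitle(hex_argv: str) -> str:
        """Audit PROCTITLE records argv hex-encoded, NUL-separated."""
        return " ".join(part.decode() for part in bytes.fromhex(hex_argv).split(b"\x00"))

    sample = ("627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62"
              "616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B65"
              "7900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70"
              "726F677300666C6167730030")
    print(decode_proctitle(sample))
    # -> bpftool map create /sys/fs/bpf/tc/globals/cali_ctlb_progs type prog_array key 4 value 4 entries 3 name cali_ctlb_progs flags 0
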
Jan 14 01:26:36.699207 systemd-networkd[1548]: vxlan.calico: Link UP Jan 14 01:26:36.703706 systemd-networkd[1548]: vxlan.calico: Gained carrier Jan 14 01:26:36.769000 audit: BPF prog-id=197 op=LOAD Jan 14 01:26:36.769000 audit[5075]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffeae6b24a0 a2=98 a3=0 items=0 ppid=4950 pid=5075 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:36.769000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 01:26:36.769000 audit: BPF prog-id=197 op=UNLOAD Jan 14 01:26:36.769000 audit[5075]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffeae6b2470 a3=0 items=0 ppid=4950 pid=5075 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:36.769000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 01:26:36.771421 (udev-worker)[5058]: Network interface NamePolicy= disabled on kernel command line. Jan 14 01:26:36.771461 (udev-worker)[5074]: Network interface NamePolicy= disabled on kernel command line. Jan 14 01:26:36.793000 audit: BPF prog-id=198 op=LOAD Jan 14 01:26:36.793000 audit[5075]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffeae6b22b0 a2=94 a3=54428f items=0 ppid=4950 pid=5075 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:36.793000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 01:26:36.793000 audit: BPF prog-id=198 op=UNLOAD Jan 14 01:26:36.793000 audit[5075]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffeae6b22b0 a2=94 a3=54428f items=0 ppid=4950 pid=5075 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:36.793000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 01:26:36.793000 audit: BPF prog-id=199 op=LOAD Jan 14 01:26:36.793000 audit[5075]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffeae6b22e0 a2=94 a3=2 items=0 ppid=4950 pid=5075 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:36.793000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 01:26:36.794000 audit: BPF prog-id=199 op=UNLOAD Jan 14 01:26:36.794000 audit[5075]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffeae6b22e0 a2=0 a3=2 items=0 ppid=4950 pid=5075 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:36.794000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 01:26:36.794000 audit: BPF prog-id=200 op=LOAD Jan 14 01:26:36.794000 audit[5075]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffeae6b2090 a2=94 a3=4 items=0 ppid=4950 pid=5075 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:36.794000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 01:26:36.794000 audit: BPF prog-id=200 op=UNLOAD Jan 14 01:26:36.794000 audit[5075]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffeae6b2090 a2=94 a3=4 items=0 ppid=4950 pid=5075 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:36.794000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 01:26:36.794000 audit: BPF prog-id=201 op=LOAD Jan 14 01:26:36.794000 audit[5075]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffeae6b2190 a2=94 a3=7ffeae6b2310 items=0 ppid=4950 pid=5075 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:36.794000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 01:26:36.794000 audit: BPF prog-id=201 op=UNLOAD Jan 14 01:26:36.794000 audit[5075]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffeae6b2190 a2=0 a3=7ffeae6b2310 items=0 ppid=4950 pid=5075 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:36.794000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 01:26:36.795000 audit: BPF prog-id=202 op=LOAD Jan 14 01:26:36.795000 audit[5075]: SYSCALL 
arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffeae6b18c0 a2=94 a3=2 items=0 ppid=4950 pid=5075 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:36.795000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 01:26:36.795000 audit: BPF prog-id=202 op=UNLOAD Jan 14 01:26:36.795000 audit[5075]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffeae6b18c0 a2=0 a3=2 items=0 ppid=4950 pid=5075 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:36.795000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 01:26:36.795000 audit: BPF prog-id=203 op=LOAD Jan 14 01:26:36.795000 audit[5075]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffeae6b19c0 a2=94 a3=30 items=0 ppid=4950 pid=5075 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:36.795000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 01:26:36.810000 audit: BPF prog-id=204 op=LOAD Jan 14 01:26:36.810000 audit[5082]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffdffafe9c0 a2=98 a3=0 items=0 ppid=4950 pid=5082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:36.810000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:26:36.810000 audit: BPF prog-id=204 op=UNLOAD Jan 14 01:26:36.810000 audit[5082]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffdffafe990 a3=0 items=0 ppid=4950 pid=5082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:36.810000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:26:36.810000 audit: BPF prog-id=205 op=LOAD Jan 14 01:26:36.810000 audit[5082]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffdffafe7b0 a2=94 a3=54428f items=0 ppid=4950 pid=5082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:36.810000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:26:36.810000 audit: BPF prog-id=205 op=UNLOAD Jan 14 01:26:36.810000 audit[5082]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffdffafe7b0 a2=94 a3=54428f items=0 ppid=4950 pid=5082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:36.810000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:26:36.810000 audit: BPF prog-id=206 op=LOAD Jan 14 01:26:36.810000 audit[5082]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffdffafe7e0 a2=94 a3=2 items=0 ppid=4950 pid=5082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:36.810000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:26:36.810000 audit: BPF prog-id=206 op=UNLOAD Jan 14 01:26:36.810000 audit[5082]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffdffafe7e0 a2=0 a3=2 items=0 ppid=4950 pid=5082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:36.810000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:26:36.991000 audit: BPF prog-id=207 op=LOAD Jan 14 01:26:36.991000 audit[5082]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffdffafe6a0 a2=94 a3=1 items=0 ppid=4950 pid=5082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:36.991000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:26:36.991000 audit: BPF prog-id=207 op=UNLOAD Jan 14 01:26:36.991000 audit[5082]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffdffafe6a0 a2=94 a3=1 items=0 ppid=4950 pid=5082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:36.991000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:26:37.008000 audit: BPF prog-id=208 op=LOAD Jan 14 01:26:37.008000 audit[5082]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffdffafe690 a2=94 a3=4 items=0 ppid=4950 pid=5082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:37.008000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:26:37.008000 audit: BPF prog-id=208 op=UNLOAD Jan 14 01:26:37.008000 audit[5082]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffdffafe690 a2=0 a3=4 items=0 ppid=4950 pid=5082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:37.008000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:26:37.009000 audit: BPF prog-id=209 op=LOAD Jan 14 01:26:37.009000 audit[5082]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffdffafe4f0 a2=94 a3=5 items=0 ppid=4950 pid=5082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:37.009000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:26:37.009000 audit: BPF prog-id=209 op=UNLOAD Jan 14 01:26:37.009000 audit[5082]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffdffafe4f0 a2=0 a3=5 items=0 ppid=4950 pid=5082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:37.009000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:26:37.009000 audit: BPF prog-id=210 op=LOAD Jan 14 01:26:37.009000 audit[5082]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffdffafe710 a2=94 a3=6 items=0 ppid=4950 pid=5082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:37.009000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:26:37.009000 audit: BPF prog-id=210 op=UNLOAD Jan 14 01:26:37.009000 audit[5082]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffdffafe710 a2=0 a3=6 items=0 ppid=4950 pid=5082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:37.009000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:26:37.009000 audit: BPF prog-id=211 op=LOAD Jan 14 01:26:37.009000 audit[5082]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffdffafdec0 
a2=94 a3=88 items=0 ppid=4950 pid=5082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:37.009000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:26:37.010000 audit: BPF prog-id=212 op=LOAD Jan 14 01:26:37.010000 audit[5082]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7ffdffafdd40 a2=94 a3=2 items=0 ppid=4950 pid=5082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:37.010000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:26:37.010000 audit: BPF prog-id=212 op=UNLOAD Jan 14 01:26:37.010000 audit[5082]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7ffdffafdd70 a2=0 a3=7ffdffafde70 items=0 ppid=4950 pid=5082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:37.010000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:26:37.010000 audit: BPF prog-id=211 op=UNLOAD Jan 14 01:26:37.010000 audit[5082]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=2b63cd10 a2=0 a3=990a3cdab66e2faa items=0 ppid=4950 pid=5082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:37.010000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:26:37.082000 audit: BPF prog-id=203 op=UNLOAD Jan 14 01:26:37.082000 audit[4950]: SYSCALL arch=c000003e syscall=263 success=yes exit=0 a0=ffffffffffffff9c a1=c000f4e140 a2=0 a3=0 items=0 ppid=4930 pid=4950 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="calico-node" exe="/usr/bin/calico-node" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:37.082000 audit: PROCTITLE proctitle=63616C69636F2D6E6F6465002D66656C6978 Jan 14 01:26:37.185000 audit[5106]: NETFILTER_CFG table=nat:121 family=2 entries=15 op=nft_register_chain pid=5106 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 01:26:37.185000 audit[5106]: SYSCALL arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7ffc8ef42470 a2=0 a3=7ffc8ef4245c items=0 ppid=4950 pid=5106 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:37.185000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 01:26:37.191000 
audit[5108]: NETFILTER_CFG table=mangle:122 family=2 entries=16 op=nft_register_chain pid=5108 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 01:26:37.191000 audit[5108]: SYSCALL arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7ffffa2b6260 a2=0 a3=7ffffa2b624c items=0 ppid=4950 pid=5108 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:37.191000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 01:26:37.203000 audit[5105]: NETFILTER_CFG table=raw:123 family=2 entries=21 op=nft_register_chain pid=5105 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 01:26:37.203000 audit[5105]: SYSCALL arch=c000003e syscall=46 success=yes exit=8452 a0=3 a1=7ffcfc8fd670 a2=0 a3=7ffcfc8fd65c items=0 ppid=4950 pid=5105 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:37.203000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 01:26:37.206000 audit[5111]: NETFILTER_CFG table=filter:124 family=2 entries=39 op=nft_register_chain pid=5111 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 01:26:37.206000 audit[5111]: SYSCALL arch=c000003e syscall=46 success=yes exit=18968 a0=3 a1=7fff127ca710 a2=0 a3=7fff127ca6fc items=0 ppid=4950 pid=5111 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:37.206000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 01:26:37.619908 systemd-networkd[1548]: cali1105652f90a: Link UP Jan 14 01:26:37.622452 systemd-networkd[1548]: cali1105652f90a: Gained carrier Jan 14 01:26:37.648986 containerd[1962]: 2026-01-14 01:26:35.279 [INFO][4898] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 14 01:26:37.648986 containerd[1962]: 2026-01-14 01:26:35.332 [INFO][4898] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--18--46-k8s-calico--apiserver--574d5c8798--6q7jw-eth0 calico-apiserver-574d5c8798- calico-apiserver 4263d4be-fc9d-471e-8df9-42f06716a4f0 938 0 2026-01-14 01:25:55 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:574d5c8798 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-18-46 calico-apiserver-574d5c8798-6q7jw eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali1105652f90a [] [] }} ContainerID="10d472fbee22d017515116909491fc8a1b7d4b369887c93dc436feedc5d47bc7" Namespace="calico-apiserver" Pod="calico-apiserver-574d5c8798-6q7jw" WorkloadEndpoint="ip--172--31--18--46-k8s-calico--apiserver--574d5c8798--6q7jw-" Jan 14 01:26:37.648986 containerd[1962]: 2026-01-14 
01:26:35.333 [INFO][4898] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="10d472fbee22d017515116909491fc8a1b7d4b369887c93dc436feedc5d47bc7" Namespace="calico-apiserver" Pod="calico-apiserver-574d5c8798-6q7jw" WorkloadEndpoint="ip--172--31--18--46-k8s-calico--apiserver--574d5c8798--6q7jw-eth0" Jan 14 01:26:37.648986 containerd[1962]: 2026-01-14 01:26:37.472 [INFO][4917] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="10d472fbee22d017515116909491fc8a1b7d4b369887c93dc436feedc5d47bc7" HandleID="k8s-pod-network.10d472fbee22d017515116909491fc8a1b7d4b369887c93dc436feedc5d47bc7" Workload="ip--172--31--18--46-k8s-calico--apiserver--574d5c8798--6q7jw-eth0" Jan 14 01:26:37.649765 containerd[1962]: 2026-01-14 01:26:37.474 [INFO][4917] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="10d472fbee22d017515116909491fc8a1b7d4b369887c93dc436feedc5d47bc7" HandleID="k8s-pod-network.10d472fbee22d017515116909491fc8a1b7d4b369887c93dc436feedc5d47bc7" Workload="ip--172--31--18--46-k8s-calico--apiserver--574d5c8798--6q7jw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003676b0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-18-46", "pod":"calico-apiserver-574d5c8798-6q7jw", "timestamp":"2026-01-14 01:26:37.472927439 +0000 UTC"}, Hostname:"ip-172-31-18-46", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 01:26:37.649765 containerd[1962]: 2026-01-14 01:26:37.474 [INFO][4917] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 01:26:37.649765 containerd[1962]: 2026-01-14 01:26:37.474 [INFO][4917] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
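The PROCTITLE field in the audit records above is the audited process's argv, hex-encoded with NUL separators between arguments; decoded, the bpftool records read "bpftool --json --pretty prog show pinned /sys/fs/bpf/calico/xdp/prefilter_v1_calico_tmp_A", and the xtables ones read "iptables-nft-restore --noflush --verbose --wait 10 --wait-interval 50000". A minimal decoding sketch (plain standard-library Python, not part of the log; the sample value is copied from the first bpftool record above):

    # decode_proctitle.py - turn an audit PROCTITLE hex string back into argv
    def decode_proctitle(hexstr: str) -> list[str]:
        raw = bytes.fromhex(hexstr)
        # argv elements are separated by NUL bytes in the audit record
        return [arg.decode("utf-8", "replace") for arg in raw.split(b"\x00") if arg]

    sample = ("627066746F6F6C002D2D6A736F6E002D2D70726574747900"
              "70726F670073686F770070696E6E656400"
              "2F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41")
    print(" ".join(decode_proctitle(sample)))
    # -> bpftool --json --pretty prog show pinned /sys/fs/bpf/calico/xdp/prefilter_v1_calico_tmp_A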
Jan 14 01:26:37.649765 containerd[1962]: 2026-01-14 01:26:37.475 [INFO][4917] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-18-46' Jan 14 01:26:37.649765 containerd[1962]: 2026-01-14 01:26:37.492 [INFO][4917] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.10d472fbee22d017515116909491fc8a1b7d4b369887c93dc436feedc5d47bc7" host="ip-172-31-18-46" Jan 14 01:26:37.649765 containerd[1962]: 2026-01-14 01:26:37.572 [INFO][4917] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-18-46" Jan 14 01:26:37.649765 containerd[1962]: 2026-01-14 01:26:37.578 [INFO][4917] ipam/ipam.go 511: Trying affinity for 192.168.5.192/26 host="ip-172-31-18-46" Jan 14 01:26:37.649765 containerd[1962]: 2026-01-14 01:26:37.581 [INFO][4917] ipam/ipam.go 158: Attempting to load block cidr=192.168.5.192/26 host="ip-172-31-18-46" Jan 14 01:26:37.649765 containerd[1962]: 2026-01-14 01:26:37.584 [INFO][4917] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.5.192/26 host="ip-172-31-18-46" Jan 14 01:26:37.650396 containerd[1962]: 2026-01-14 01:26:37.584 [INFO][4917] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.5.192/26 handle="k8s-pod-network.10d472fbee22d017515116909491fc8a1b7d4b369887c93dc436feedc5d47bc7" host="ip-172-31-18-46" Jan 14 01:26:37.650396 containerd[1962]: 2026-01-14 01:26:37.586 [INFO][4917] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.10d472fbee22d017515116909491fc8a1b7d4b369887c93dc436feedc5d47bc7 Jan 14 01:26:37.650396 containerd[1962]: 2026-01-14 01:26:37.590 [INFO][4917] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.5.192/26 handle="k8s-pod-network.10d472fbee22d017515116909491fc8a1b7d4b369887c93dc436feedc5d47bc7" host="ip-172-31-18-46" Jan 14 01:26:37.650396 containerd[1962]: 2026-01-14 01:26:37.598 [INFO][4917] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.5.193/26] block=192.168.5.192/26 handle="k8s-pod-network.10d472fbee22d017515116909491fc8a1b7d4b369887c93dc436feedc5d47bc7" host="ip-172-31-18-46" Jan 14 01:26:37.650396 containerd[1962]: 2026-01-14 01:26:37.598 [INFO][4917] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.5.193/26] handle="k8s-pod-network.10d472fbee22d017515116909491fc8a1b7d4b369887c93dc436feedc5d47bc7" host="ip-172-31-18-46" Jan 14 01:26:37.650396 containerd[1962]: 2026-01-14 01:26:37.598 [INFO][4917] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 14 01:26:37.650396 containerd[1962]: 2026-01-14 01:26:37.599 [INFO][4917] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.5.193/26] IPv6=[] ContainerID="10d472fbee22d017515116909491fc8a1b7d4b369887c93dc436feedc5d47bc7" HandleID="k8s-pod-network.10d472fbee22d017515116909491fc8a1b7d4b369887c93dc436feedc5d47bc7" Workload="ip--172--31--18--46-k8s-calico--apiserver--574d5c8798--6q7jw-eth0" Jan 14 01:26:37.650708 containerd[1962]: 2026-01-14 01:26:37.607 [INFO][4898] cni-plugin/k8s.go 418: Populated endpoint ContainerID="10d472fbee22d017515116909491fc8a1b7d4b369887c93dc436feedc5d47bc7" Namespace="calico-apiserver" Pod="calico-apiserver-574d5c8798-6q7jw" WorkloadEndpoint="ip--172--31--18--46-k8s-calico--apiserver--574d5c8798--6q7jw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--46-k8s-calico--apiserver--574d5c8798--6q7jw-eth0", GenerateName:"calico-apiserver-574d5c8798-", Namespace:"calico-apiserver", SelfLink:"", UID:"4263d4be-fc9d-471e-8df9-42f06716a4f0", ResourceVersion:"938", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 25, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"574d5c8798", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-46", ContainerID:"", Pod:"calico-apiserver-574d5c8798-6q7jw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.5.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1105652f90a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:26:37.650825 containerd[1962]: 2026-01-14 01:26:37.607 [INFO][4898] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.5.193/32] ContainerID="10d472fbee22d017515116909491fc8a1b7d4b369887c93dc436feedc5d47bc7" Namespace="calico-apiserver" Pod="calico-apiserver-574d5c8798-6q7jw" WorkloadEndpoint="ip--172--31--18--46-k8s-calico--apiserver--574d5c8798--6q7jw-eth0" Jan 14 01:26:37.650825 containerd[1962]: 2026-01-14 01:26:37.608 [INFO][4898] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1105652f90a ContainerID="10d472fbee22d017515116909491fc8a1b7d4b369887c93dc436feedc5d47bc7" Namespace="calico-apiserver" Pod="calico-apiserver-574d5c8798-6q7jw" WorkloadEndpoint="ip--172--31--18--46-k8s-calico--apiserver--574d5c8798--6q7jw-eth0" Jan 14 01:26:37.650825 containerd[1962]: 2026-01-14 01:26:37.624 [INFO][4898] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="10d472fbee22d017515116909491fc8a1b7d4b369887c93dc436feedc5d47bc7" Namespace="calico-apiserver" Pod="calico-apiserver-574d5c8798-6q7jw" WorkloadEndpoint="ip--172--31--18--46-k8s-calico--apiserver--574d5c8798--6q7jw-eth0" Jan 14 01:26:37.650970 containerd[1962]: 2026-01-14 01:26:37.626 [INFO][4898] cni-plugin/k8s.go 446: Added Mac, interface name, and 
active container ID to endpoint ContainerID="10d472fbee22d017515116909491fc8a1b7d4b369887c93dc436feedc5d47bc7" Namespace="calico-apiserver" Pod="calico-apiserver-574d5c8798-6q7jw" WorkloadEndpoint="ip--172--31--18--46-k8s-calico--apiserver--574d5c8798--6q7jw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--46-k8s-calico--apiserver--574d5c8798--6q7jw-eth0", GenerateName:"calico-apiserver-574d5c8798-", Namespace:"calico-apiserver", SelfLink:"", UID:"4263d4be-fc9d-471e-8df9-42f06716a4f0", ResourceVersion:"938", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 25, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"574d5c8798", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-46", ContainerID:"10d472fbee22d017515116909491fc8a1b7d4b369887c93dc436feedc5d47bc7", Pod:"calico-apiserver-574d5c8798-6q7jw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.5.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1105652f90a", MAC:"c2:5b:d5:5c:16:2e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:26:37.651086 containerd[1962]: 2026-01-14 01:26:37.643 [INFO][4898] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="10d472fbee22d017515116909491fc8a1b7d4b369887c93dc436feedc5d47bc7" Namespace="calico-apiserver" Pod="calico-apiserver-574d5c8798-6q7jw" WorkloadEndpoint="ip--172--31--18--46-k8s-calico--apiserver--574d5c8798--6q7jw-eth0" Jan 14 01:26:37.775312 systemd-networkd[1548]: cali38bc2ee4ada: Link UP Jan 14 01:26:37.782505 systemd-networkd[1548]: cali38bc2ee4ada: Gained carrier Jan 14 01:26:37.806881 containerd[1962]: time="2026-01-14T01:26:37.806038584Z" level=info msg="connecting to shim 10d472fbee22d017515116909491fc8a1b7d4b369887c93dc436feedc5d47bc7" address="unix:///run/containerd/s/2f910fd716b6e7c1dae5dda7aa8827d57cd9e96bc88c73457686fffe84f6ddfd" namespace=k8s.io protocol=ttrpc version=3 Jan 14 01:26:37.847181 containerd[1962]: 2026-01-14 01:26:35.220 [INFO][4887] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 14 01:26:37.847181 containerd[1962]: 2026-01-14 01:26:35.265 [INFO][4887] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--18--46-k8s-calico--apiserver--65b9745b8--fxksz-eth0 calico-apiserver-65b9745b8- calico-apiserver 56394477-d28d-42eb-bee5-a9a20263c11f 941 0 2026-01-14 01:25:56 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:65b9745b8 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-18-46 calico-apiserver-65b9745b8-fxksz eth0 calico-apiserver [] [] 
[kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali38bc2ee4ada [] [] }} ContainerID="5d9459b2b6917f33906075a18813577a6ec59b334523153692ce76aeb1d466d7" Namespace="calico-apiserver" Pod="calico-apiserver-65b9745b8-fxksz" WorkloadEndpoint="ip--172--31--18--46-k8s-calico--apiserver--65b9745b8--fxksz-" Jan 14 01:26:37.847181 containerd[1962]: 2026-01-14 01:26:35.265 [INFO][4887] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5d9459b2b6917f33906075a18813577a6ec59b334523153692ce76aeb1d466d7" Namespace="calico-apiserver" Pod="calico-apiserver-65b9745b8-fxksz" WorkloadEndpoint="ip--172--31--18--46-k8s-calico--apiserver--65b9745b8--fxksz-eth0" Jan 14 01:26:37.847181 containerd[1962]: 2026-01-14 01:26:37.472 [INFO][4911] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5d9459b2b6917f33906075a18813577a6ec59b334523153692ce76aeb1d466d7" HandleID="k8s-pod-network.5d9459b2b6917f33906075a18813577a6ec59b334523153692ce76aeb1d466d7" Workload="ip--172--31--18--46-k8s-calico--apiserver--65b9745b8--fxksz-eth0" Jan 14 01:26:37.847834 containerd[1962]: 2026-01-14 01:26:37.474 [INFO][4911] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="5d9459b2b6917f33906075a18813577a6ec59b334523153692ce76aeb1d466d7" HandleID="k8s-pod-network.5d9459b2b6917f33906075a18813577a6ec59b334523153692ce76aeb1d466d7" Workload="ip--172--31--18--46-k8s-calico--apiserver--65b9745b8--fxksz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003162c0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-18-46", "pod":"calico-apiserver-65b9745b8-fxksz", "timestamp":"2026-01-14 01:26:37.472246695 +0000 UTC"}, Hostname:"ip-172-31-18-46", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 01:26:37.847834 containerd[1962]: 2026-01-14 01:26:37.475 [INFO][4911] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 01:26:37.847834 containerd[1962]: 2026-01-14 01:26:37.599 [INFO][4911] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 14 01:26:37.847834 containerd[1962]: 2026-01-14 01:26:37.599 [INFO][4911] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-18-46' Jan 14 01:26:37.847834 containerd[1962]: 2026-01-14 01:26:37.618 [INFO][4911] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5d9459b2b6917f33906075a18813577a6ec59b334523153692ce76aeb1d466d7" host="ip-172-31-18-46" Jan 14 01:26:37.847834 containerd[1962]: 2026-01-14 01:26:37.673 [INFO][4911] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-18-46" Jan 14 01:26:37.847834 containerd[1962]: 2026-01-14 01:26:37.682 [INFO][4911] ipam/ipam.go 511: Trying affinity for 192.168.5.192/26 host="ip-172-31-18-46" Jan 14 01:26:37.847834 containerd[1962]: 2026-01-14 01:26:37.684 [INFO][4911] ipam/ipam.go 158: Attempting to load block cidr=192.168.5.192/26 host="ip-172-31-18-46" Jan 14 01:26:37.847834 containerd[1962]: 2026-01-14 01:26:37.688 [INFO][4911] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.5.192/26 host="ip-172-31-18-46" Jan 14 01:26:37.848391 containerd[1962]: 2026-01-14 01:26:37.688 [INFO][4911] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.5.192/26 handle="k8s-pod-network.5d9459b2b6917f33906075a18813577a6ec59b334523153692ce76aeb1d466d7" host="ip-172-31-18-46" Jan 14 01:26:37.848391 containerd[1962]: 2026-01-14 01:26:37.695 [INFO][4911] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.5d9459b2b6917f33906075a18813577a6ec59b334523153692ce76aeb1d466d7 Jan 14 01:26:37.848391 containerd[1962]: 2026-01-14 01:26:37.715 [INFO][4911] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.5.192/26 handle="k8s-pod-network.5d9459b2b6917f33906075a18813577a6ec59b334523153692ce76aeb1d466d7" host="ip-172-31-18-46" Jan 14 01:26:37.848391 containerd[1962]: 2026-01-14 01:26:37.726 [INFO][4911] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.5.194/26] block=192.168.5.192/26 handle="k8s-pod-network.5d9459b2b6917f33906075a18813577a6ec59b334523153692ce76aeb1d466d7" host="ip-172-31-18-46" Jan 14 01:26:37.848391 containerd[1962]: 2026-01-14 01:26:37.727 [INFO][4911] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.5.194/26] handle="k8s-pod-network.5d9459b2b6917f33906075a18813577a6ec59b334523153692ce76aeb1d466d7" host="ip-172-31-18-46" Jan 14 01:26:37.848391 containerd[1962]: 2026-01-14 01:26:37.727 [INFO][4911] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
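Both IPAM traces above hand out an address (192.168.5.193 and then 192.168.5.194) from the same host-affine block, 192.168.5.192/26. A quick standard-library check, not part of the log, that the claimed addresses really fall inside that block:

    # check_block.py - sanity-check the Calico IPAM assignments against the affine block
    import ipaddress

    block = ipaddress.ip_network("192.168.5.192/26")   # block reported for ip-172-31-18-46
    for ip in ("192.168.5.193", "192.168.5.194"):
        assert ipaddress.ip_address(ip) in block, f"{ip} not in {block}"
    print(f"{block} spans {block.num_addresses} addresses ({block[0]}-{block[-1]})")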
Jan 14 01:26:37.848391 containerd[1962]: 2026-01-14 01:26:37.727 [INFO][4911] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.5.194/26] IPv6=[] ContainerID="5d9459b2b6917f33906075a18813577a6ec59b334523153692ce76aeb1d466d7" HandleID="k8s-pod-network.5d9459b2b6917f33906075a18813577a6ec59b334523153692ce76aeb1d466d7" Workload="ip--172--31--18--46-k8s-calico--apiserver--65b9745b8--fxksz-eth0" Jan 14 01:26:37.849195 containerd[1962]: 2026-01-14 01:26:37.742 [INFO][4887] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5d9459b2b6917f33906075a18813577a6ec59b334523153692ce76aeb1d466d7" Namespace="calico-apiserver" Pod="calico-apiserver-65b9745b8-fxksz" WorkloadEndpoint="ip--172--31--18--46-k8s-calico--apiserver--65b9745b8--fxksz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--46-k8s-calico--apiserver--65b9745b8--fxksz-eth0", GenerateName:"calico-apiserver-65b9745b8-", Namespace:"calico-apiserver", SelfLink:"", UID:"56394477-d28d-42eb-bee5-a9a20263c11f", ResourceVersion:"941", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 25, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"65b9745b8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-46", ContainerID:"", Pod:"calico-apiserver-65b9745b8-fxksz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.5.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali38bc2ee4ada", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:26:37.849299 containerd[1962]: 2026-01-14 01:26:37.742 [INFO][4887] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.5.194/32] ContainerID="5d9459b2b6917f33906075a18813577a6ec59b334523153692ce76aeb1d466d7" Namespace="calico-apiserver" Pod="calico-apiserver-65b9745b8-fxksz" WorkloadEndpoint="ip--172--31--18--46-k8s-calico--apiserver--65b9745b8--fxksz-eth0" Jan 14 01:26:37.849299 containerd[1962]: 2026-01-14 01:26:37.745 [INFO][4887] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali38bc2ee4ada ContainerID="5d9459b2b6917f33906075a18813577a6ec59b334523153692ce76aeb1d466d7" Namespace="calico-apiserver" Pod="calico-apiserver-65b9745b8-fxksz" WorkloadEndpoint="ip--172--31--18--46-k8s-calico--apiserver--65b9745b8--fxksz-eth0" Jan 14 01:26:37.849299 containerd[1962]: 2026-01-14 01:26:37.778 [INFO][4887] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5d9459b2b6917f33906075a18813577a6ec59b334523153692ce76aeb1d466d7" Namespace="calico-apiserver" Pod="calico-apiserver-65b9745b8-fxksz" WorkloadEndpoint="ip--172--31--18--46-k8s-calico--apiserver--65b9745b8--fxksz-eth0" Jan 14 01:26:37.849506 containerd[1962]: 2026-01-14 01:26:37.781 [INFO][4887] cni-plugin/k8s.go 446: Added Mac, interface name, and active 
container ID to endpoint ContainerID="5d9459b2b6917f33906075a18813577a6ec59b334523153692ce76aeb1d466d7" Namespace="calico-apiserver" Pod="calico-apiserver-65b9745b8-fxksz" WorkloadEndpoint="ip--172--31--18--46-k8s-calico--apiserver--65b9745b8--fxksz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--46-k8s-calico--apiserver--65b9745b8--fxksz-eth0", GenerateName:"calico-apiserver-65b9745b8-", Namespace:"calico-apiserver", SelfLink:"", UID:"56394477-d28d-42eb-bee5-a9a20263c11f", ResourceVersion:"941", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 25, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"65b9745b8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-46", ContainerID:"5d9459b2b6917f33906075a18813577a6ec59b334523153692ce76aeb1d466d7", Pod:"calico-apiserver-65b9745b8-fxksz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.5.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali38bc2ee4ada", MAC:"96:ad:2a:c4:15:e7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:26:37.849630 containerd[1962]: 2026-01-14 01:26:37.826 [INFO][4887] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5d9459b2b6917f33906075a18813577a6ec59b334523153692ce76aeb1d466d7" Namespace="calico-apiserver" Pod="calico-apiserver-65b9745b8-fxksz" WorkloadEndpoint="ip--172--31--18--46-k8s-calico--apiserver--65b9745b8--fxksz-eth0" Jan 14 01:26:37.809000 audit[5137]: NETFILTER_CFG table=filter:125 family=2 entries=46 op=nft_register_chain pid=5137 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 01:26:37.809000 audit[5137]: SYSCALL arch=c000003e syscall=46 success=yes exit=27020 a0=3 a1=7ffde18b8da0 a2=0 a3=7ffde18b8d8c items=0 ppid=4950 pid=5137 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:37.809000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 01:26:37.919507 systemd-networkd[1548]: calif5b432813b4: Link UP Jan 14 01:26:37.922802 systemd-networkd[1548]: calif5b432813b4: Gained carrier Jan 14 01:26:37.945000 audit[5180]: NETFILTER_CFG table=filter:126 family=2 entries=37 op=nft_register_chain pid=5180 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 01:26:37.945000 audit[5180]: SYSCALL arch=c000003e syscall=46 success=yes exit=21888 a0=3 a1=7ffdedf875f0 a2=0 a3=7ffdedf875dc items=0 ppid=4950 pid=5180 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:37.945000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 01:26:37.954050 containerd[1962]: time="2026-01-14T01:26:37.953867923Z" level=info msg="connecting to shim 5d9459b2b6917f33906075a18813577a6ec59b334523153692ce76aeb1d466d7" address="unix:///run/containerd/s/c389eb022b474c49391a6274e9955270ab1bd3917139798dc5a4f7f9c6c54125" namespace=k8s.io protocol=ttrpc version=3 Jan 14 01:26:37.969565 containerd[1962]: 2026-01-14 01:26:34.221 [INFO][4844] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 14 01:26:37.969565 containerd[1962]: 2026-01-14 01:26:34.265 [INFO][4844] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--18--46-k8s-whisker--84845f84c5--f7vsl-eth0 whisker-84845f84c5- calico-system 4fe5960b-c32f-4a52-8f07-a5dac29b6214 998 0 2026-01-14 01:26:09 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:84845f84c5 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ip-172-31-18-46 whisker-84845f84c5-f7vsl eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calif5b432813b4 [] [] }} ContainerID="141312442e6f51403dcde0057e1ec6273173e13512ef91f0d3d037aa2b958d8e" Namespace="calico-system" Pod="whisker-84845f84c5-f7vsl" WorkloadEndpoint="ip--172--31--18--46-k8s-whisker--84845f84c5--f7vsl-" Jan 14 01:26:37.969565 containerd[1962]: 2026-01-14 01:26:34.266 [INFO][4844] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="141312442e6f51403dcde0057e1ec6273173e13512ef91f0d3d037aa2b958d8e" Namespace="calico-system" Pod="whisker-84845f84c5-f7vsl" WorkloadEndpoint="ip--172--31--18--46-k8s-whisker--84845f84c5--f7vsl-eth0" Jan 14 01:26:37.969565 containerd[1962]: 2026-01-14 01:26:37.472 [INFO][4864] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="141312442e6f51403dcde0057e1ec6273173e13512ef91f0d3d037aa2b958d8e" HandleID="k8s-pod-network.141312442e6f51403dcde0057e1ec6273173e13512ef91f0d3d037aa2b958d8e" Workload="ip--172--31--18--46-k8s-whisker--84845f84c5--f7vsl-eth0" Jan 14 01:26:37.969975 containerd[1962]: 2026-01-14 01:26:37.474 [INFO][4864] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="141312442e6f51403dcde0057e1ec6273173e13512ef91f0d3d037aa2b958d8e" HandleID="k8s-pod-network.141312442e6f51403dcde0057e1ec6273173e13512ef91f0d3d037aa2b958d8e" Workload="ip--172--31--18--46-k8s-whisker--84845f84c5--f7vsl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000102950), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-18-46", "pod":"whisker-84845f84c5-f7vsl", "timestamp":"2026-01-14 01:26:37.472246212 +0000 UTC"}, Hostname:"ip-172-31-18-46", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 01:26:37.969975 containerd[1962]: 2026-01-14 01:26:37.475 [INFO][4864] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 01:26:37.969975 containerd[1962]: 2026-01-14 01:26:37.727 [INFO][4864] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
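The NETFILTER_CFG audit records scattered through this part of the log are felix driving iptables-nft-restore; each one reports how many chain entries were registered in which table (raw, mangle, nat, filter). A small sketch that tallies them per table from saved journal text fed on stdin (it assumes only the table=/entries=/op= fields visible above):

    # netfilter_summary.py - tally nft_register_chain entries per table from saved journal text
    import collections
    import re
    import sys

    pattern = re.compile(r"NETFILTER_CFG table=(\w+):\d+ family=\d+ entries=(\d+) op=nft_register_chain")

    totals = collections.Counter()
    for line in sys.stdin:
        for table, entries in pattern.findall(line):
            totals[table] += int(entries)

    for table, count in totals.most_common():
        print(f"{table}: {count} chain entries registered")

For example: journalctl --no-pager | python3 netfilter_summary.py, assuming the audit records land in the journal as they do here.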
Jan 14 01:26:37.969975 containerd[1962]: 2026-01-14 01:26:37.727 [INFO][4864] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-18-46' Jan 14 01:26:37.969975 containerd[1962]: 2026-01-14 01:26:37.784 [INFO][4864] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.141312442e6f51403dcde0057e1ec6273173e13512ef91f0d3d037aa2b958d8e" host="ip-172-31-18-46" Jan 14 01:26:37.969975 containerd[1962]: 2026-01-14 01:26:37.799 [INFO][4864] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-18-46" Jan 14 01:26:37.969975 containerd[1962]: 2026-01-14 01:26:37.821 [INFO][4864] ipam/ipam.go 511: Trying affinity for 192.168.5.192/26 host="ip-172-31-18-46" Jan 14 01:26:37.969975 containerd[1962]: 2026-01-14 01:26:37.836 [INFO][4864] ipam/ipam.go 158: Attempting to load block cidr=192.168.5.192/26 host="ip-172-31-18-46" Jan 14 01:26:37.969975 containerd[1962]: 2026-01-14 01:26:37.860 [INFO][4864] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.5.192/26 host="ip-172-31-18-46" Jan 14 01:26:37.969975 containerd[1962]: 2026-01-14 01:26:37.861 [INFO][4864] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.5.192/26 handle="k8s-pod-network.141312442e6f51403dcde0057e1ec6273173e13512ef91f0d3d037aa2b958d8e" host="ip-172-31-18-46" Jan 14 01:26:37.970294 containerd[1962]: 2026-01-14 01:26:37.867 [INFO][4864] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.141312442e6f51403dcde0057e1ec6273173e13512ef91f0d3d037aa2b958d8e Jan 14 01:26:37.970294 containerd[1962]: 2026-01-14 01:26:37.878 [INFO][4864] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.5.192/26 handle="k8s-pod-network.141312442e6f51403dcde0057e1ec6273173e13512ef91f0d3d037aa2b958d8e" host="ip-172-31-18-46" Jan 14 01:26:37.970294 containerd[1962]: 2026-01-14 01:26:37.906 [INFO][4864] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.5.195/26] block=192.168.5.192/26 handle="k8s-pod-network.141312442e6f51403dcde0057e1ec6273173e13512ef91f0d3d037aa2b958d8e" host="ip-172-31-18-46" Jan 14 01:26:37.970294 containerd[1962]: 2026-01-14 01:26:37.906 [INFO][4864] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.5.195/26] handle="k8s-pod-network.141312442e6f51403dcde0057e1ec6273173e13512ef91f0d3d037aa2b958d8e" host="ip-172-31-18-46" Jan 14 01:26:37.970294 containerd[1962]: 2026-01-14 01:26:37.906 [INFO][4864] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 14 01:26:37.970294 containerd[1962]: 2026-01-14 01:26:37.907 [INFO][4864] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.5.195/26] IPv6=[] ContainerID="141312442e6f51403dcde0057e1ec6273173e13512ef91f0d3d037aa2b958d8e" HandleID="k8s-pod-network.141312442e6f51403dcde0057e1ec6273173e13512ef91f0d3d037aa2b958d8e" Workload="ip--172--31--18--46-k8s-whisker--84845f84c5--f7vsl-eth0" Jan 14 01:26:37.970466 containerd[1962]: 2026-01-14 01:26:37.912 [INFO][4844] cni-plugin/k8s.go 418: Populated endpoint ContainerID="141312442e6f51403dcde0057e1ec6273173e13512ef91f0d3d037aa2b958d8e" Namespace="calico-system" Pod="whisker-84845f84c5-f7vsl" WorkloadEndpoint="ip--172--31--18--46-k8s-whisker--84845f84c5--f7vsl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--46-k8s-whisker--84845f84c5--f7vsl-eth0", GenerateName:"whisker-84845f84c5-", Namespace:"calico-system", SelfLink:"", UID:"4fe5960b-c32f-4a52-8f07-a5dac29b6214", ResourceVersion:"998", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 26, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"84845f84c5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-46", ContainerID:"", Pod:"whisker-84845f84c5-f7vsl", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.5.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calif5b432813b4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:26:37.970466 containerd[1962]: 2026-01-14 01:26:37.912 [INFO][4844] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.5.195/32] ContainerID="141312442e6f51403dcde0057e1ec6273173e13512ef91f0d3d037aa2b958d8e" Namespace="calico-system" Pod="whisker-84845f84c5-f7vsl" WorkloadEndpoint="ip--172--31--18--46-k8s-whisker--84845f84c5--f7vsl-eth0" Jan 14 01:26:37.972644 containerd[1962]: 2026-01-14 01:26:37.912 [INFO][4844] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif5b432813b4 ContainerID="141312442e6f51403dcde0057e1ec6273173e13512ef91f0d3d037aa2b958d8e" Namespace="calico-system" Pod="whisker-84845f84c5-f7vsl" WorkloadEndpoint="ip--172--31--18--46-k8s-whisker--84845f84c5--f7vsl-eth0" Jan 14 01:26:37.972644 containerd[1962]: 2026-01-14 01:26:37.919 [INFO][4844] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="141312442e6f51403dcde0057e1ec6273173e13512ef91f0d3d037aa2b958d8e" Namespace="calico-system" Pod="whisker-84845f84c5-f7vsl" WorkloadEndpoint="ip--172--31--18--46-k8s-whisker--84845f84c5--f7vsl-eth0" Jan 14 01:26:37.972767 containerd[1962]: 2026-01-14 01:26:37.919 [INFO][4844] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="141312442e6f51403dcde0057e1ec6273173e13512ef91f0d3d037aa2b958d8e" Namespace="calico-system" Pod="whisker-84845f84c5-f7vsl" 
WorkloadEndpoint="ip--172--31--18--46-k8s-whisker--84845f84c5--f7vsl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--46-k8s-whisker--84845f84c5--f7vsl-eth0", GenerateName:"whisker-84845f84c5-", Namespace:"calico-system", SelfLink:"", UID:"4fe5960b-c32f-4a52-8f07-a5dac29b6214", ResourceVersion:"998", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 26, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"84845f84c5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-46", ContainerID:"141312442e6f51403dcde0057e1ec6273173e13512ef91f0d3d037aa2b958d8e", Pod:"whisker-84845f84c5-f7vsl", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.5.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calif5b432813b4", MAC:"e2:03:29:f8:6d:cb", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:26:37.972880 containerd[1962]: 2026-01-14 01:26:37.959 [INFO][4844] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="141312442e6f51403dcde0057e1ec6273173e13512ef91f0d3d037aa2b958d8e" Namespace="calico-system" Pod="whisker-84845f84c5-f7vsl" WorkloadEndpoint="ip--172--31--18--46-k8s-whisker--84845f84c5--f7vsl-eth0" Jan 14 01:26:37.982834 systemd[1]: Started cri-containerd-10d472fbee22d017515116909491fc8a1b7d4b369887c93dc436feedc5d47bc7.scope - libcontainer container 10d472fbee22d017515116909491fc8a1b7d4b369887c93dc436feedc5d47bc7. Jan 14 01:26:38.074859 systemd[1]: Started cri-containerd-5d9459b2b6917f33906075a18813577a6ec59b334523153692ce76aeb1d466d7.scope - libcontainer container 5d9459b2b6917f33906075a18813577a6ec59b334523153692ce76aeb1d466d7. 
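Each "Added Mac, interface name, and active container ID to endpoint" entry dumps the full v3.WorkloadEndpoint struct, which is where the pod name, host-side veth, MAC and assigned address end up together. A rough extraction sketch over the same journal text (one record per line assumed; the regexes match only the struct fields visible above):

    # endpoints.py - pull pod / interface / MAC / IP out of the dumped WorkloadEndpoint structs
    import re
    import sys

    pod_re   = re.compile(r'Pod:"([^"]+)"')
    iface_re = re.compile(r'InterfaceName:"([^"]+)"')
    mac_re   = re.compile(r'MAC:"([^"]*)"')
    ips_re   = re.compile(r'IPNetworks:\[\]string\{"([^"]+)"\}')

    for line in sys.stdin:
        if "Added Mac, interface name, and active" not in line:
            continue
        pod, iface, mac, ips = (r.search(line) for r in (pod_re, iface_re, mac_re, ips_re))
        if pod and iface:
            print(pod.group(1), iface.group(1), mac.group(1) if mac else "-", ips.group(1) if ips else "-")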
Jan 14 01:26:38.076062 containerd[1962]: time="2026-01-14T01:26:38.075988420Z" level=info msg="connecting to shim 141312442e6f51403dcde0057e1ec6273173e13512ef91f0d3d037aa2b958d8e" address="unix:///run/containerd/s/90915ab77246df040270123a24da4a31c77b25ad4d37febd8ee2e3d2b11f7af0" namespace=k8s.io protocol=ttrpc version=3 Jan 14 01:26:38.109013 systemd-networkd[1548]: vxlan.calico: Gained IPv6LL Jan 14 01:26:38.112000 audit: BPF prog-id=213 op=LOAD Jan 14 01:26:38.116000 audit: BPF prog-id=214 op=LOAD Jan 14 01:26:38.116000 audit[5163]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=5145 pid=5163 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:38.116000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3130643437326662656532326430313735313531313639303934393166 Jan 14 01:26:38.117000 audit: BPF prog-id=214 op=UNLOAD Jan 14 01:26:38.117000 audit[5163]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5145 pid=5163 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:38.117000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3130643437326662656532326430313735313531313639303934393166 Jan 14 01:26:38.118000 audit: BPF prog-id=215 op=LOAD Jan 14 01:26:38.118000 audit[5163]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=5145 pid=5163 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:38.118000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3130643437326662656532326430313735313531313639303934393166 Jan 14 01:26:38.119000 audit: BPF prog-id=216 op=LOAD Jan 14 01:26:38.119000 audit[5163]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=5145 pid=5163 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:38.119000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3130643437326662656532326430313735313531313639303934393166 Jan 14 01:26:38.120000 audit: BPF prog-id=216 op=UNLOAD Jan 14 01:26:38.120000 audit[5163]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5145 pid=5163 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 
01:26:38.120000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3130643437326662656532326430313735313531313639303934393166 Jan 14 01:26:38.120000 audit: BPF prog-id=215 op=UNLOAD Jan 14 01:26:38.120000 audit[5163]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5145 pid=5163 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:38.120000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3130643437326662656532326430313735313531313639303934393166 Jan 14 01:26:38.120000 audit: BPF prog-id=217 op=LOAD Jan 14 01:26:38.120000 audit[5163]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=5145 pid=5163 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:38.120000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3130643437326662656532326430313735313531313639303934393166 Jan 14 01:26:38.125000 audit[5251]: NETFILTER_CFG table=filter:127 family=2 entries=67 op=nft_register_chain pid=5251 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 01:26:38.125000 audit[5251]: SYSCALL arch=c000003e syscall=46 success=yes exit=38236 a0=3 a1=7ffd1d389f80 a2=0 a3=7ffd1d389f6c items=0 ppid=4950 pid=5251 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:38.125000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 01:26:38.135442 systemd-networkd[1548]: calice29bf40714: Link UP Jan 14 01:26:38.144708 systemd-networkd[1548]: calice29bf40714: Gained carrier Jan 14 01:26:38.158000 audit: BPF prog-id=218 op=LOAD Jan 14 01:26:38.160000 audit: BPF prog-id=219 op=LOAD Jan 14 01:26:38.160000 audit[5207]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0000a0238 a2=98 a3=0 items=0 ppid=5182 pid=5207 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:38.160000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3564393435396232623639313766333339303630373561313838313335 Jan 14 01:26:38.160000 audit: BPF prog-id=219 op=UNLOAD Jan 14 01:26:38.160000 audit[5207]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5182 pid=5207 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:38.160000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3564393435396232623639313766333339303630373561313838313335 Jan 14 01:26:38.160000 audit: BPF prog-id=220 op=LOAD Jan 14 01:26:38.160000 audit[5207]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0000a0488 a2=98 a3=0 items=0 ppid=5182 pid=5207 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:38.160000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3564393435396232623639313766333339303630373561313838313335 Jan 14 01:26:38.162000 audit: BPF prog-id=221 op=LOAD Jan 14 01:26:38.162000 audit[5207]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0000a0218 a2=98 a3=0 items=0 ppid=5182 pid=5207 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:38.162000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3564393435396232623639313766333339303630373561313838313335 Jan 14 01:26:38.162000 audit: BPF prog-id=221 op=UNLOAD Jan 14 01:26:38.162000 audit[5207]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5182 pid=5207 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:38.162000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3564393435396232623639313766333339303630373561313838313335 Jan 14 01:26:38.162000 audit: BPF prog-id=220 op=UNLOAD Jan 14 01:26:38.162000 audit[5207]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5182 pid=5207 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:38.162000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3564393435396232623639313766333339303630373561313838313335 Jan 14 01:26:38.162000 audit: BPF prog-id=222 op=LOAD Jan 14 01:26:38.162000 audit[5207]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0000a06e8 a2=98 a3=0 items=0 ppid=5182 pid=5207 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:38.162000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3564393435396232623639313766333339303630373561313838313335 Jan 14 01:26:38.217060 containerd[1962]: 2026-01-14 01:26:34.198 [INFO][4834] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 14 01:26:38.217060 containerd[1962]: 2026-01-14 01:26:34.268 [INFO][4834] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--18--46-k8s-coredns--674b8bbfcf--k7zqh-eth0 coredns-674b8bbfcf- kube-system 2a03d66f-dd24-454c-84df-6cece7cb808c 940 0 2026-01-14 01:25:12 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-18-46 coredns-674b8bbfcf-k7zqh eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calice29bf40714 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="82e8aa5f553ea348dea90d00e7dd77b5e4b547c4f02aec8681303c20b713538e" Namespace="kube-system" Pod="coredns-674b8bbfcf-k7zqh" WorkloadEndpoint="ip--172--31--18--46-k8s-coredns--674b8bbfcf--k7zqh-" Jan 14 01:26:38.217060 containerd[1962]: 2026-01-14 01:26:34.268 [INFO][4834] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="82e8aa5f553ea348dea90d00e7dd77b5e4b547c4f02aec8681303c20b713538e" Namespace="kube-system" Pod="coredns-674b8bbfcf-k7zqh" WorkloadEndpoint="ip--172--31--18--46-k8s-coredns--674b8bbfcf--k7zqh-eth0" Jan 14 01:26:38.217060 containerd[1962]: 2026-01-14 01:26:37.472 [INFO][4866] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="82e8aa5f553ea348dea90d00e7dd77b5e4b547c4f02aec8681303c20b713538e" HandleID="k8s-pod-network.82e8aa5f553ea348dea90d00e7dd77b5e4b547c4f02aec8681303c20b713538e" Workload="ip--172--31--18--46-k8s-coredns--674b8bbfcf--k7zqh-eth0" Jan 14 01:26:38.217424 containerd[1962]: 2026-01-14 01:26:37.475 [INFO][4866] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="82e8aa5f553ea348dea90d00e7dd77b5e4b547c4f02aec8681303c20b713538e" HandleID="k8s-pod-network.82e8aa5f553ea348dea90d00e7dd77b5e4b547c4f02aec8681303c20b713538e" Workload="ip--172--31--18--46-k8s-coredns--674b8bbfcf--k7zqh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001fa780), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-18-46", "pod":"coredns-674b8bbfcf-k7zqh", "timestamp":"2026-01-14 01:26:37.472609465 +0000 UTC"}, Hostname:"ip-172-31-18-46", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 01:26:38.217424 containerd[1962]: 2026-01-14 01:26:37.475 [INFO][4866] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 01:26:38.217424 containerd[1962]: 2026-01-14 01:26:37.908 [INFO][4866] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 14 01:26:38.217424 containerd[1962]: 2026-01-14 01:26:37.908 [INFO][4866] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-18-46' Jan 14 01:26:38.217424 containerd[1962]: 2026-01-14 01:26:37.966 [INFO][4866] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.82e8aa5f553ea348dea90d00e7dd77b5e4b547c4f02aec8681303c20b713538e" host="ip-172-31-18-46" Jan 14 01:26:38.217424 containerd[1962]: 2026-01-14 01:26:37.983 [INFO][4866] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-18-46" Jan 14 01:26:38.217424 containerd[1962]: 2026-01-14 01:26:38.005 [INFO][4866] ipam/ipam.go 511: Trying affinity for 192.168.5.192/26 host="ip-172-31-18-46" Jan 14 01:26:38.217424 containerd[1962]: 2026-01-14 01:26:38.013 [INFO][4866] ipam/ipam.go 158: Attempting to load block cidr=192.168.5.192/26 host="ip-172-31-18-46" Jan 14 01:26:38.217424 containerd[1962]: 2026-01-14 01:26:38.025 [INFO][4866] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.5.192/26 host="ip-172-31-18-46" Jan 14 01:26:38.217424 containerd[1962]: 2026-01-14 01:26:38.025 [INFO][4866] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.5.192/26 handle="k8s-pod-network.82e8aa5f553ea348dea90d00e7dd77b5e4b547c4f02aec8681303c20b713538e" host="ip-172-31-18-46" Jan 14 01:26:38.218711 containerd[1962]: 2026-01-14 01:26:38.034 [INFO][4866] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.82e8aa5f553ea348dea90d00e7dd77b5e4b547c4f02aec8681303c20b713538e Jan 14 01:26:38.218711 containerd[1962]: 2026-01-14 01:26:38.049 [INFO][4866] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.5.192/26 handle="k8s-pod-network.82e8aa5f553ea348dea90d00e7dd77b5e4b547c4f02aec8681303c20b713538e" host="ip-172-31-18-46" Jan 14 01:26:38.218711 containerd[1962]: 2026-01-14 01:26:38.067 [INFO][4866] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.5.196/26] block=192.168.5.192/26 handle="k8s-pod-network.82e8aa5f553ea348dea90d00e7dd77b5e4b547c4f02aec8681303c20b713538e" host="ip-172-31-18-46" Jan 14 01:26:38.218711 containerd[1962]: 2026-01-14 01:26:38.067 [INFO][4866] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.5.196/26] handle="k8s-pod-network.82e8aa5f553ea348dea90d00e7dd77b5e4b547c4f02aec8681303c20b713538e" host="ip-172-31-18-46" Jan 14 01:26:38.218711 containerd[1962]: 2026-01-14 01:26:38.068 [INFO][4866] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
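The four CNI ADDs above all log "About to acquire host-wide IPAM lock" at roughly 01:26:37.47, but are granted the lock one after another (37.474, 37.599, 37.727, 37.908): the lock serializes concurrent IPAM requests on the node. A sketch, over the same journal text, that measures how long each request id waited:

    # ipam_lock_wait.py - per-request wait time for the host-wide IPAM lock,
    # keyed on the [INFO][<n>] id in the ipam_plugin.go records above
    import re
    import sys
    from datetime import datetime

    stamp = re.compile(r"(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d+) \[INFO\]\[(\d+)\] "
                       r"ipam/ipam_plugin.go \d+: (About to acquire|Acquired) host-wide IPAM lock")

    waiting = {}
    for line in sys.stdin:
        for ts, req, what in stamp.findall(line):
            t = datetime.strptime(ts, "%Y-%m-%d %H:%M:%S.%f")
            if what == "About to acquire":
                waiting[req] = t
            elif req in waiting:
                print(f"request {req} waited {(t - waiting.pop(req)).total_seconds():.3f}s for the IPAM lock")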
Jan 14 01:26:38.218711 containerd[1962]: 2026-01-14 01:26:38.068 [INFO][4866] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.5.196/26] IPv6=[] ContainerID="82e8aa5f553ea348dea90d00e7dd77b5e4b547c4f02aec8681303c20b713538e" HandleID="k8s-pod-network.82e8aa5f553ea348dea90d00e7dd77b5e4b547c4f02aec8681303c20b713538e" Workload="ip--172--31--18--46-k8s-coredns--674b8bbfcf--k7zqh-eth0" Jan 14 01:26:38.218962 containerd[1962]: 2026-01-14 01:26:38.094 [INFO][4834] cni-plugin/k8s.go 418: Populated endpoint ContainerID="82e8aa5f553ea348dea90d00e7dd77b5e4b547c4f02aec8681303c20b713538e" Namespace="kube-system" Pod="coredns-674b8bbfcf-k7zqh" WorkloadEndpoint="ip--172--31--18--46-k8s-coredns--674b8bbfcf--k7zqh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--46-k8s-coredns--674b8bbfcf--k7zqh-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"2a03d66f-dd24-454c-84df-6cece7cb808c", ResourceVersion:"940", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 25, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-46", ContainerID:"", Pod:"coredns-674b8bbfcf-k7zqh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.5.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calice29bf40714", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:26:38.218962 containerd[1962]: 2026-01-14 01:26:38.094 [INFO][4834] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.5.196/32] ContainerID="82e8aa5f553ea348dea90d00e7dd77b5e4b547c4f02aec8681303c20b713538e" Namespace="kube-system" Pod="coredns-674b8bbfcf-k7zqh" WorkloadEndpoint="ip--172--31--18--46-k8s-coredns--674b8bbfcf--k7zqh-eth0" Jan 14 01:26:38.218962 containerd[1962]: 2026-01-14 01:26:38.094 [INFO][4834] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calice29bf40714 ContainerID="82e8aa5f553ea348dea90d00e7dd77b5e4b547c4f02aec8681303c20b713538e" Namespace="kube-system" Pod="coredns-674b8bbfcf-k7zqh" WorkloadEndpoint="ip--172--31--18--46-k8s-coredns--674b8bbfcf--k7zqh-eth0" Jan 14 01:26:38.218962 containerd[1962]: 2026-01-14 01:26:38.147 [INFO][4834] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="82e8aa5f553ea348dea90d00e7dd77b5e4b547c4f02aec8681303c20b713538e" Namespace="kube-system" Pod="coredns-674b8bbfcf-k7zqh" 
WorkloadEndpoint="ip--172--31--18--46-k8s-coredns--674b8bbfcf--k7zqh-eth0" Jan 14 01:26:38.218962 containerd[1962]: 2026-01-14 01:26:38.153 [INFO][4834] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="82e8aa5f553ea348dea90d00e7dd77b5e4b547c4f02aec8681303c20b713538e" Namespace="kube-system" Pod="coredns-674b8bbfcf-k7zqh" WorkloadEndpoint="ip--172--31--18--46-k8s-coredns--674b8bbfcf--k7zqh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--46-k8s-coredns--674b8bbfcf--k7zqh-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"2a03d66f-dd24-454c-84df-6cece7cb808c", ResourceVersion:"940", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 25, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-46", ContainerID:"82e8aa5f553ea348dea90d00e7dd77b5e4b547c4f02aec8681303c20b713538e", Pod:"coredns-674b8bbfcf-k7zqh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.5.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calice29bf40714", MAC:"8e:44:cc:3a:4c:1d", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:26:38.218962 containerd[1962]: 2026-01-14 01:26:38.183 [INFO][4834] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="82e8aa5f553ea348dea90d00e7dd77b5e4b547c4f02aec8681303c20b713538e" Namespace="kube-system" Pod="coredns-674b8bbfcf-k7zqh" WorkloadEndpoint="ip--172--31--18--46-k8s-coredns--674b8bbfcf--k7zqh-eth0" Jan 14 01:26:38.221901 systemd[1]: Started cri-containerd-141312442e6f51403dcde0057e1ec6273173e13512ef91f0d3d037aa2b958d8e.scope - libcontainer container 141312442e6f51403dcde0057e1ec6273173e13512ef91f0d3d037aa2b958d8e. 
Jan 14 01:26:38.270404 containerd[1962]: time="2026-01-14T01:26:38.270348551Z" level=info msg="connecting to shim 82e8aa5f553ea348dea90d00e7dd77b5e4b547c4f02aec8681303c20b713538e" address="unix:///run/containerd/s/65e77c5bcf5ea0bcf12a64e1e006597e8c012de8d499db1b155738ef701cb03d" namespace=k8s.io protocol=ttrpc version=3 Jan 14 01:26:38.322000 audit[5310]: NETFILTER_CFG table=filter:128 family=2 entries=50 op=nft_register_chain pid=5310 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 01:26:38.322000 audit[5310]: SYSCALL arch=c000003e syscall=46 success=yes exit=24928 a0=3 a1=7fff0ac7d6e0 a2=0 a3=7fff0ac7d6cc items=0 ppid=4950 pid=5310 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:38.322000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 01:26:38.328114 systemd[1]: Started cri-containerd-82e8aa5f553ea348dea90d00e7dd77b5e4b547c4f02aec8681303c20b713538e.scope - libcontainer container 82e8aa5f553ea348dea90d00e7dd77b5e4b547c4f02aec8681303c20b713538e. Jan 14 01:26:38.353000 audit: BPF prog-id=223 op=LOAD Jan 14 01:26:38.355000 audit: BPF prog-id=224 op=LOAD Jan 14 01:26:38.355000 audit[5250]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000186238 a2=98 a3=0 items=0 ppid=5232 pid=5250 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:38.355000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3134313331323434326536663531343033646364653030353765316563 Jan 14 01:26:38.355000 audit: BPF prog-id=224 op=UNLOAD Jan 14 01:26:38.355000 audit[5250]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5232 pid=5250 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:38.355000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3134313331323434326536663531343033646364653030353765316563 Jan 14 01:26:38.355000 audit: BPF prog-id=225 op=LOAD Jan 14 01:26:38.355000 audit[5250]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000186488 a2=98 a3=0 items=0 ppid=5232 pid=5250 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:38.355000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3134313331323434326536663531343033646364653030353765316563 Jan 14 01:26:38.355000 audit: BPF prog-id=226 op=LOAD Jan 14 01:26:38.355000 audit[5250]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000186218 a2=98 a3=0 items=0 
ppid=5232 pid=5250 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:38.355000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3134313331323434326536663531343033646364653030353765316563 Jan 14 01:26:38.355000 audit: BPF prog-id=226 op=UNLOAD Jan 14 01:26:38.355000 audit[5250]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5232 pid=5250 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:38.355000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3134313331323434326536663531343033646364653030353765316563 Jan 14 01:26:38.355000 audit: BPF prog-id=225 op=UNLOAD Jan 14 01:26:38.355000 audit[5250]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5232 pid=5250 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:38.355000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3134313331323434326536663531343033646364653030353765316563 Jan 14 01:26:38.355000 audit: BPF prog-id=227 op=LOAD Jan 14 01:26:38.355000 audit[5250]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001866e8 a2=98 a3=0 items=0 ppid=5232 pid=5250 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:38.355000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3134313331323434326536663531343033646364653030353765316563 Jan 14 01:26:38.361000 audit: BPF prog-id=228 op=LOAD Jan 14 01:26:38.362000 audit: BPF prog-id=229 op=LOAD Jan 14 01:26:38.362000 audit[5303]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00010c238 a2=98 a3=0 items=0 ppid=5287 pid=5303 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:38.362000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3832653861613566353533656133343864656139306430306537646437 Jan 14 01:26:38.362000 audit: BPF prog-id=229 op=UNLOAD Jan 14 01:26:38.362000 audit[5303]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=5287 pid=5303 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:38.362000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3832653861613566353533656133343864656139306430306537646437 Jan 14 01:26:38.363000 audit: BPF prog-id=230 op=LOAD Jan 14 01:26:38.363000 audit[5303]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00010c488 a2=98 a3=0 items=0 ppid=5287 pid=5303 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:38.363000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3832653861613566353533656133343864656139306430306537646437 Jan 14 01:26:38.363000 audit: BPF prog-id=231 op=LOAD Jan 14 01:26:38.363000 audit[5303]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00010c218 a2=98 a3=0 items=0 ppid=5287 pid=5303 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:38.363000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3832653861613566353533656133343864656139306430306537646437 Jan 14 01:26:38.363000 audit: BPF prog-id=231 op=UNLOAD Jan 14 01:26:38.363000 audit[5303]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=5287 pid=5303 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:38.363000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3832653861613566353533656133343864656139306430306537646437 Jan 14 01:26:38.363000 audit: BPF prog-id=230 op=UNLOAD Jan 14 01:26:38.363000 audit[5303]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=5287 pid=5303 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:38.363000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3832653861613566353533656133343864656139306430306537646437 Jan 14 01:26:38.364000 audit: BPF prog-id=232 op=LOAD Jan 14 01:26:38.364000 audit[5303]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00010c6e8 a2=98 a3=0 items=0 ppid=5287 pid=5303 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:38.364000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3832653861613566353533656133343864656139306430306537646437 Jan 14 01:26:38.372696 containerd[1962]: time="2026-01-14T01:26:38.368448116Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-574d5c8798-6q7jw,Uid:4263d4be-fc9d-471e-8df9-42f06716a4f0,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"10d472fbee22d017515116909491fc8a1b7d4b369887c93dc436feedc5d47bc7\"" Jan 14 01:26:38.373154 containerd[1962]: time="2026-01-14T01:26:38.373124391Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-65b9745b8-fxksz,Uid:56394477-d28d-42eb-bee5-a9a20263c11f,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"5d9459b2b6917f33906075a18813577a6ec59b334523153692ce76aeb1d466d7\"" Jan 14 01:26:38.401453 containerd[1962]: time="2026-01-14T01:26:38.401409986Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 01:26:38.428491 containerd[1962]: time="2026-01-14T01:26:38.428268102Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-k7zqh,Uid:2a03d66f-dd24-454c-84df-6cece7cb808c,Namespace:kube-system,Attempt:0,} returns sandbox id \"82e8aa5f553ea348dea90d00e7dd77b5e4b547c4f02aec8681303c20b713538e\"" Jan 14 01:26:38.461854 containerd[1962]: time="2026-01-14T01:26:38.460004109Z" level=info msg="CreateContainer within sandbox \"82e8aa5f553ea348dea90d00e7dd77b5e4b547c4f02aec8681303c20b713538e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 14 01:26:38.479841 containerd[1962]: time="2026-01-14T01:26:38.479739405Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-84845f84c5-f7vsl,Uid:4fe5960b-c32f-4a52-8f07-a5dac29b6214,Namespace:calico-system,Attempt:0,} returns sandbox id \"141312442e6f51403dcde0057e1ec6273173e13512ef91f0d3d037aa2b958d8e\"" Jan 14 01:26:38.555514 containerd[1962]: time="2026-01-14T01:26:38.555467191Z" level=info msg="Container 74ae262a2262ece31cda400d7b62e6ba203dae734a82f73e77883e908a1cefec: CDI devices from CRI Config.CDIDevices: []" Jan 14 01:26:38.562706 containerd[1962]: time="2026-01-14T01:26:38.562658148Z" level=info msg="CreateContainer within sandbox \"82e8aa5f553ea348dea90d00e7dd77b5e4b547c4f02aec8681303c20b713538e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"74ae262a2262ece31cda400d7b62e6ba203dae734a82f73e77883e908a1cefec\"" Jan 14 01:26:38.563397 containerd[1962]: time="2026-01-14T01:26:38.563275794Z" level=info msg="StartContainer for \"74ae262a2262ece31cda400d7b62e6ba203dae734a82f73e77883e908a1cefec\"" Jan 14 01:26:38.566023 containerd[1962]: time="2026-01-14T01:26:38.565895923Z" level=info msg="connecting to shim 74ae262a2262ece31cda400d7b62e6ba203dae734a82f73e77883e908a1cefec" address="unix:///run/containerd/s/65e77c5bcf5ea0bcf12a64e1e006597e8c012de8d499db1b155738ef701cb03d" protocol=ttrpc version=3 Jan 14 01:26:38.591814 systemd[1]: Started cri-containerd-74ae262a2262ece31cda400d7b62e6ba203dae734a82f73e77883e908a1cefec.scope - libcontainer container 74ae262a2262ece31cda400d7b62e6ba203dae734a82f73e77883e908a1cefec. 
Jan 14 01:26:38.611000 audit: BPF prog-id=233 op=LOAD Jan 14 01:26:38.612000 audit: BPF prog-id=234 op=LOAD Jan 14 01:26:38.612000 audit[5346]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=5287 pid=5346 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:38.612000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3734616532363261323236326563653331636461343030643762363265 Jan 14 01:26:38.612000 audit: BPF prog-id=234 op=UNLOAD Jan 14 01:26:38.612000 audit[5346]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5287 pid=5346 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:38.612000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3734616532363261323236326563653331636461343030643762363265 Jan 14 01:26:38.612000 audit: BPF prog-id=235 op=LOAD Jan 14 01:26:38.612000 audit[5346]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=5287 pid=5346 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:38.612000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3734616532363261323236326563653331636461343030643762363265 Jan 14 01:26:38.612000 audit: BPF prog-id=236 op=LOAD Jan 14 01:26:38.612000 audit[5346]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=5287 pid=5346 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:38.612000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3734616532363261323236326563653331636461343030643762363265 Jan 14 01:26:38.612000 audit: BPF prog-id=236 op=UNLOAD Jan 14 01:26:38.612000 audit[5346]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5287 pid=5346 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:38.612000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3734616532363261323236326563653331636461343030643762363265 Jan 14 01:26:38.612000 audit: BPF prog-id=235 op=UNLOAD Jan 14 01:26:38.612000 audit[5346]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5287 pid=5346 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:38.612000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3734616532363261323236326563653331636461343030643762363265 Jan 14 01:26:38.612000 audit: BPF prog-id=237 op=LOAD Jan 14 01:26:38.612000 audit[5346]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=5287 pid=5346 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:38.612000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3734616532363261323236326563653331636461343030643762363265 Jan 14 01:26:38.664337 containerd[1962]: time="2026-01-14T01:26:38.664118267Z" level=info msg="StartContainer for \"74ae262a2262ece31cda400d7b62e6ba203dae734a82f73e77883e908a1cefec\" returns successfully" Jan 14 01:26:38.711820 containerd[1962]: time="2026-01-14T01:26:38.711762581Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:26:38.713050 containerd[1962]: time="2026-01-14T01:26:38.713004036Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 01:26:38.713226 containerd[1962]: time="2026-01-14T01:26:38.713098180Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 01:26:38.718045 kubelet[3571]: E0114 01:26:38.713284 3571 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:26:38.724375 kubelet[3571]: E0114 01:26:38.724258 3571 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:26:38.725276 containerd[1962]: time="2026-01-14T01:26:38.725205948Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 01:26:38.746277 kubelet[3571]: E0114 01:26:38.746103 3571 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zcshs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-574d5c8798-6q7jw_calico-apiserver(4263d4be-fc9d-471e-8df9-42f06716a4f0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 01:26:38.747417 kubelet[3571]: E0114 01:26:38.747371 3571 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-574d5c8798-6q7jw" podUID="4263d4be-fc9d-471e-8df9-42f06716a4f0" Jan 14 01:26:38.986313 containerd[1962]: time="2026-01-14T01:26:38.986254390Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:26:38.987400 containerd[1962]: time="2026-01-14T01:26:38.987335266Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 01:26:38.987579 containerd[1962]: time="2026-01-14T01:26:38.987417284Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 01:26:38.987619 kubelet[3571]: E0114 01:26:38.987595 3571 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:26:38.987669 kubelet[3571]: E0114 01:26:38.987650 3571 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:26:38.987913 kubelet[3571]: E0114 01:26:38.987872 3571 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h4r5x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-65b9745b8-fxksz_calico-apiserver(56394477-d28d-42eb-bee5-a9a20263c11f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 01:26:38.988542 containerd[1962]: time="2026-01-14T01:26:38.988513496Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 14 01:26:38.989709 kubelet[3571]: E0114 01:26:38.989633 3571 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" 
pod="calico-apiserver/calico-apiserver-65b9745b8-fxksz" podUID="56394477-d28d-42eb-bee5-a9a20263c11f" Jan 14 01:26:39.005710 systemd-networkd[1548]: calif5b432813b4: Gained IPv6LL Jan 14 01:26:39.069569 systemd-networkd[1548]: cali1105652f90a: Gained IPv6LL Jan 14 01:26:39.255744 containerd[1962]: time="2026-01-14T01:26:39.255702421Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:26:39.256845 containerd[1962]: time="2026-01-14T01:26:39.256780870Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 14 01:26:39.257079 containerd[1962]: time="2026-01-14T01:26:39.256866487Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 14 01:26:39.257154 kubelet[3571]: E0114 01:26:39.257041 3571 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 01:26:39.257154 kubelet[3571]: E0114 01:26:39.257083 3571 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 01:26:39.257254 kubelet[3571]: E0114 01:26:39.257196 3571 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:b81ad817aa184d10b98d3fd4131cf440,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5zsbb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-84845f84c5-f7vsl_calico-system(4fe5960b-c32f-4a52-8f07-a5dac29b6214): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 14 01:26:39.259472 
containerd[1962]: time="2026-01-14T01:26:39.259183599Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 14 01:26:39.538066 containerd[1962]: time="2026-01-14T01:26:39.537874431Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:26:39.539121 containerd[1962]: time="2026-01-14T01:26:39.539065775Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 14 01:26:39.539609 containerd[1962]: time="2026-01-14T01:26:39.539078091Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 14 01:26:39.539676 kubelet[3571]: E0114 01:26:39.539330 3571 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 01:26:39.539676 kubelet[3571]: E0114 01:26:39.539383 3571 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 01:26:39.539676 kubelet[3571]: E0114 01:26:39.539514 3571 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5zsbb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
whisker-84845f84c5-f7vsl_calico-system(4fe5960b-c32f-4a52-8f07-a5dac29b6214): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 14 01:26:39.541031 kubelet[3571]: E0114 01:26:39.540982 3571 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-84845f84c5-f7vsl" podUID="4fe5960b-c32f-4a52-8f07-a5dac29b6214" Jan 14 01:26:39.667957 kubelet[3571]: E0114 01:26:39.667878 3571 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65b9745b8-fxksz" podUID="56394477-d28d-42eb-bee5-a9a20263c11f" Jan 14 01:26:39.668627 kubelet[3571]: E0114 01:26:39.668557 3571 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-574d5c8798-6q7jw" podUID="4263d4be-fc9d-471e-8df9-42f06716a4f0" Jan 14 01:26:39.669830 containerd[1962]: time="2026-01-14T01:26:39.669790031Z" level=info msg="StopPodSandbox for \"141312442e6f51403dcde0057e1ec6273173e13512ef91f0d3d037aa2b958d8e\"" Jan 14 01:26:39.686266 systemd[1]: cri-containerd-141312442e6f51403dcde0057e1ec6273173e13512ef91f0d3d037aa2b958d8e.scope: Deactivated successfully. Jan 14 01:26:39.689000 audit: BPF prog-id=223 op=UNLOAD Jan 14 01:26:39.689000 audit: BPF prog-id=227 op=UNLOAD Jan 14 01:26:39.696408 containerd[1962]: time="2026-01-14T01:26:39.696185513Z" level=info msg="received sandbox exit event container_id:\"141312442e6f51403dcde0057e1ec6273173e13512ef91f0d3d037aa2b958d8e\" id:\"141312442e6f51403dcde0057e1ec6273173e13512ef91f0d3d037aa2b958d8e\" exit_status:137 exited_at:{seconds:1768353999 nanos:694855446}" monitor_name=podsandbox Jan 14 01:26:39.755663 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-141312442e6f51403dcde0057e1ec6273173e13512ef91f0d3d037aa2b958d8e-rootfs.mount: Deactivated successfully. 
Jan 14 01:26:39.772889 systemd-networkd[1548]: cali38bc2ee4ada: Gained IPv6LL Jan 14 01:26:39.791084 containerd[1962]: time="2026-01-14T01:26:39.790965119Z" level=info msg="shim disconnected" id=141312442e6f51403dcde0057e1ec6273173e13512ef91f0d3d037aa2b958d8e namespace=k8s.io Jan 14 01:26:39.798801 containerd[1962]: time="2026-01-14T01:26:39.791581613Z" level=info msg="cleaning up after shim disconnected" id=141312442e6f51403dcde0057e1ec6273173e13512ef91f0d3d037aa2b958d8e namespace=k8s.io Jan 14 01:26:39.798801 containerd[1962]: time="2026-01-14T01:26:39.791608012Z" level=info msg="cleaning up dead shim" id=141312442e6f51403dcde0057e1ec6273173e13512ef91f0d3d037aa2b958d8e namespace=k8s.io Jan 14 01:26:39.864000 audit[5401]: NETFILTER_CFG table=filter:129 family=2 entries=20 op=nft_register_rule pid=5401 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:26:39.864000 audit[5401]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffdf0d308e0 a2=0 a3=7ffdf0d308cc items=0 ppid=3724 pid=5401 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:39.864000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:26:39.871000 audit[5401]: NETFILTER_CFG table=nat:130 family=2 entries=14 op=nft_register_rule pid=5401 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:26:39.871000 audit[5401]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffdf0d308e0 a2=0 a3=0 items=0 ppid=3724 pid=5401 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:39.871000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:26:39.903579 containerd[1962]: time="2026-01-14T01:26:39.900315686Z" level=info msg="received sandbox container exit event sandbox_id:\"141312442e6f51403dcde0057e1ec6273173e13512ef91f0d3d037aa2b958d8e\" exit_status:137 exited_at:{seconds:1768353999 nanos:694855446}" monitor_name=criService Jan 14 01:26:39.900835 systemd-networkd[1548]: calice29bf40714: Gained IPv6LL Jan 14 01:26:39.906878 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-141312442e6f51403dcde0057e1ec6273173e13512ef91f0d3d037aa2b958d8e-shm.mount: Deactivated successfully. 
Jan 14 01:26:39.951000 audit[5425]: NETFILTER_CFG table=filter:131 family=2 entries=20 op=nft_register_rule pid=5425 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:26:39.951000 audit[5425]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7fff7fa65ed0 a2=0 a3=7fff7fa65ebc items=0 ppid=3724 pid=5425 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:39.951000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:26:39.956099 kubelet[3571]: I0114 01:26:39.952329 3571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-k7zqh" podStartSLOduration=87.942446128 podStartE2EDuration="1m27.942446128s" podCreationTimestamp="2026-01-14 01:25:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-14 01:26:39.942011493 +0000 UTC m=+92.030299648" watchObservedRunningTime="2026-01-14 01:26:39.942446128 +0000 UTC m=+92.030734281" Jan 14 01:26:39.957000 audit[5425]: NETFILTER_CFG table=nat:132 family=2 entries=14 op=nft_register_rule pid=5425 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:26:39.957000 audit[5425]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7fff7fa65ed0 a2=0 a3=0 items=0 ppid=3724 pid=5425 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:39.957000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:26:39.998430 systemd-networkd[1548]: calif5b432813b4: Link DOWN Jan 14 01:26:39.998443 systemd-networkd[1548]: calif5b432813b4: Lost carrier Jan 14 01:26:40.135000 audit[5441]: NETFILTER_CFG table=filter:133 family=2 entries=63 op=nft_register_rule pid=5441 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 01:26:40.135000 audit[5441]: SYSCALL arch=c000003e syscall=46 success=yes exit=5796 a0=3 a1=7fffb90ff350 a2=0 a3=7fffb90ff33c items=0 ppid=4950 pid=5441 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:40.135000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 01:26:40.136000 audit[5441]: NETFILTER_CFG table=filter:134 family=2 entries=8 op=nft_unregister_chain pid=5441 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 01:26:40.136000 audit[5441]: SYSCALL arch=c000003e syscall=46 success=yes exit=1152 a0=3 a1=7fffb90ff350 a2=0 a3=55c1fee42000 items=0 ppid=4950 pid=5441 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:40.136000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 
01:26:40.161108 containerd[1962]: 2026-01-14 01:26:39.994 [INFO][5421] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="141312442e6f51403dcde0057e1ec6273173e13512ef91f0d3d037aa2b958d8e" Jan 14 01:26:40.161108 containerd[1962]: 2026-01-14 01:26:39.996 [INFO][5421] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="141312442e6f51403dcde0057e1ec6273173e13512ef91f0d3d037aa2b958d8e" iface="eth0" netns="/var/run/netns/cni-240f5cbb-7560-475f-4cec-f6f63b11080d" Jan 14 01:26:40.161108 containerd[1962]: 2026-01-14 01:26:39.996 [INFO][5421] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="141312442e6f51403dcde0057e1ec6273173e13512ef91f0d3d037aa2b958d8e" iface="eth0" netns="/var/run/netns/cni-240f5cbb-7560-475f-4cec-f6f63b11080d" Jan 14 01:26:40.161108 containerd[1962]: 2026-01-14 01:26:40.007 [INFO][5421] cni-plugin/dataplane_linux.go 604: Deleted device in netns. ContainerID="141312442e6f51403dcde0057e1ec6273173e13512ef91f0d3d037aa2b958d8e" after=11.173775ms iface="eth0" netns="/var/run/netns/cni-240f5cbb-7560-475f-4cec-f6f63b11080d" Jan 14 01:26:40.161108 containerd[1962]: 2026-01-14 01:26:40.008 [INFO][5421] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="141312442e6f51403dcde0057e1ec6273173e13512ef91f0d3d037aa2b958d8e" Jan 14 01:26:40.161108 containerd[1962]: 2026-01-14 01:26:40.008 [INFO][5421] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="141312442e6f51403dcde0057e1ec6273173e13512ef91f0d3d037aa2b958d8e" Jan 14 01:26:40.161108 containerd[1962]: 2026-01-14 01:26:40.084 [INFO][5431] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="141312442e6f51403dcde0057e1ec6273173e13512ef91f0d3d037aa2b958d8e" HandleID="k8s-pod-network.141312442e6f51403dcde0057e1ec6273173e13512ef91f0d3d037aa2b958d8e" Workload="ip--172--31--18--46-k8s-whisker--84845f84c5--f7vsl-eth0" Jan 14 01:26:40.161108 containerd[1962]: 2026-01-14 01:26:40.084 [INFO][5431] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 01:26:40.161108 containerd[1962]: 2026-01-14 01:26:40.084 [INFO][5431] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 14 01:26:40.161108 containerd[1962]: 2026-01-14 01:26:40.153 [INFO][5431] ipam/ipam_plugin.go 455: Released address using handleID ContainerID="141312442e6f51403dcde0057e1ec6273173e13512ef91f0d3d037aa2b958d8e" HandleID="k8s-pod-network.141312442e6f51403dcde0057e1ec6273173e13512ef91f0d3d037aa2b958d8e" Workload="ip--172--31--18--46-k8s-whisker--84845f84c5--f7vsl-eth0" Jan 14 01:26:40.161108 containerd[1962]: 2026-01-14 01:26:40.153 [INFO][5431] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="141312442e6f51403dcde0057e1ec6273173e13512ef91f0d3d037aa2b958d8e" HandleID="k8s-pod-network.141312442e6f51403dcde0057e1ec6273173e13512ef91f0d3d037aa2b958d8e" Workload="ip--172--31--18--46-k8s-whisker--84845f84c5--f7vsl-eth0" Jan 14 01:26:40.161108 containerd[1962]: 2026-01-14 01:26:40.156 [INFO][5431] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 14 01:26:40.161108 containerd[1962]: 2026-01-14 01:26:40.158 [INFO][5421] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="141312442e6f51403dcde0057e1ec6273173e13512ef91f0d3d037aa2b958d8e" Jan 14 01:26:40.164194 containerd[1962]: time="2026-01-14T01:26:40.163075819Z" level=info msg="TearDown network for sandbox \"141312442e6f51403dcde0057e1ec6273173e13512ef91f0d3d037aa2b958d8e\" successfully" Jan 14 01:26:40.164194 containerd[1962]: time="2026-01-14T01:26:40.163344089Z" level=info msg="StopPodSandbox for \"141312442e6f51403dcde0057e1ec6273173e13512ef91f0d3d037aa2b958d8e\" returns successfully" Jan 14 01:26:40.165401 systemd[1]: run-netns-cni\x2d240f5cbb\x2d7560\x2d475f\x2d4cec\x2df6f63b11080d.mount: Deactivated successfully. Jan 14 01:26:40.319921 kubelet[3571]: I0114 01:26:40.319873 3571 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/4fe5960b-c32f-4a52-8f07-a5dac29b6214-whisker-backend-key-pair\") pod \"4fe5960b-c32f-4a52-8f07-a5dac29b6214\" (UID: \"4fe5960b-c32f-4a52-8f07-a5dac29b6214\") " Jan 14 01:26:40.319921 kubelet[3571]: I0114 01:26:40.319938 3571 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5zsbb\" (UniqueName: \"kubernetes.io/projected/4fe5960b-c32f-4a52-8f07-a5dac29b6214-kube-api-access-5zsbb\") pod \"4fe5960b-c32f-4a52-8f07-a5dac29b6214\" (UID: \"4fe5960b-c32f-4a52-8f07-a5dac29b6214\") " Jan 14 01:26:40.319921 kubelet[3571]: I0114 01:26:40.319976 3571 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4fe5960b-c32f-4a52-8f07-a5dac29b6214-whisker-ca-bundle\") pod \"4fe5960b-c32f-4a52-8f07-a5dac29b6214\" (UID: \"4fe5960b-c32f-4a52-8f07-a5dac29b6214\") " Jan 14 01:26:40.327572 kubelet[3571]: I0114 01:26:40.322615 3571 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4fe5960b-c32f-4a52-8f07-a5dac29b6214-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "4fe5960b-c32f-4a52-8f07-a5dac29b6214" (UID: "4fe5960b-c32f-4a52-8f07-a5dac29b6214"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 14 01:26:40.328805 kubelet[3571]: I0114 01:26:40.328738 3571 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4fe5960b-c32f-4a52-8f07-a5dac29b6214-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "4fe5960b-c32f-4a52-8f07-a5dac29b6214" (UID: "4fe5960b-c32f-4a52-8f07-a5dac29b6214"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 14 01:26:40.328904 systemd[1]: var-lib-kubelet-pods-4fe5960b\x2dc32f\x2d4a52\x2d8f07\x2da5dac29b6214-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jan 14 01:26:40.333194 systemd[1]: var-lib-kubelet-pods-4fe5960b\x2dc32f\x2d4a52\x2d8f07\x2da5dac29b6214-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d5zsbb.mount: Deactivated successfully. Jan 14 01:26:40.340169 kubelet[3571]: I0114 01:26:40.340106 3571 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4fe5960b-c32f-4a52-8f07-a5dac29b6214-kube-api-access-5zsbb" (OuterVolumeSpecName: "kube-api-access-5zsbb") pod "4fe5960b-c32f-4a52-8f07-a5dac29b6214" (UID: "4fe5960b-c32f-4a52-8f07-a5dac29b6214"). InnerVolumeSpecName "kube-api-access-5zsbb". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 14 01:26:40.421702 kubelet[3571]: I0114 01:26:40.421242 3571 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4fe5960b-c32f-4a52-8f07-a5dac29b6214-whisker-ca-bundle\") on node \"ip-172-31-18-46\" DevicePath \"\"" Jan 14 01:26:40.421702 kubelet[3571]: I0114 01:26:40.421288 3571 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/4fe5960b-c32f-4a52-8f07-a5dac29b6214-whisker-backend-key-pair\") on node \"ip-172-31-18-46\" DevicePath \"\"" Jan 14 01:26:40.421702 kubelet[3571]: I0114 01:26:40.421300 3571 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5zsbb\" (UniqueName: \"kubernetes.io/projected/4fe5960b-c32f-4a52-8f07-a5dac29b6214-kube-api-access-5zsbb\") on node \"ip-172-31-18-46\" DevicePath \"\"" Jan 14 01:26:40.677289 systemd[1]: Removed slice kubepods-besteffort-pod4fe5960b_c32f_4a52_8f07_a5dac29b6214.slice - libcontainer container kubepods-besteffort-pod4fe5960b_c32f_4a52_8f07_a5dac29b6214.slice. Jan 14 01:26:40.828181 kubelet[3571]: I0114 01:26:40.828115 3571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/47e80bc5-cd92-40b2-a579-aec98175c1b6-whisker-backend-key-pair\") pod \"whisker-54d55768dc-v7mtb\" (UID: \"47e80bc5-cd92-40b2-a579-aec98175c1b6\") " pod="calico-system/whisker-54d55768dc-v7mtb" Jan 14 01:26:40.828181 kubelet[3571]: I0114 01:26:40.828167 3571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9czx\" (UniqueName: \"kubernetes.io/projected/47e80bc5-cd92-40b2-a579-aec98175c1b6-kube-api-access-f9czx\") pod \"whisker-54d55768dc-v7mtb\" (UID: \"47e80bc5-cd92-40b2-a579-aec98175c1b6\") " pod="calico-system/whisker-54d55768dc-v7mtb" Jan 14 01:26:40.828392 kubelet[3571]: I0114 01:26:40.828199 3571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/47e80bc5-cd92-40b2-a579-aec98175c1b6-whisker-ca-bundle\") pod \"whisker-54d55768dc-v7mtb\" (UID: \"47e80bc5-cd92-40b2-a579-aec98175c1b6\") " pod="calico-system/whisker-54d55768dc-v7mtb" Jan 14 01:26:40.834160 systemd[1]: Created slice kubepods-besteffort-pod47e80bc5_cd92_40b2_a579_aec98175c1b6.slice - libcontainer container kubepods-besteffort-pod47e80bc5_cd92_40b2_a579_aec98175c1b6.slice. 
Jan 14 01:26:40.970000 audit[5449]: NETFILTER_CFG table=filter:135 family=2 entries=20 op=nft_register_rule pid=5449 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:26:40.970000 audit[5449]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffcab3d2720 a2=0 a3=7ffcab3d270c items=0 ppid=3724 pid=5449 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:40.970000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:26:40.976000 audit[5449]: NETFILTER_CFG table=nat:136 family=2 entries=14 op=nft_register_rule pid=5449 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:26:40.976000 audit[5449]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffcab3d2720 a2=0 a3=0 items=0 ppid=3724 pid=5449 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:40.976000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:26:41.140821 containerd[1962]: time="2026-01-14T01:26:41.140637432Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-54d55768dc-v7mtb,Uid:47e80bc5-cd92-40b2-a579-aec98175c1b6,Namespace:calico-system,Attempt:0,}" Jan 14 01:26:41.319724 systemd-networkd[1548]: calie923709a816: Link UP Jan 14 01:26:41.321008 systemd-networkd[1548]: calie923709a816: Gained carrier Jan 14 01:26:41.346966 containerd[1962]: 2026-01-14 01:26:41.194 [INFO][5450] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--18--46-k8s-whisker--54d55768dc--v7mtb-eth0 whisker-54d55768dc- calico-system 47e80bc5-cd92-40b2-a579-aec98175c1b6 1090 0 2026-01-14 01:26:40 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:54d55768dc projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ip-172-31-18-46 whisker-54d55768dc-v7mtb eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calie923709a816 [] [] }} ContainerID="31659db547a3a2a53ac26fa110126b2475e20562d2857376254c98326974efe8" Namespace="calico-system" Pod="whisker-54d55768dc-v7mtb" WorkloadEndpoint="ip--172--31--18--46-k8s-whisker--54d55768dc--v7mtb-" Jan 14 01:26:41.346966 containerd[1962]: 2026-01-14 01:26:41.194 [INFO][5450] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="31659db547a3a2a53ac26fa110126b2475e20562d2857376254c98326974efe8" Namespace="calico-system" Pod="whisker-54d55768dc-v7mtb" WorkloadEndpoint="ip--172--31--18--46-k8s-whisker--54d55768dc--v7mtb-eth0" Jan 14 01:26:41.346966 containerd[1962]: 2026-01-14 01:26:41.248 [INFO][5462] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="31659db547a3a2a53ac26fa110126b2475e20562d2857376254c98326974efe8" HandleID="k8s-pod-network.31659db547a3a2a53ac26fa110126b2475e20562d2857376254c98326974efe8" Workload="ip--172--31--18--46-k8s-whisker--54d55768dc--v7mtb-eth0" Jan 14 01:26:41.346966 containerd[1962]: 2026-01-14 01:26:41.248 [INFO][5462] ipam/ipam_plugin.go 275: Auto assigning IP 
ContainerID="31659db547a3a2a53ac26fa110126b2475e20562d2857376254c98326974efe8" HandleID="k8s-pod-network.31659db547a3a2a53ac26fa110126b2475e20562d2857376254c98326974efe8" Workload="ip--172--31--18--46-k8s-whisker--54d55768dc--v7mtb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cb1a0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-18-46", "pod":"whisker-54d55768dc-v7mtb", "timestamp":"2026-01-14 01:26:41.24807072 +0000 UTC"}, Hostname:"ip-172-31-18-46", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 01:26:41.346966 containerd[1962]: 2026-01-14 01:26:41.248 [INFO][5462] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 01:26:41.346966 containerd[1962]: 2026-01-14 01:26:41.248 [INFO][5462] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 14 01:26:41.346966 containerd[1962]: 2026-01-14 01:26:41.248 [INFO][5462] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-18-46' Jan 14 01:26:41.346966 containerd[1962]: 2026-01-14 01:26:41.256 [INFO][5462] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.31659db547a3a2a53ac26fa110126b2475e20562d2857376254c98326974efe8" host="ip-172-31-18-46" Jan 14 01:26:41.346966 containerd[1962]: 2026-01-14 01:26:41.268 [INFO][5462] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-18-46" Jan 14 01:26:41.346966 containerd[1962]: 2026-01-14 01:26:41.281 [INFO][5462] ipam/ipam.go 511: Trying affinity for 192.168.5.192/26 host="ip-172-31-18-46" Jan 14 01:26:41.346966 containerd[1962]: 2026-01-14 01:26:41.284 [INFO][5462] ipam/ipam.go 158: Attempting to load block cidr=192.168.5.192/26 host="ip-172-31-18-46" Jan 14 01:26:41.346966 containerd[1962]: 2026-01-14 01:26:41.287 [INFO][5462] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.5.192/26 host="ip-172-31-18-46" Jan 14 01:26:41.346966 containerd[1962]: 2026-01-14 01:26:41.287 [INFO][5462] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.5.192/26 handle="k8s-pod-network.31659db547a3a2a53ac26fa110126b2475e20562d2857376254c98326974efe8" host="ip-172-31-18-46" Jan 14 01:26:41.346966 containerd[1962]: 2026-01-14 01:26:41.290 [INFO][5462] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.31659db547a3a2a53ac26fa110126b2475e20562d2857376254c98326974efe8 Jan 14 01:26:41.346966 containerd[1962]: 2026-01-14 01:26:41.298 [INFO][5462] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.5.192/26 handle="k8s-pod-network.31659db547a3a2a53ac26fa110126b2475e20562d2857376254c98326974efe8" host="ip-172-31-18-46" Jan 14 01:26:41.346966 containerd[1962]: 2026-01-14 01:26:41.308 [INFO][5462] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.5.197/26] block=192.168.5.192/26 handle="k8s-pod-network.31659db547a3a2a53ac26fa110126b2475e20562d2857376254c98326974efe8" host="ip-172-31-18-46" Jan 14 01:26:41.346966 containerd[1962]: 2026-01-14 01:26:41.309 [INFO][5462] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.5.197/26] handle="k8s-pod-network.31659db547a3a2a53ac26fa110126b2475e20562d2857376254c98326974efe8" host="ip-172-31-18-46" Jan 14 01:26:41.346966 containerd[1962]: 2026-01-14 01:26:41.309 [INFO][5462] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 14 01:26:41.346966 containerd[1962]: 2026-01-14 01:26:41.309 [INFO][5462] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.5.197/26] IPv6=[] ContainerID="31659db547a3a2a53ac26fa110126b2475e20562d2857376254c98326974efe8" HandleID="k8s-pod-network.31659db547a3a2a53ac26fa110126b2475e20562d2857376254c98326974efe8" Workload="ip--172--31--18--46-k8s-whisker--54d55768dc--v7mtb-eth0" Jan 14 01:26:41.348390 containerd[1962]: 2026-01-14 01:26:41.313 [INFO][5450] cni-plugin/k8s.go 418: Populated endpoint ContainerID="31659db547a3a2a53ac26fa110126b2475e20562d2857376254c98326974efe8" Namespace="calico-system" Pod="whisker-54d55768dc-v7mtb" WorkloadEndpoint="ip--172--31--18--46-k8s-whisker--54d55768dc--v7mtb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--46-k8s-whisker--54d55768dc--v7mtb-eth0", GenerateName:"whisker-54d55768dc-", Namespace:"calico-system", SelfLink:"", UID:"47e80bc5-cd92-40b2-a579-aec98175c1b6", ResourceVersion:"1090", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 26, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"54d55768dc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-46", ContainerID:"", Pod:"whisker-54d55768dc-v7mtb", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.5.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calie923709a816", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:26:41.348390 containerd[1962]: 2026-01-14 01:26:41.313 [INFO][5450] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.5.197/32] ContainerID="31659db547a3a2a53ac26fa110126b2475e20562d2857376254c98326974efe8" Namespace="calico-system" Pod="whisker-54d55768dc-v7mtb" WorkloadEndpoint="ip--172--31--18--46-k8s-whisker--54d55768dc--v7mtb-eth0" Jan 14 01:26:41.348390 containerd[1962]: 2026-01-14 01:26:41.313 [INFO][5450] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie923709a816 ContainerID="31659db547a3a2a53ac26fa110126b2475e20562d2857376254c98326974efe8" Namespace="calico-system" Pod="whisker-54d55768dc-v7mtb" WorkloadEndpoint="ip--172--31--18--46-k8s-whisker--54d55768dc--v7mtb-eth0" Jan 14 01:26:41.348390 containerd[1962]: 2026-01-14 01:26:41.317 [INFO][5450] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="31659db547a3a2a53ac26fa110126b2475e20562d2857376254c98326974efe8" Namespace="calico-system" Pod="whisker-54d55768dc-v7mtb" WorkloadEndpoint="ip--172--31--18--46-k8s-whisker--54d55768dc--v7mtb-eth0" Jan 14 01:26:41.348390 containerd[1962]: 2026-01-14 01:26:41.318 [INFO][5450] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="31659db547a3a2a53ac26fa110126b2475e20562d2857376254c98326974efe8" Namespace="calico-system" Pod="whisker-54d55768dc-v7mtb" 
WorkloadEndpoint="ip--172--31--18--46-k8s-whisker--54d55768dc--v7mtb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--46-k8s-whisker--54d55768dc--v7mtb-eth0", GenerateName:"whisker-54d55768dc-", Namespace:"calico-system", SelfLink:"", UID:"47e80bc5-cd92-40b2-a579-aec98175c1b6", ResourceVersion:"1090", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 26, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"54d55768dc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-46", ContainerID:"31659db547a3a2a53ac26fa110126b2475e20562d2857376254c98326974efe8", Pod:"whisker-54d55768dc-v7mtb", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.5.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calie923709a816", MAC:"f6:be:cb:6e:d0:6d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:26:41.348390 containerd[1962]: 2026-01-14 01:26:41.336 [INFO][5450] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="31659db547a3a2a53ac26fa110126b2475e20562d2857376254c98326974efe8" Namespace="calico-system" Pod="whisker-54d55768dc-v7mtb" WorkloadEndpoint="ip--172--31--18--46-k8s-whisker--54d55768dc--v7mtb-eth0" Jan 14 01:26:41.377101 containerd[1962]: time="2026-01-14T01:26:41.376626550Z" level=info msg="connecting to shim 31659db547a3a2a53ac26fa110126b2475e20562d2857376254c98326974efe8" address="unix:///run/containerd/s/6aaea4a310b7f65ea453dfafcba5a9a9461b700035fa132544f3f5467f986a9c" namespace=k8s.io protocol=ttrpc version=3 Jan 14 01:26:41.426125 systemd[1]: Started cri-containerd-31659db547a3a2a53ac26fa110126b2475e20562d2857376254c98326974efe8.scope - libcontainer container 31659db547a3a2a53ac26fa110126b2475e20562d2857376254c98326974efe8. 
Jan 14 01:26:41.450066 kernel: kauditd_printk_skb: 336 callbacks suppressed Jan 14 01:26:41.450189 kernel: audit: type=1334 audit(1768354001.445:713): prog-id=238 op=LOAD Jan 14 01:26:41.445000 audit: BPF prog-id=238 op=LOAD Jan 14 01:26:41.449000 audit: BPF prog-id=239 op=LOAD Jan 14 01:26:41.460066 kernel: audit: type=1334 audit(1768354001.449:714): prog-id=239 op=LOAD Jan 14 01:26:41.460270 kernel: audit: type=1300 audit(1768354001.449:714): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=5489 pid=5501 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:41.449000 audit[5501]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=5489 pid=5501 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:41.467639 kernel: audit: type=1327 audit(1768354001.449:714): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3331363539646235343761336132613533616332366661313130313236 Jan 14 01:26:41.449000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3331363539646235343761336132613533616332366661313130313236 Jan 14 01:26:41.449000 audit: BPF prog-id=239 op=UNLOAD Jan 14 01:26:41.478762 kernel: audit: type=1334 audit(1768354001.449:715): prog-id=239 op=UNLOAD Jan 14 01:26:41.478911 kernel: audit: type=1300 audit(1768354001.449:715): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5489 pid=5501 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:41.449000 audit[5501]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5489 pid=5501 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:41.449000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3331363539646235343761336132613533616332366661313130313236 Jan 14 01:26:41.486639 kernel: audit: type=1327 audit(1768354001.449:715): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3331363539646235343761336132613533616332366661313130313236 Jan 14 01:26:41.449000 audit: BPF prog-id=240 op=LOAD Jan 14 01:26:41.449000 audit[5501]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=5489 pid=5501 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:41.493566 
kernel: audit: type=1334 audit(1768354001.449:716): prog-id=240 op=LOAD Jan 14 01:26:41.493659 kernel: audit: type=1300 audit(1768354001.449:716): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=5489 pid=5501 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:41.449000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3331363539646235343761336132613533616332366661313130313236 Jan 14 01:26:41.508905 kernel: audit: type=1327 audit(1768354001.449:716): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3331363539646235343761336132613533616332366661313130313236 Jan 14 01:26:41.449000 audit: BPF prog-id=241 op=LOAD Jan 14 01:26:41.449000 audit[5501]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=5489 pid=5501 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:41.449000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3331363539646235343761336132613533616332366661313130313236 Jan 14 01:26:41.449000 audit: BPF prog-id=241 op=UNLOAD Jan 14 01:26:41.449000 audit[5501]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5489 pid=5501 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:41.449000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3331363539646235343761336132613533616332366661313130313236 Jan 14 01:26:41.449000 audit: BPF prog-id=240 op=UNLOAD Jan 14 01:26:41.449000 audit[5501]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5489 pid=5501 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:41.449000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3331363539646235343761336132613533616332366661313130313236 Jan 14 01:26:41.449000 audit: BPF prog-id=242 op=LOAD Jan 14 01:26:41.449000 audit[5501]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=5489 pid=5501 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:41.449000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3331363539646235343761336132613533616332366661313130313236 Jan 14 01:26:41.608000 audit[5521]: NETFILTER_CFG table=filter:137 family=2 entries=71 op=nft_register_chain pid=5521 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 01:26:41.608000 audit[5521]: SYSCALL arch=c000003e syscall=46 success=yes exit=39424 a0=3 a1=7ffdce94ac90 a2=0 a3=7ffdce94ac7c items=0 ppid=4950 pid=5521 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:41.608000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 01:26:41.612531 containerd[1962]: time="2026-01-14T01:26:41.612497382Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-54d55768dc-v7mtb,Uid:47e80bc5-cd92-40b2-a579-aec98175c1b6,Namespace:calico-system,Attempt:0,} returns sandbox id \"31659db547a3a2a53ac26fa110126b2475e20562d2857376254c98326974efe8\"" Jan 14 01:26:41.616898 containerd[1962]: time="2026-01-14T01:26:41.616509555Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 14 01:26:41.922678 containerd[1962]: time="2026-01-14T01:26:41.922519234Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:26:41.923863 containerd[1962]: time="2026-01-14T01:26:41.923806831Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 14 01:26:41.924375 containerd[1962]: time="2026-01-14T01:26:41.923895927Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 14 01:26:41.924426 kubelet[3571]: E0114 01:26:41.924078 3571 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 01:26:41.924426 kubelet[3571]: E0114 01:26:41.924124 3571 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 01:26:41.924426 kubelet[3571]: E0114 01:26:41.924250 3571 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:b81ad817aa184d10b98d3fd4131cf440,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-f9czx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-54d55768dc-v7mtb_calico-system(47e80bc5-cd92-40b2-a579-aec98175c1b6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 14 01:26:41.926962 containerd[1962]: time="2026-01-14T01:26:41.926934089Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 14 01:26:42.143381 kubelet[3571]: I0114 01:26:42.143300 3571 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4fe5960b-c32f-4a52-8f07-a5dac29b6214" path="/var/lib/kubelet/pods/4fe5960b-c32f-4a52-8f07-a5dac29b6214/volumes" Jan 14 01:26:42.215405 containerd[1962]: time="2026-01-14T01:26:42.215167549Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:26:42.217976 containerd[1962]: time="2026-01-14T01:26:42.217413011Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 14 01:26:42.217976 containerd[1962]: time="2026-01-14T01:26:42.217642633Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 14 01:26:42.218564 kubelet[3571]: E0114 01:26:42.217841 3571 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 01:26:42.218564 kubelet[3571]: E0114 01:26:42.217894 3571 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 01:26:42.218564 kubelet[3571]: E0114 01:26:42.218067 3571 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f9czx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-54d55768dc-v7mtb_calico-system(47e80bc5-cd92-40b2-a579-aec98175c1b6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 14 01:26:42.219644 kubelet[3571]: E0114 01:26:42.219599 3571 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-54d55768dc-v7mtb" podUID="47e80bc5-cd92-40b2-a579-aec98175c1b6" Jan 14 01:26:42.396724 systemd-networkd[1548]: calie923709a816: Gained IPv6LL Jan 14 01:26:42.511000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-172.31.18.46:22-4.153.228.146:45844 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:26:42.512918 systemd[1]: Started sshd@7-172.31.18.46:22-4.153.228.146:45844.service - OpenSSH per-connection server daemon (4.153.228.146:45844). Jan 14 01:26:42.676715 kubelet[3571]: E0114 01:26:42.676674 3571 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-54d55768dc-v7mtb" podUID="47e80bc5-cd92-40b2-a579-aec98175c1b6" Jan 14 01:26:42.711000 audit[5545]: NETFILTER_CFG table=filter:138 family=2 entries=20 op=nft_register_rule pid=5545 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:26:42.711000 audit[5545]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffc21f66230 a2=0 a3=7ffc21f6621c items=0 ppid=3724 pid=5545 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:42.711000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:26:42.716000 audit[5545]: NETFILTER_CFG table=nat:139 family=2 entries=14 op=nft_register_rule pid=5545 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:26:42.716000 audit[5545]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffc21f66230 a2=0 a3=0 items=0 ppid=3724 pid=5545 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:42.716000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:26:43.023000 audit[5541]: USER_ACCT pid=5541 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:26:43.025840 sshd[5541]: Accepted publickey for core from 4.153.228.146 port 45844 ssh2: RSA SHA256:ES3aJcA+M+pl5u1hk2HWRqxW4DXd1pPYtNeRk1B3mrI Jan 14 01:26:43.025000 audit[5541]: CRED_ACQ pid=5541 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:26:43.025000 audit[5541]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffbb3cbdf0 a2=3 a3=0 items=0 ppid=1 pid=5541 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 14 01:26:43.025000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:26:43.028692 sshd-session[5541]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:26:43.035833 systemd-logind[1928]: New session 9 of user core. Jan 14 01:26:43.045866 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 14 01:26:43.048000 audit[5541]: USER_START pid=5541 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:26:43.050000 audit[5547]: CRED_ACQ pid=5547 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:26:43.976039 sshd[5547]: Connection closed by 4.153.228.146 port 45844 Jan 14 01:26:43.976810 sshd-session[5541]: pam_unix(sshd:session): session closed for user core Jan 14 01:26:43.979000 audit[5541]: USER_END pid=5541 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:26:43.979000 audit[5541]: CRED_DISP pid=5541 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:26:43.985515 systemd[1]: sshd@7-172.31.18.46:22-4.153.228.146:45844.service: Deactivated successfully. Jan 14 01:26:43.986000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-172.31.18.46:22-4.153.228.146:45844 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:26:43.991445 systemd[1]: session-9.scope: Deactivated successfully. Jan 14 01:26:43.997187 systemd-logind[1928]: Session 9 logged out. Waiting for processes to exit. Jan 14 01:26:43.999320 systemd-logind[1928]: Removed session 9. Jan 14 01:26:44.132032 containerd[1962]: time="2026-01-14T01:26:44.131231582Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zrh2z,Uid:74b84cdc-323d-4b42-b95a-ceec7dfaa40f,Namespace:calico-system,Attempt:0,}" Jan 14 01:26:44.371445 systemd-networkd[1548]: calied2ea6665c6: Link UP Jan 14 01:26:44.374444 systemd-networkd[1548]: calied2ea6665c6: Gained carrier Jan 14 01:26:44.383069 (udev-worker)[5582]: Network interface NamePolicy= disabled on kernel command line. 
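The whisker and whisker-backend pulls above fail with 404 Not Found for ghcr.io/flatcar/calico/whisker:v3.30.4 and ghcr.io/flatcar/calico/whisker-backend:v3.30.4, and kubelet moves the pod into ImagePullBackOff. A rough way to reproduce the manifest lookup from a workstation, assuming ghcr.io follows the standard Docker Registry v2 anonymous-token flow for public images (a sketch, not the exact request containerd issues):

    import json, urllib.error, urllib.request

    repo, tag = "flatcar/calico/whisker", "v3.30.4"   # taken from the PullImage errors above

    token = json.load(urllib.request.urlopen(
        f"https://ghcr.io/token?service=ghcr.io&scope=repository:{repo}:pull"))["token"]

    req = urllib.request.Request(
        f"https://ghcr.io/v2/{repo}/manifests/{tag}",
        headers={
            "Authorization": f"Bearer {token}",
            "Accept": "application/vnd.oci.image.index.v1+json, "
                      "application/vnd.docker.distribution.manifest.list.v2+json",
        })
    try:
        print(urllib.request.urlopen(req).status)   # 200 would mean the tag resolves
    except urllib.error.HTTPError as err:
        print(err.code)                             # containerd saw 404 here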
Jan 14 01:26:44.429593 containerd[1962]: 2026-01-14 01:26:44.248 [INFO][5564] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--18--46-k8s-csi--node--driver--zrh2z-eth0 csi-node-driver- calico-system 74b84cdc-323d-4b42-b95a-ceec7dfaa40f 822 0 2026-01-14 01:26:05 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-18-46 csi-node-driver-zrh2z eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calied2ea6665c6 [] [] }} ContainerID="2f62c8eb1b1641fc798d6f3253ae56ae67dedacff12291163899ffb8bac70179" Namespace="calico-system" Pod="csi-node-driver-zrh2z" WorkloadEndpoint="ip--172--31--18--46-k8s-csi--node--driver--zrh2z-" Jan 14 01:26:44.429593 containerd[1962]: 2026-01-14 01:26:44.248 [INFO][5564] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2f62c8eb1b1641fc798d6f3253ae56ae67dedacff12291163899ffb8bac70179" Namespace="calico-system" Pod="csi-node-driver-zrh2z" WorkloadEndpoint="ip--172--31--18--46-k8s-csi--node--driver--zrh2z-eth0" Jan 14 01:26:44.429593 containerd[1962]: 2026-01-14 01:26:44.290 [INFO][5575] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2f62c8eb1b1641fc798d6f3253ae56ae67dedacff12291163899ffb8bac70179" HandleID="k8s-pod-network.2f62c8eb1b1641fc798d6f3253ae56ae67dedacff12291163899ffb8bac70179" Workload="ip--172--31--18--46-k8s-csi--node--driver--zrh2z-eth0" Jan 14 01:26:44.429593 containerd[1962]: 2026-01-14 01:26:44.290 [INFO][5575] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="2f62c8eb1b1641fc798d6f3253ae56ae67dedacff12291163899ffb8bac70179" HandleID="k8s-pod-network.2f62c8eb1b1641fc798d6f3253ae56ae67dedacff12291163899ffb8bac70179" Workload="ip--172--31--18--46-k8s-csi--node--driver--zrh2z-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002b73a0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-18-46", "pod":"csi-node-driver-zrh2z", "timestamp":"2026-01-14 01:26:44.290563192 +0000 UTC"}, Hostname:"ip-172-31-18-46", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 01:26:44.429593 containerd[1962]: 2026-01-14 01:26:44.290 [INFO][5575] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 01:26:44.429593 containerd[1962]: 2026-01-14 01:26:44.290 [INFO][5575] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 14 01:26:44.429593 containerd[1962]: 2026-01-14 01:26:44.290 [INFO][5575] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-18-46' Jan 14 01:26:44.429593 containerd[1962]: 2026-01-14 01:26:44.300 [INFO][5575] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2f62c8eb1b1641fc798d6f3253ae56ae67dedacff12291163899ffb8bac70179" host="ip-172-31-18-46" Jan 14 01:26:44.429593 containerd[1962]: 2026-01-14 01:26:44.314 [INFO][5575] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-18-46" Jan 14 01:26:44.429593 containerd[1962]: 2026-01-14 01:26:44.331 [INFO][5575] ipam/ipam.go 511: Trying affinity for 192.168.5.192/26 host="ip-172-31-18-46" Jan 14 01:26:44.429593 containerd[1962]: 2026-01-14 01:26:44.335 [INFO][5575] ipam/ipam.go 158: Attempting to load block cidr=192.168.5.192/26 host="ip-172-31-18-46" Jan 14 01:26:44.429593 containerd[1962]: 2026-01-14 01:26:44.338 [INFO][5575] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.5.192/26 host="ip-172-31-18-46" Jan 14 01:26:44.429593 containerd[1962]: 2026-01-14 01:26:44.339 [INFO][5575] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.5.192/26 handle="k8s-pod-network.2f62c8eb1b1641fc798d6f3253ae56ae67dedacff12291163899ffb8bac70179" host="ip-172-31-18-46" Jan 14 01:26:44.429593 containerd[1962]: 2026-01-14 01:26:44.341 [INFO][5575] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.2f62c8eb1b1641fc798d6f3253ae56ae67dedacff12291163899ffb8bac70179 Jan 14 01:26:44.429593 containerd[1962]: 2026-01-14 01:26:44.348 [INFO][5575] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.5.192/26 handle="k8s-pod-network.2f62c8eb1b1641fc798d6f3253ae56ae67dedacff12291163899ffb8bac70179" host="ip-172-31-18-46" Jan 14 01:26:44.429593 containerd[1962]: 2026-01-14 01:26:44.358 [INFO][5575] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.5.198/26] block=192.168.5.192/26 handle="k8s-pod-network.2f62c8eb1b1641fc798d6f3253ae56ae67dedacff12291163899ffb8bac70179" host="ip-172-31-18-46" Jan 14 01:26:44.429593 containerd[1962]: 2026-01-14 01:26:44.359 [INFO][5575] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.5.198/26] handle="k8s-pod-network.2f62c8eb1b1641fc798d6f3253ae56ae67dedacff12291163899ffb8bac70179" host="ip-172-31-18-46" Jan 14 01:26:44.429593 containerd[1962]: 2026-01-14 01:26:44.359 [INFO][5575] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 14 01:26:44.429593 containerd[1962]: 2026-01-14 01:26:44.359 [INFO][5575] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.5.198/26] IPv6=[] ContainerID="2f62c8eb1b1641fc798d6f3253ae56ae67dedacff12291163899ffb8bac70179" HandleID="k8s-pod-network.2f62c8eb1b1641fc798d6f3253ae56ae67dedacff12291163899ffb8bac70179" Workload="ip--172--31--18--46-k8s-csi--node--driver--zrh2z-eth0" Jan 14 01:26:44.430529 containerd[1962]: 2026-01-14 01:26:44.363 [INFO][5564] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2f62c8eb1b1641fc798d6f3253ae56ae67dedacff12291163899ffb8bac70179" Namespace="calico-system" Pod="csi-node-driver-zrh2z" WorkloadEndpoint="ip--172--31--18--46-k8s-csi--node--driver--zrh2z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--46-k8s-csi--node--driver--zrh2z-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"74b84cdc-323d-4b42-b95a-ceec7dfaa40f", ResourceVersion:"822", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 26, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-46", ContainerID:"", Pod:"csi-node-driver-zrh2z", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.5.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calied2ea6665c6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:26:44.430529 containerd[1962]: 2026-01-14 01:26:44.364 [INFO][5564] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.5.198/32] ContainerID="2f62c8eb1b1641fc798d6f3253ae56ae67dedacff12291163899ffb8bac70179" Namespace="calico-system" Pod="csi-node-driver-zrh2z" WorkloadEndpoint="ip--172--31--18--46-k8s-csi--node--driver--zrh2z-eth0" Jan 14 01:26:44.430529 containerd[1962]: 2026-01-14 01:26:44.364 [INFO][5564] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calied2ea6665c6 ContainerID="2f62c8eb1b1641fc798d6f3253ae56ae67dedacff12291163899ffb8bac70179" Namespace="calico-system" Pod="csi-node-driver-zrh2z" WorkloadEndpoint="ip--172--31--18--46-k8s-csi--node--driver--zrh2z-eth0" Jan 14 01:26:44.430529 containerd[1962]: 2026-01-14 01:26:44.376 [INFO][5564] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2f62c8eb1b1641fc798d6f3253ae56ae67dedacff12291163899ffb8bac70179" Namespace="calico-system" Pod="csi-node-driver-zrh2z" WorkloadEndpoint="ip--172--31--18--46-k8s-csi--node--driver--zrh2z-eth0" Jan 14 01:26:44.430529 containerd[1962]: 2026-01-14 01:26:44.377 [INFO][5564] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2f62c8eb1b1641fc798d6f3253ae56ae67dedacff12291163899ffb8bac70179" 
Namespace="calico-system" Pod="csi-node-driver-zrh2z" WorkloadEndpoint="ip--172--31--18--46-k8s-csi--node--driver--zrh2z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--46-k8s-csi--node--driver--zrh2z-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"74b84cdc-323d-4b42-b95a-ceec7dfaa40f", ResourceVersion:"822", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 26, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-46", ContainerID:"2f62c8eb1b1641fc798d6f3253ae56ae67dedacff12291163899ffb8bac70179", Pod:"csi-node-driver-zrh2z", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.5.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calied2ea6665c6", MAC:"02:c3:b8:8f:e6:bc", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:26:44.430529 containerd[1962]: 2026-01-14 01:26:44.422 [INFO][5564] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2f62c8eb1b1641fc798d6f3253ae56ae67dedacff12291163899ffb8bac70179" Namespace="calico-system" Pod="csi-node-driver-zrh2z" WorkloadEndpoint="ip--172--31--18--46-k8s-csi--node--driver--zrh2z-eth0" Jan 14 01:26:44.513773 containerd[1962]: time="2026-01-14T01:26:44.513673701Z" level=info msg="connecting to shim 2f62c8eb1b1641fc798d6f3253ae56ae67dedacff12291163899ffb8bac70179" address="unix:///run/containerd/s/bd9941d9bc153442b4bbaafbe8563799541a4ed90b9acaf47cbdecf8d0f8f6d1" namespace=k8s.io protocol=ttrpc version=3 Jan 14 01:26:44.626068 systemd[1]: Started cri-containerd-2f62c8eb1b1641fc798d6f3253ae56ae67dedacff12291163899ffb8bac70179.scope - libcontainer container 2f62c8eb1b1641fc798d6f3253ae56ae67dedacff12291163899ffb8bac70179. 
Jan 14 01:26:44.662000 audit[5629]: NETFILTER_CFG table=filter:140 family=2 entries=54 op=nft_register_chain pid=5629 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 01:26:44.662000 audit[5629]: SYSCALL arch=c000003e syscall=46 success=yes exit=25992 a0=3 a1=7ffce27f02b0 a2=0 a3=7ffce27f029c items=0 ppid=4950 pid=5629 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:44.662000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 01:26:44.667000 audit: BPF prog-id=243 op=LOAD Jan 14 01:26:44.668000 audit: BPF prog-id=244 op=LOAD Jan 14 01:26:44.668000 audit[5609]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000128238 a2=98 a3=0 items=0 ppid=5598 pid=5609 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:44.668000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3266363263386562316231363431666337393864366633323533616535 Jan 14 01:26:44.668000 audit: BPF prog-id=244 op=UNLOAD Jan 14 01:26:44.668000 audit[5609]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=5598 pid=5609 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:44.668000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3266363263386562316231363431666337393864366633323533616535 Jan 14 01:26:44.669000 audit: BPF prog-id=245 op=LOAD Jan 14 01:26:44.669000 audit[5609]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000128488 a2=98 a3=0 items=0 ppid=5598 pid=5609 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:44.669000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3266363263386562316231363431666337393864366633323533616535 Jan 14 01:26:44.669000 audit: BPF prog-id=246 op=LOAD Jan 14 01:26:44.669000 audit[5609]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000128218 a2=98 a3=0 items=0 ppid=5598 pid=5609 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:44.669000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3266363263386562316231363431666337393864366633323533616535 Jan 14 01:26:44.669000 audit: BPF 
prog-id=246 op=UNLOAD Jan 14 01:26:44.669000 audit[5609]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=5598 pid=5609 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:44.669000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3266363263386562316231363431666337393864366633323533616535 Jan 14 01:26:44.669000 audit: BPF prog-id=245 op=UNLOAD Jan 14 01:26:44.669000 audit[5609]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=5598 pid=5609 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:44.669000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3266363263386562316231363431666337393864366633323533616535 Jan 14 01:26:44.669000 audit: BPF prog-id=247 op=LOAD Jan 14 01:26:44.669000 audit[5609]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001286e8 a2=98 a3=0 items=0 ppid=5598 pid=5609 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:44.669000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3266363263386562316231363431666337393864366633323533616535 Jan 14 01:26:44.714236 containerd[1962]: time="2026-01-14T01:26:44.714174904Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zrh2z,Uid:74b84cdc-323d-4b42-b95a-ceec7dfaa40f,Namespace:calico-system,Attempt:0,} returns sandbox id \"2f62c8eb1b1641fc798d6f3253ae56ae67dedacff12291163899ffb8bac70179\"" Jan 14 01:26:44.716348 containerd[1962]: time="2026-01-14T01:26:44.716314771Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 14 01:26:44.978139 containerd[1962]: time="2026-01-14T01:26:44.978017902Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:26:44.979457 containerd[1962]: time="2026-01-14T01:26:44.979406328Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 14 01:26:44.979749 containerd[1962]: time="2026-01-14T01:26:44.979511162Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 14 01:26:44.979898 kubelet[3571]: E0114 01:26:44.979683 3571 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 01:26:44.979898 kubelet[3571]: E0114 01:26:44.979727 3571 
kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 01:26:44.979898 kubelet[3571]: E0114 01:26:44.979849 3571 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qs6c7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-zrh2z_calico-system(74b84cdc-323d-4b42-b95a-ceec7dfaa40f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 14 01:26:44.982850 containerd[1962]: time="2026-01-14T01:26:44.982818635Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 14 01:26:45.130663 containerd[1962]: time="2026-01-14T01:26:45.130610127Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-2rswm,Uid:257e1b40-fca0-492c-87be-45f394e92bdc,Namespace:kube-system,Attempt:0,}" Jan 14 01:26:45.303342 systemd-networkd[1548]: cali732a0906c37: Link UP Jan 14 01:26:45.304876 systemd-networkd[1548]: cali732a0906c37: Gained carrier Jan 14 01:26:45.325709 containerd[1962]: 2026-01-14 01:26:45.193 [INFO][5637] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--18--46-k8s-coredns--674b8bbfcf--2rswm-eth0 coredns-674b8bbfcf- kube-system 257e1b40-fca0-492c-87be-45f394e92bdc 920 0 2026-01-14 01:25:12 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s 
projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-18-46 coredns-674b8bbfcf-2rswm eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali732a0906c37 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="d000cffa6422e918f4fa1be235d531e383e37e8d24ae176bbdae5549c5200e86" Namespace="kube-system" Pod="coredns-674b8bbfcf-2rswm" WorkloadEndpoint="ip--172--31--18--46-k8s-coredns--674b8bbfcf--2rswm-" Jan 14 01:26:45.325709 containerd[1962]: 2026-01-14 01:26:45.194 [INFO][5637] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d000cffa6422e918f4fa1be235d531e383e37e8d24ae176bbdae5549c5200e86" Namespace="kube-system" Pod="coredns-674b8bbfcf-2rswm" WorkloadEndpoint="ip--172--31--18--46-k8s-coredns--674b8bbfcf--2rswm-eth0" Jan 14 01:26:45.325709 containerd[1962]: 2026-01-14 01:26:45.235 [INFO][5650] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d000cffa6422e918f4fa1be235d531e383e37e8d24ae176bbdae5549c5200e86" HandleID="k8s-pod-network.d000cffa6422e918f4fa1be235d531e383e37e8d24ae176bbdae5549c5200e86" Workload="ip--172--31--18--46-k8s-coredns--674b8bbfcf--2rswm-eth0" Jan 14 01:26:45.325709 containerd[1962]: 2026-01-14 01:26:45.235 [INFO][5650] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="d000cffa6422e918f4fa1be235d531e383e37e8d24ae176bbdae5549c5200e86" HandleID="k8s-pod-network.d000cffa6422e918f4fa1be235d531e383e37e8d24ae176bbdae5549c5200e86" Workload="ip--172--31--18--46-k8s-coredns--674b8bbfcf--2rswm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f160), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-18-46", "pod":"coredns-674b8bbfcf-2rswm", "timestamp":"2026-01-14 01:26:45.235478225 +0000 UTC"}, Hostname:"ip-172-31-18-46", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 01:26:45.325709 containerd[1962]: 2026-01-14 01:26:45.235 [INFO][5650] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 01:26:45.325709 containerd[1962]: 2026-01-14 01:26:45.236 [INFO][5650] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 14 01:26:45.325709 containerd[1962]: 2026-01-14 01:26:45.236 [INFO][5650] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-18-46' Jan 14 01:26:45.325709 containerd[1962]: 2026-01-14 01:26:45.246 [INFO][5650] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d000cffa6422e918f4fa1be235d531e383e37e8d24ae176bbdae5549c5200e86" host="ip-172-31-18-46" Jan 14 01:26:45.325709 containerd[1962]: 2026-01-14 01:26:45.253 [INFO][5650] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-18-46" Jan 14 01:26:45.325709 containerd[1962]: 2026-01-14 01:26:45.262 [INFO][5650] ipam/ipam.go 511: Trying affinity for 192.168.5.192/26 host="ip-172-31-18-46" Jan 14 01:26:45.325709 containerd[1962]: 2026-01-14 01:26:45.267 [INFO][5650] ipam/ipam.go 158: Attempting to load block cidr=192.168.5.192/26 host="ip-172-31-18-46" Jan 14 01:26:45.325709 containerd[1962]: 2026-01-14 01:26:45.270 [INFO][5650] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.5.192/26 host="ip-172-31-18-46" Jan 14 01:26:45.325709 containerd[1962]: 2026-01-14 01:26:45.270 [INFO][5650] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.5.192/26 handle="k8s-pod-network.d000cffa6422e918f4fa1be235d531e383e37e8d24ae176bbdae5549c5200e86" host="ip-172-31-18-46" Jan 14 01:26:45.325709 containerd[1962]: 2026-01-14 01:26:45.275 [INFO][5650] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.d000cffa6422e918f4fa1be235d531e383e37e8d24ae176bbdae5549c5200e86 Jan 14 01:26:45.325709 containerd[1962]: 2026-01-14 01:26:45.282 [INFO][5650] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.5.192/26 handle="k8s-pod-network.d000cffa6422e918f4fa1be235d531e383e37e8d24ae176bbdae5549c5200e86" host="ip-172-31-18-46" Jan 14 01:26:45.325709 containerd[1962]: 2026-01-14 01:26:45.293 [INFO][5650] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.5.199/26] block=192.168.5.192/26 handle="k8s-pod-network.d000cffa6422e918f4fa1be235d531e383e37e8d24ae176bbdae5549c5200e86" host="ip-172-31-18-46" Jan 14 01:26:45.325709 containerd[1962]: 2026-01-14 01:26:45.293 [INFO][5650] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.5.199/26] handle="k8s-pod-network.d000cffa6422e918f4fa1be235d531e383e37e8d24ae176bbdae5549c5200e86" host="ip-172-31-18-46" Jan 14 01:26:45.325709 containerd[1962]: 2026-01-14 01:26:45.293 [INFO][5650] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
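The IPAM entries above walk through a claim-from-affine-block sequence: acquire the host-wide lock, look up this node's affinity for 192.168.5.192/26, load the block, claim the next free address, write the block back, and release the lock. The sketch below is a deliberately simplified toy model of that flow (it is not Calico's ipam.go); an in-process mutex stands in for the datastore lock and a map stands in for the block document.

```go
// Toy model (illustrative only, not Calico's implementation) of claiming one
// IPv4 address from a host-affine /26 block, mirroring the
// lock -> load block -> claim -> write -> unlock sequence in the log above.
package main

import (
	"fmt"
	"net/netip"
	"sync"
)

type block struct {
	cidr netip.Prefix
	used map[netip.Addr]string // addr -> handle; stands in for the block document
}

var ipamLock sync.Mutex // stands in for the "host-wide IPAM lock"

func autoAssign(b *block, handle string) (netip.Addr, error) {
	ipamLock.Lock()
	defer ipamLock.Unlock() // "Released host-wide IPAM lock."

	// Walk the block and claim the first free address.
	for a := b.cidr.Addr(); b.cidr.Contains(a); a = a.Next() {
		if _, taken := b.used[a]; !taken {
			b.used[a] = handle // "Writing block in order to claim IPs"
			return a, nil
		}
	}
	return netip.Addr{}, fmt.Errorf("block %s exhausted", b.cidr)
}

func main() {
	b := &block{
		cidr: netip.MustParsePrefix("192.168.5.192/26"),
		used: map[netip.Addr]string{},
	}
	// Pretend .192 through .198 were already handed out on this node.
	for a := b.cidr.Addr(); a.Less(netip.MustParseAddr("192.168.5.199")); a = a.Next() {
		b.used[a] = "existing"
	}
	ip, err := autoAssign(b, "k8s-pod-network.d000cffa")
	if err != nil {
		panic(err)
	}
	fmt.Println("assigned:", ip) // assigned: 192.168.5.199
}
```

In the real log the same pattern repeats for each sandbox created on this node, which is why the subsequent pods receive the next addresses in the block.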
Jan 14 01:26:45.325709 containerd[1962]: 2026-01-14 01:26:45.293 [INFO][5650] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.5.199/26] IPv6=[] ContainerID="d000cffa6422e918f4fa1be235d531e383e37e8d24ae176bbdae5549c5200e86" HandleID="k8s-pod-network.d000cffa6422e918f4fa1be235d531e383e37e8d24ae176bbdae5549c5200e86" Workload="ip--172--31--18--46-k8s-coredns--674b8bbfcf--2rswm-eth0" Jan 14 01:26:45.328143 containerd[1962]: 2026-01-14 01:26:45.298 [INFO][5637] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d000cffa6422e918f4fa1be235d531e383e37e8d24ae176bbdae5549c5200e86" Namespace="kube-system" Pod="coredns-674b8bbfcf-2rswm" WorkloadEndpoint="ip--172--31--18--46-k8s-coredns--674b8bbfcf--2rswm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--46-k8s-coredns--674b8bbfcf--2rswm-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"257e1b40-fca0-492c-87be-45f394e92bdc", ResourceVersion:"920", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 25, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-46", ContainerID:"", Pod:"coredns-674b8bbfcf-2rswm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.5.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali732a0906c37", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:26:45.328143 containerd[1962]: 2026-01-14 01:26:45.298 [INFO][5637] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.5.199/32] ContainerID="d000cffa6422e918f4fa1be235d531e383e37e8d24ae176bbdae5549c5200e86" Namespace="kube-system" Pod="coredns-674b8bbfcf-2rswm" WorkloadEndpoint="ip--172--31--18--46-k8s-coredns--674b8bbfcf--2rswm-eth0" Jan 14 01:26:45.328143 containerd[1962]: 2026-01-14 01:26:45.298 [INFO][5637] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali732a0906c37 ContainerID="d000cffa6422e918f4fa1be235d531e383e37e8d24ae176bbdae5549c5200e86" Namespace="kube-system" Pod="coredns-674b8bbfcf-2rswm" WorkloadEndpoint="ip--172--31--18--46-k8s-coredns--674b8bbfcf--2rswm-eth0" Jan 14 01:26:45.328143 containerd[1962]: 2026-01-14 01:26:45.304 [INFO][5637] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d000cffa6422e918f4fa1be235d531e383e37e8d24ae176bbdae5549c5200e86" Namespace="kube-system" Pod="coredns-674b8bbfcf-2rswm" 
WorkloadEndpoint="ip--172--31--18--46-k8s-coredns--674b8bbfcf--2rswm-eth0" Jan 14 01:26:45.328143 containerd[1962]: 2026-01-14 01:26:45.305 [INFO][5637] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d000cffa6422e918f4fa1be235d531e383e37e8d24ae176bbdae5549c5200e86" Namespace="kube-system" Pod="coredns-674b8bbfcf-2rswm" WorkloadEndpoint="ip--172--31--18--46-k8s-coredns--674b8bbfcf--2rswm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--46-k8s-coredns--674b8bbfcf--2rswm-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"257e1b40-fca0-492c-87be-45f394e92bdc", ResourceVersion:"920", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 25, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-46", ContainerID:"d000cffa6422e918f4fa1be235d531e383e37e8d24ae176bbdae5549c5200e86", Pod:"coredns-674b8bbfcf-2rswm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.5.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali732a0906c37", MAC:"62:77:db:ee:65:90", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:26:45.328143 containerd[1962]: 2026-01-14 01:26:45.321 [INFO][5637] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d000cffa6422e918f4fa1be235d531e383e37e8d24ae176bbdae5549c5200e86" Namespace="kube-system" Pod="coredns-674b8bbfcf-2rswm" WorkloadEndpoint="ip--172--31--18--46-k8s-coredns--674b8bbfcf--2rswm-eth0" Jan 14 01:26:45.352530 containerd[1962]: time="2026-01-14T01:26:45.352359930Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:26:45.355164 containerd[1962]: time="2026-01-14T01:26:45.355106376Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 14 01:26:45.355706 containerd[1962]: time="2026-01-14T01:26:45.355587753Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 14 01:26:45.360731 kubelet[3571]: E0114 01:26:45.356406 3571 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 01:26:45.360731 kubelet[3571]: E0114 01:26:45.356462 3571 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 01:26:45.360731 kubelet[3571]: E0114 01:26:45.358146 3571 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qs6c7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-zrh2z_calico-system(74b84cdc-323d-4b42-b95a-ceec7dfaa40f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 14 01:26:45.361130 kubelet[3571]: E0114 01:26:45.360943 3571 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed 
to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-zrh2z" podUID="74b84cdc-323d-4b42-b95a-ceec7dfaa40f" Jan 14 01:26:45.369755 containerd[1962]: time="2026-01-14T01:26:45.369662299Z" level=info msg="connecting to shim d000cffa6422e918f4fa1be235d531e383e37e8d24ae176bbdae5549c5200e86" address="unix:///run/containerd/s/11ecc71cb17ffd7ade1591f075cb5f0670b277c648bd21a06ea37277f4f40da2" namespace=k8s.io protocol=ttrpc version=3 Jan 14 01:26:45.375000 audit[5678]: NETFILTER_CFG table=filter:141 family=2 entries=44 op=nft_register_chain pid=5678 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 01:26:45.375000 audit[5678]: SYSCALL arch=c000003e syscall=46 success=yes exit=21516 a0=3 a1=7ffd9b973cb0 a2=0 a3=7ffd9b973c9c items=0 ppid=4950 pid=5678 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:45.375000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 01:26:45.414791 systemd[1]: Started cri-containerd-d000cffa6422e918f4fa1be235d531e383e37e8d24ae176bbdae5549c5200e86.scope - libcontainer container d000cffa6422e918f4fa1be235d531e383e37e8d24ae176bbdae5549c5200e86. Jan 14 01:26:45.426000 audit: BPF prog-id=248 op=LOAD Jan 14 01:26:45.426000 audit: BPF prog-id=249 op=LOAD Jan 14 01:26:45.426000 audit[5685]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00019c238 a2=98 a3=0 items=0 ppid=5673 pid=5685 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:45.426000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6430303063666661363432326539313866346661316265323335643533 Jan 14 01:26:45.426000 audit: BPF prog-id=249 op=UNLOAD Jan 14 01:26:45.426000 audit[5685]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=5673 pid=5685 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:45.426000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6430303063666661363432326539313866346661316265323335643533 Jan 14 01:26:45.428000 audit: BPF prog-id=250 op=LOAD Jan 14 01:26:45.428000 audit[5685]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00019c488 a2=98 a3=0 items=0 ppid=5673 pid=5685 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:45.428000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6430303063666661363432326539313866346661316265323335643533 Jan 14 
01:26:45.428000 audit: BPF prog-id=251 op=LOAD Jan 14 01:26:45.428000 audit[5685]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00019c218 a2=98 a3=0 items=0 ppid=5673 pid=5685 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:45.428000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6430303063666661363432326539313866346661316265323335643533 Jan 14 01:26:45.428000 audit: BPF prog-id=251 op=UNLOAD Jan 14 01:26:45.428000 audit[5685]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=5673 pid=5685 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:45.428000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6430303063666661363432326539313866346661316265323335643533 Jan 14 01:26:45.428000 audit: BPF prog-id=250 op=UNLOAD Jan 14 01:26:45.428000 audit[5685]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=5673 pid=5685 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:45.428000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6430303063666661363432326539313866346661316265323335643533 Jan 14 01:26:45.428000 audit: BPF prog-id=252 op=LOAD Jan 14 01:26:45.428000 audit[5685]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00019c6e8 a2=98 a3=0 items=0 ppid=5673 pid=5685 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:45.428000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6430303063666661363432326539313866346661316265323335643533 Jan 14 01:26:45.483303 containerd[1962]: time="2026-01-14T01:26:45.483167123Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-2rswm,Uid:257e1b40-fca0-492c-87be-45f394e92bdc,Namespace:kube-system,Attempt:0,} returns sandbox id \"d000cffa6422e918f4fa1be235d531e383e37e8d24ae176bbdae5549c5200e86\"" Jan 14 01:26:45.490475 containerd[1962]: time="2026-01-14T01:26:45.490440963Z" level=info msg="CreateContainer within sandbox \"d000cffa6422e918f4fa1be235d531e383e37e8d24ae176bbdae5549c5200e86\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 14 01:26:45.519798 containerd[1962]: time="2026-01-14T01:26:45.519728254Z" level=info msg="Container 59eda742ebc90b39b2e26570fe24c2dd8daf3c4f5463eaa81bd68ddd8f8f414b: CDI devices from CRI Config.CDIDevices: []" Jan 14 01:26:45.528592 containerd[1962]: 
time="2026-01-14T01:26:45.528462598Z" level=info msg="CreateContainer within sandbox \"d000cffa6422e918f4fa1be235d531e383e37e8d24ae176bbdae5549c5200e86\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"59eda742ebc90b39b2e26570fe24c2dd8daf3c4f5463eaa81bd68ddd8f8f414b\"" Jan 14 01:26:45.530743 containerd[1962]: time="2026-01-14T01:26:45.529895775Z" level=info msg="StartContainer for \"59eda742ebc90b39b2e26570fe24c2dd8daf3c4f5463eaa81bd68ddd8f8f414b\"" Jan 14 01:26:45.530940 containerd[1962]: time="2026-01-14T01:26:45.530913716Z" level=info msg="connecting to shim 59eda742ebc90b39b2e26570fe24c2dd8daf3c4f5463eaa81bd68ddd8f8f414b" address="unix:///run/containerd/s/11ecc71cb17ffd7ade1591f075cb5f0670b277c648bd21a06ea37277f4f40da2" protocol=ttrpc version=3 Jan 14 01:26:45.555816 systemd[1]: Started cri-containerd-59eda742ebc90b39b2e26570fe24c2dd8daf3c4f5463eaa81bd68ddd8f8f414b.scope - libcontainer container 59eda742ebc90b39b2e26570fe24c2dd8daf3c4f5463eaa81bd68ddd8f8f414b. Jan 14 01:26:45.571000 audit: BPF prog-id=253 op=LOAD Jan 14 01:26:45.572000 audit: BPF prog-id=254 op=LOAD Jan 14 01:26:45.572000 audit[5713]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=5673 pid=5713 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:45.572000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3539656461373432656263393062333962326532363537306665323463 Jan 14 01:26:45.572000 audit: BPF prog-id=254 op=UNLOAD Jan 14 01:26:45.572000 audit[5713]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5673 pid=5713 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:45.572000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3539656461373432656263393062333962326532363537306665323463 Jan 14 01:26:45.572000 audit: BPF prog-id=255 op=LOAD Jan 14 01:26:45.572000 audit[5713]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=5673 pid=5713 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:45.572000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3539656461373432656263393062333962326532363537306665323463 Jan 14 01:26:45.572000 audit: BPF prog-id=256 op=LOAD Jan 14 01:26:45.572000 audit[5713]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=5673 pid=5713 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:45.572000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3539656461373432656263393062333962326532363537306665323463 Jan 14 01:26:45.572000 audit: BPF prog-id=256 op=UNLOAD Jan 14 01:26:45.572000 audit[5713]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5673 pid=5713 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:45.572000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3539656461373432656263393062333962326532363537306665323463 Jan 14 01:26:45.572000 audit: BPF prog-id=255 op=UNLOAD Jan 14 01:26:45.572000 audit[5713]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5673 pid=5713 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:45.572000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3539656461373432656263393062333962326532363537306665323463 Jan 14 01:26:45.572000 audit: BPF prog-id=257 op=LOAD Jan 14 01:26:45.572000 audit[5713]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=5673 pid=5713 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:45.572000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3539656461373432656263393062333962326532363537306665323463 Jan 14 01:26:45.602880 containerd[1962]: time="2026-01-14T01:26:45.602845298Z" level=info msg="StartContainer for \"59eda742ebc90b39b2e26570fe24c2dd8daf3c4f5463eaa81bd68ddd8f8f414b\" returns successfully" Jan 14 01:26:45.698192 kubelet[3571]: E0114 01:26:45.698123 3571 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-zrh2z" podUID="74b84cdc-323d-4b42-b95a-ceec7dfaa40f" Jan 14 01:26:45.723758 kubelet[3571]: I0114 01:26:45.723702 3571 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-2rswm" podStartSLOduration=93.723683297 podStartE2EDuration="1m33.723683297s" podCreationTimestamp="2026-01-14 01:25:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-14 01:26:45.708072645 +0000 UTC m=+97.796360799" watchObservedRunningTime="2026-01-14 01:26:45.723683297 +0000 UTC m=+97.811971450" Jan 14 01:26:45.732000 audit[5747]: NETFILTER_CFG table=filter:142 family=2 entries=20 op=nft_register_rule pid=5747 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:26:45.732000 audit[5747]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffc3bbfdeb0 a2=0 a3=7ffc3bbfde9c items=0 ppid=3724 pid=5747 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:45.732000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:26:45.743000 audit[5747]: NETFILTER_CFG table=nat:143 family=2 entries=14 op=nft_register_rule pid=5747 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:26:45.743000 audit[5747]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffc3bbfdeb0 a2=0 a3=0 items=0 ppid=3724 pid=5747 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:45.743000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:26:45.775000 audit[5751]: NETFILTER_CFG table=filter:144 family=2 entries=17 op=nft_register_rule pid=5751 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:26:45.775000 audit[5751]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffe0ed4bfe0 a2=0 a3=7ffe0ed4bfcc items=0 ppid=3724 pid=5751 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:45.775000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:26:45.781000 audit[5751]: NETFILTER_CFG table=nat:145 family=2 entries=35 op=nft_register_chain pid=5751 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:26:45.781000 audit[5751]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffe0ed4bfe0 a2=0 a3=7ffe0ed4bfcc items=0 ppid=3724 pid=5751 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:45.781000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:26:45.980788 systemd-networkd[1548]: calied2ea6665c6: Gained IPv6LL Jan 14 01:26:46.129994 containerd[1962]: time="2026-01-14T01:26:46.129764444Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-574d5c8798-9hhk4,Uid:525cee0b-846b-43ba-9b3e-e192ccc373a0,Namespace:calico-apiserver,Attempt:0,}" Jan 14 01:26:46.130621 containerd[1962]: time="2026-01-14T01:26:46.130567548Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-5mdck,Uid:993f578d-b707-42bd-b6e9-14c5aa23a03f,Namespace:calico-system,Attempt:0,}" Jan 14 01:26:46.325068 systemd-networkd[1548]: caliacfaaa6ba56: Link UP Jan 14 01:26:46.326455 systemd-networkd[1548]: caliacfaaa6ba56: Gained carrier Jan 14 01:26:46.348298 containerd[1962]: 2026-01-14 01:26:46.199 [INFO][5756] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--18--46-k8s-goldmane--666569f655--5mdck-eth0 goldmane-666569f655- calico-system 993f578d-b707-42bd-b6e9-14c5aa23a03f 934 0 2026-01-14 01:26:02 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ip-172-31-18-46 goldmane-666569f655-5mdck eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] caliacfaaa6ba56 [] [] }} ContainerID="9e8b4d8b7d712046243463f6c87a2087e8105e469f68e05947187354398c87ca" Namespace="calico-system" Pod="goldmane-666569f655-5mdck" WorkloadEndpoint="ip--172--31--18--46-k8s-goldmane--666569f655--5mdck-" Jan 14 01:26:46.348298 containerd[1962]: 2026-01-14 01:26:46.200 [INFO][5756] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9e8b4d8b7d712046243463f6c87a2087e8105e469f68e05947187354398c87ca" Namespace="calico-system" Pod="goldmane-666569f655-5mdck" WorkloadEndpoint="ip--172--31--18--46-k8s-goldmane--666569f655--5mdck-eth0" Jan 14 01:26:46.348298 containerd[1962]: 2026-01-14 01:26:46.252 [INFO][5779] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9e8b4d8b7d712046243463f6c87a2087e8105e469f68e05947187354398c87ca" HandleID="k8s-pod-network.9e8b4d8b7d712046243463f6c87a2087e8105e469f68e05947187354398c87ca" Workload="ip--172--31--18--46-k8s-goldmane--666569f655--5mdck-eth0" Jan 14 01:26:46.348298 containerd[1962]: 2026-01-14 01:26:46.258 [INFO][5779] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="9e8b4d8b7d712046243463f6c87a2087e8105e469f68e05947187354398c87ca" HandleID="k8s-pod-network.9e8b4d8b7d712046243463f6c87a2087e8105e469f68e05947187354398c87ca" Workload="ip--172--31--18--46-k8s-goldmane--666569f655--5mdck-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5850), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-18-46", "pod":"goldmane-666569f655-5mdck", "timestamp":"2026-01-14 01:26:46.252462135 +0000 UTC"}, Hostname:"ip-172-31-18-46", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 01:26:46.348298 containerd[1962]: 2026-01-14 01:26:46.258 [INFO][5779] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 01:26:46.348298 containerd[1962]: 2026-01-14 01:26:46.258 [INFO][5779] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
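The ErrImagePull and ImagePullBackOff entries earlier in this window all stem from one condition: the CRI PullImage call returns a gRPC NotFound status because the ghcr.io/flatcar/calico/*:v3.30.4 tags do not resolve. The snippet below is a minimal, hypothetical illustration of classifying such a failure by status code with the standard grpc status package; it is not kubelet code.

```go
// Minimal sketch (illustrative, not kubelet's actual code path) of classifying
// the CRI PullImage failure seen above via its gRPC status code.
package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

func main() {
	// Reconstruct the error reported by containerd in the log.
	err := status.Error(codes.NotFound,
		`failed to pull and unpack image "ghcr.io/flatcar/calico/csi:v3.30.4": `+
			`failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found`)

	switch status.Code(err) {
	case codes.NotFound:
		// The log shows kubelet surfacing this as ErrImagePull and then
		// retrying with back-off, hence the later ImagePullBackOff entries.
		fmt.Println("tag does not exist in the registry:", err)
	default:
		fmt.Println("other pull failure:", err)
	}
}
```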
Jan 14 01:26:46.348298 containerd[1962]: 2026-01-14 01:26:46.258 [INFO][5779] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-18-46' Jan 14 01:26:46.348298 containerd[1962]: 2026-01-14 01:26:46.274 [INFO][5779] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9e8b4d8b7d712046243463f6c87a2087e8105e469f68e05947187354398c87ca" host="ip-172-31-18-46" Jan 14 01:26:46.348298 containerd[1962]: 2026-01-14 01:26:46.282 [INFO][5779] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-18-46" Jan 14 01:26:46.348298 containerd[1962]: 2026-01-14 01:26:46.289 [INFO][5779] ipam/ipam.go 511: Trying affinity for 192.168.5.192/26 host="ip-172-31-18-46" Jan 14 01:26:46.348298 containerd[1962]: 2026-01-14 01:26:46.291 [INFO][5779] ipam/ipam.go 158: Attempting to load block cidr=192.168.5.192/26 host="ip-172-31-18-46" Jan 14 01:26:46.348298 containerd[1962]: 2026-01-14 01:26:46.295 [INFO][5779] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.5.192/26 host="ip-172-31-18-46" Jan 14 01:26:46.348298 containerd[1962]: 2026-01-14 01:26:46.295 [INFO][5779] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.5.192/26 handle="k8s-pod-network.9e8b4d8b7d712046243463f6c87a2087e8105e469f68e05947187354398c87ca" host="ip-172-31-18-46" Jan 14 01:26:46.348298 containerd[1962]: 2026-01-14 01:26:46.298 [INFO][5779] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.9e8b4d8b7d712046243463f6c87a2087e8105e469f68e05947187354398c87ca Jan 14 01:26:46.348298 containerd[1962]: 2026-01-14 01:26:46.304 [INFO][5779] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.5.192/26 handle="k8s-pod-network.9e8b4d8b7d712046243463f6c87a2087e8105e469f68e05947187354398c87ca" host="ip-172-31-18-46" Jan 14 01:26:46.348298 containerd[1962]: 2026-01-14 01:26:46.312 [INFO][5779] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.5.200/26] block=192.168.5.192/26 handle="k8s-pod-network.9e8b4d8b7d712046243463f6c87a2087e8105e469f68e05947187354398c87ca" host="ip-172-31-18-46" Jan 14 01:26:46.348298 containerd[1962]: 2026-01-14 01:26:46.313 [INFO][5779] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.5.200/26] handle="k8s-pod-network.9e8b4d8b7d712046243463f6c87a2087e8105e469f68e05947187354398c87ca" host="ip-172-31-18-46" Jan 14 01:26:46.348298 containerd[1962]: 2026-01-14 01:26:46.313 [INFO][5779] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 14 01:26:46.348298 containerd[1962]: 2026-01-14 01:26:46.313 [INFO][5779] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.5.200/26] IPv6=[] ContainerID="9e8b4d8b7d712046243463f6c87a2087e8105e469f68e05947187354398c87ca" HandleID="k8s-pod-network.9e8b4d8b7d712046243463f6c87a2087e8105e469f68e05947187354398c87ca" Workload="ip--172--31--18--46-k8s-goldmane--666569f655--5mdck-eth0" Jan 14 01:26:46.350609 containerd[1962]: 2026-01-14 01:26:46.318 [INFO][5756] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9e8b4d8b7d712046243463f6c87a2087e8105e469f68e05947187354398c87ca" Namespace="calico-system" Pod="goldmane-666569f655-5mdck" WorkloadEndpoint="ip--172--31--18--46-k8s-goldmane--666569f655--5mdck-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--46-k8s-goldmane--666569f655--5mdck-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"993f578d-b707-42bd-b6e9-14c5aa23a03f", ResourceVersion:"934", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 26, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-46", ContainerID:"", Pod:"goldmane-666569f655-5mdck", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.5.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"caliacfaaa6ba56", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:26:46.350609 containerd[1962]: 2026-01-14 01:26:46.318 [INFO][5756] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.5.200/32] ContainerID="9e8b4d8b7d712046243463f6c87a2087e8105e469f68e05947187354398c87ca" Namespace="calico-system" Pod="goldmane-666569f655-5mdck" WorkloadEndpoint="ip--172--31--18--46-k8s-goldmane--666569f655--5mdck-eth0" Jan 14 01:26:46.350609 containerd[1962]: 2026-01-14 01:26:46.318 [INFO][5756] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliacfaaa6ba56 ContainerID="9e8b4d8b7d712046243463f6c87a2087e8105e469f68e05947187354398c87ca" Namespace="calico-system" Pod="goldmane-666569f655-5mdck" WorkloadEndpoint="ip--172--31--18--46-k8s-goldmane--666569f655--5mdck-eth0" Jan 14 01:26:46.350609 containerd[1962]: 2026-01-14 01:26:46.327 [INFO][5756] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9e8b4d8b7d712046243463f6c87a2087e8105e469f68e05947187354398c87ca" Namespace="calico-system" Pod="goldmane-666569f655-5mdck" WorkloadEndpoint="ip--172--31--18--46-k8s-goldmane--666569f655--5mdck-eth0" Jan 14 01:26:46.350609 containerd[1962]: 2026-01-14 01:26:46.328 [INFO][5756] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9e8b4d8b7d712046243463f6c87a2087e8105e469f68e05947187354398c87ca" Namespace="calico-system" Pod="goldmane-666569f655-5mdck" 
WorkloadEndpoint="ip--172--31--18--46-k8s-goldmane--666569f655--5mdck-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--46-k8s-goldmane--666569f655--5mdck-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"993f578d-b707-42bd-b6e9-14c5aa23a03f", ResourceVersion:"934", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 26, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-46", ContainerID:"9e8b4d8b7d712046243463f6c87a2087e8105e469f68e05947187354398c87ca", Pod:"goldmane-666569f655-5mdck", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.5.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"caliacfaaa6ba56", MAC:"7e:e5:1b:8e:83:8f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:26:46.350609 containerd[1962]: 2026-01-14 01:26:46.344 [INFO][5756] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9e8b4d8b7d712046243463f6c87a2087e8105e469f68e05947187354398c87ca" Namespace="calico-system" Pod="goldmane-666569f655-5mdck" WorkloadEndpoint="ip--172--31--18--46-k8s-goldmane--666569f655--5mdck-eth0" Jan 14 01:26:46.370000 audit[5801]: NETFILTER_CFG table=filter:146 family=2 entries=60 op=nft_register_chain pid=5801 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 01:26:46.370000 audit[5801]: SYSCALL arch=c000003e syscall=46 success=yes exit=29916 a0=3 a1=7ffc390c0460 a2=0 a3=7ffc390c044c items=0 ppid=4950 pid=5801 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:46.370000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 01:26:46.388128 containerd[1962]: time="2026-01-14T01:26:46.388068395Z" level=info msg="connecting to shim 9e8b4d8b7d712046243463f6c87a2087e8105e469f68e05947187354398c87ca" address="unix:///run/containerd/s/5990d2ceab41093e7b1f1259ce1adb63399811be29acd42db33dd7316c288d94" namespace=k8s.io protocol=ttrpc version=3 Jan 14 01:26:46.449105 systemd[1]: Started cri-containerd-9e8b4d8b7d712046243463f6c87a2087e8105e469f68e05947187354398c87ca.scope - libcontainer container 9e8b4d8b7d712046243463f6c87a2087e8105e469f68e05947187354398c87ca. 
Jan 14 01:26:46.453129 systemd-networkd[1548]: cali64259f45118: Link UP Jan 14 01:26:46.455598 systemd-networkd[1548]: cali64259f45118: Gained carrier Jan 14 01:26:46.490461 containerd[1962]: 2026-01-14 01:26:46.200 [INFO][5753] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--18--46-k8s-calico--apiserver--574d5c8798--9hhk4-eth0 calico-apiserver-574d5c8798- calico-apiserver 525cee0b-846b-43ba-9b3e-e192ccc373a0 929 0 2026-01-14 01:25:55 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:574d5c8798 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-18-46 calico-apiserver-574d5c8798-9hhk4 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali64259f45118 [] [] }} ContainerID="05d628b7578ed61fb9e6a4abe0c1dc161e56f7f63062e0430beb190c258f4c7d" Namespace="calico-apiserver" Pod="calico-apiserver-574d5c8798-9hhk4" WorkloadEndpoint="ip--172--31--18--46-k8s-calico--apiserver--574d5c8798--9hhk4-" Jan 14 01:26:46.490461 containerd[1962]: 2026-01-14 01:26:46.200 [INFO][5753] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="05d628b7578ed61fb9e6a4abe0c1dc161e56f7f63062e0430beb190c258f4c7d" Namespace="calico-apiserver" Pod="calico-apiserver-574d5c8798-9hhk4" WorkloadEndpoint="ip--172--31--18--46-k8s-calico--apiserver--574d5c8798--9hhk4-eth0" Jan 14 01:26:46.490461 containerd[1962]: 2026-01-14 01:26:46.282 [INFO][5784] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="05d628b7578ed61fb9e6a4abe0c1dc161e56f7f63062e0430beb190c258f4c7d" HandleID="k8s-pod-network.05d628b7578ed61fb9e6a4abe0c1dc161e56f7f63062e0430beb190c258f4c7d" Workload="ip--172--31--18--46-k8s-calico--apiserver--574d5c8798--9hhk4-eth0" Jan 14 01:26:46.490461 containerd[1962]: 2026-01-14 01:26:46.282 [INFO][5784] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="05d628b7578ed61fb9e6a4abe0c1dc161e56f7f63062e0430beb190c258f4c7d" HandleID="k8s-pod-network.05d628b7578ed61fb9e6a4abe0c1dc161e56f7f63062e0430beb190c258f4c7d" Workload="ip--172--31--18--46-k8s-calico--apiserver--574d5c8798--9hhk4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f680), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-18-46", "pod":"calico-apiserver-574d5c8798-9hhk4", "timestamp":"2026-01-14 01:26:46.282538855 +0000 UTC"}, Hostname:"ip-172-31-18-46", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 01:26:46.490461 containerd[1962]: 2026-01-14 01:26:46.283 [INFO][5784] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 01:26:46.490461 containerd[1962]: 2026-01-14 01:26:46.313 [INFO][5784] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 14 01:26:46.490461 containerd[1962]: 2026-01-14 01:26:46.313 [INFO][5784] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-18-46' Jan 14 01:26:46.490461 containerd[1962]: 2026-01-14 01:26:46.376 [INFO][5784] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.05d628b7578ed61fb9e6a4abe0c1dc161e56f7f63062e0430beb190c258f4c7d" host="ip-172-31-18-46" Jan 14 01:26:46.490461 containerd[1962]: 2026-01-14 01:26:46.385 [INFO][5784] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-18-46" Jan 14 01:26:46.490461 containerd[1962]: 2026-01-14 01:26:46.394 [INFO][5784] ipam/ipam.go 511: Trying affinity for 192.168.5.192/26 host="ip-172-31-18-46" Jan 14 01:26:46.490461 containerd[1962]: 2026-01-14 01:26:46.400 [INFO][5784] ipam/ipam.go 158: Attempting to load block cidr=192.168.5.192/26 host="ip-172-31-18-46" Jan 14 01:26:46.490461 containerd[1962]: 2026-01-14 01:26:46.405 [INFO][5784] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.5.192/26 host="ip-172-31-18-46" Jan 14 01:26:46.490461 containerd[1962]: 2026-01-14 01:26:46.406 [INFO][5784] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.5.192/26 handle="k8s-pod-network.05d628b7578ed61fb9e6a4abe0c1dc161e56f7f63062e0430beb190c258f4c7d" host="ip-172-31-18-46" Jan 14 01:26:46.490461 containerd[1962]: 2026-01-14 01:26:46.412 [INFO][5784] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.05d628b7578ed61fb9e6a4abe0c1dc161e56f7f63062e0430beb190c258f4c7d Jan 14 01:26:46.490461 containerd[1962]: 2026-01-14 01:26:46.423 [INFO][5784] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.5.192/26 handle="k8s-pod-network.05d628b7578ed61fb9e6a4abe0c1dc161e56f7f63062e0430beb190c258f4c7d" host="ip-172-31-18-46" Jan 14 01:26:46.490461 containerd[1962]: 2026-01-14 01:26:46.438 [INFO][5784] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.5.201/26] block=192.168.5.192/26 handle="k8s-pod-network.05d628b7578ed61fb9e6a4abe0c1dc161e56f7f63062e0430beb190c258f4c7d" host="ip-172-31-18-46" Jan 14 01:26:46.490461 containerd[1962]: 2026-01-14 01:26:46.438 [INFO][5784] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.5.201/26] handle="k8s-pod-network.05d628b7578ed61fb9e6a4abe0c1dc161e56f7f63062e0430beb190c258f4c7d" host="ip-172-31-18-46" Jan 14 01:26:46.490461 containerd[1962]: 2026-01-14 01:26:46.439 [INFO][5784] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
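All three pods sandboxed in this window (coredns-674b8bbfcf-2rswm, goldmane-666569f655-5mdck, calico-apiserver-574d5c8798-9hhk4) receive consecutive addresses from the same host-affine block, 192.168.5.192/26. The short check below, an illustrative sketch rather than host tooling, confirms those assignments fall inside the block and prints its size.

```go
// Quick sanity check (illustrative only) that the addresses assigned above
// fall inside the node's affine IPAM block.
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	block := netip.MustParsePrefix("192.168.5.192/26")

	// Addresses handed out in this log window: coredns, goldmane, calico-apiserver.
	for _, s := range []string{"192.168.5.199", "192.168.5.200", "192.168.5.201"} {
		addr := netip.MustParseAddr(s)
		fmt.Printf("%s in %s: %v\n", addr, block, block.Contains(addr))
	}

	// A /26 covers 2^(32-26) = 64 addresses: 192.168.5.192 through 192.168.5.255.
	fmt.Println("block size:", 1<<(32-block.Bits()))
}
```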
Jan 14 01:26:46.490461 containerd[1962]: 2026-01-14 01:26:46.439 [INFO][5784] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.5.201/26] IPv6=[] ContainerID="05d628b7578ed61fb9e6a4abe0c1dc161e56f7f63062e0430beb190c258f4c7d" HandleID="k8s-pod-network.05d628b7578ed61fb9e6a4abe0c1dc161e56f7f63062e0430beb190c258f4c7d" Workload="ip--172--31--18--46-k8s-calico--apiserver--574d5c8798--9hhk4-eth0" Jan 14 01:26:46.491937 containerd[1962]: 2026-01-14 01:26:46.445 [INFO][5753] cni-plugin/k8s.go 418: Populated endpoint ContainerID="05d628b7578ed61fb9e6a4abe0c1dc161e56f7f63062e0430beb190c258f4c7d" Namespace="calico-apiserver" Pod="calico-apiserver-574d5c8798-9hhk4" WorkloadEndpoint="ip--172--31--18--46-k8s-calico--apiserver--574d5c8798--9hhk4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--46-k8s-calico--apiserver--574d5c8798--9hhk4-eth0", GenerateName:"calico-apiserver-574d5c8798-", Namespace:"calico-apiserver", SelfLink:"", UID:"525cee0b-846b-43ba-9b3e-e192ccc373a0", ResourceVersion:"929", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 25, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"574d5c8798", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-46", ContainerID:"", Pod:"calico-apiserver-574d5c8798-9hhk4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.5.201/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali64259f45118", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:26:46.491937 containerd[1962]: 2026-01-14 01:26:46.445 [INFO][5753] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.5.201/32] ContainerID="05d628b7578ed61fb9e6a4abe0c1dc161e56f7f63062e0430beb190c258f4c7d" Namespace="calico-apiserver" Pod="calico-apiserver-574d5c8798-9hhk4" WorkloadEndpoint="ip--172--31--18--46-k8s-calico--apiserver--574d5c8798--9hhk4-eth0" Jan 14 01:26:46.491937 containerd[1962]: 2026-01-14 01:26:46.445 [INFO][5753] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali64259f45118 ContainerID="05d628b7578ed61fb9e6a4abe0c1dc161e56f7f63062e0430beb190c258f4c7d" Namespace="calico-apiserver" Pod="calico-apiserver-574d5c8798-9hhk4" WorkloadEndpoint="ip--172--31--18--46-k8s-calico--apiserver--574d5c8798--9hhk4-eth0" Jan 14 01:26:46.491937 containerd[1962]: 2026-01-14 01:26:46.456 [INFO][5753] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="05d628b7578ed61fb9e6a4abe0c1dc161e56f7f63062e0430beb190c258f4c7d" Namespace="calico-apiserver" Pod="calico-apiserver-574d5c8798-9hhk4" WorkloadEndpoint="ip--172--31--18--46-k8s-calico--apiserver--574d5c8798--9hhk4-eth0" Jan 14 01:26:46.491937 containerd[1962]: 2026-01-14 01:26:46.458 [INFO][5753] cni-plugin/k8s.go 446: Added Mac, interface name, and 
active container ID to endpoint ContainerID="05d628b7578ed61fb9e6a4abe0c1dc161e56f7f63062e0430beb190c258f4c7d" Namespace="calico-apiserver" Pod="calico-apiserver-574d5c8798-9hhk4" WorkloadEndpoint="ip--172--31--18--46-k8s-calico--apiserver--574d5c8798--9hhk4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--46-k8s-calico--apiserver--574d5c8798--9hhk4-eth0", GenerateName:"calico-apiserver-574d5c8798-", Namespace:"calico-apiserver", SelfLink:"", UID:"525cee0b-846b-43ba-9b3e-e192ccc373a0", ResourceVersion:"929", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 25, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"574d5c8798", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-46", ContainerID:"05d628b7578ed61fb9e6a4abe0c1dc161e56f7f63062e0430beb190c258f4c7d", Pod:"calico-apiserver-574d5c8798-9hhk4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.5.201/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali64259f45118", MAC:"9a:a8:68:6b:10:9e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:26:46.491937 containerd[1962]: 2026-01-14 01:26:46.481 [INFO][5753] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="05d628b7578ed61fb9e6a4abe0c1dc161e56f7f63062e0430beb190c258f4c7d" Namespace="calico-apiserver" Pod="calico-apiserver-574d5c8798-9hhk4" WorkloadEndpoint="ip--172--31--18--46-k8s-calico--apiserver--574d5c8798--9hhk4-eth0" Jan 14 01:26:46.507662 kernel: kauditd_printk_skb: 119 callbacks suppressed Jan 14 01:26:46.507774 kernel: audit: type=1334 audit(1768354006.502:764): prog-id=258 op=LOAD Jan 14 01:26:46.502000 audit: BPF prog-id=258 op=LOAD Jan 14 01:26:46.504000 audit: BPF prog-id=259 op=LOAD Jan 14 01:26:46.504000 audit[5821]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0238 a2=98 a3=0 items=0 ppid=5810 pid=5821 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:46.515814 kernel: audit: type=1334 audit(1768354006.504:765): prog-id=259 op=LOAD Jan 14 01:26:46.515868 kernel: audit: type=1300 audit(1768354006.504:765): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0238 a2=98 a3=0 items=0 ppid=5810 pid=5821 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:46.515905 kernel: audit: type=1327 audit(1768354006.504:765): 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3965386234643862376437313230343632343334363366366338376132 Jan 14 01:26:46.504000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3965386234643862376437313230343632343334363366366338376132 Jan 14 01:26:46.504000 audit: BPF prog-id=259 op=UNLOAD Jan 14 01:26:46.504000 audit[5821]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5810 pid=5821 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:46.536789 kernel: audit: type=1334 audit(1768354006.504:766): prog-id=259 op=UNLOAD Jan 14 01:26:46.536874 kernel: audit: type=1300 audit(1768354006.504:766): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5810 pid=5821 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:46.504000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3965386234643862376437313230343632343334363366366338376132 Jan 14 01:26:46.542503 kernel: audit: type=1327 audit(1768354006.504:766): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3965386234643862376437313230343632343334363366366338376132 Jan 14 01:26:46.505000 audit: BPF prog-id=260 op=LOAD Jan 14 01:26:46.552725 containerd[1962]: time="2026-01-14T01:26:46.552661125Z" level=info msg="connecting to shim 05d628b7578ed61fb9e6a4abe0c1dc161e56f7f63062e0430beb190c258f4c7d" address="unix:///run/containerd/s/d9d4a9c3d3d1b50abcaaf77ee7dfec8b58897aca8ca31314779e048fe2ebd0d1" namespace=k8s.io protocol=ttrpc version=3 Jan 14 01:26:46.553608 kernel: audit: type=1334 audit(1768354006.505:767): prog-id=260 op=LOAD Jan 14 01:26:46.505000 audit[5821]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0488 a2=98 a3=0 items=0 ppid=5810 pid=5821 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:46.560593 kernel: audit: type=1300 audit(1768354006.505:767): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0488 a2=98 a3=0 items=0 ppid=5810 pid=5821 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:46.505000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3965386234643862376437313230343632343334363366366338376132 Jan 14 01:26:46.570583 kernel: audit: type=1327 
audit(1768354006.505:767): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3965386234643862376437313230343632343334363366366338376132 Jan 14 01:26:46.505000 audit: BPF prog-id=261 op=LOAD Jan 14 01:26:46.505000 audit[5821]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001b0218 a2=98 a3=0 items=0 ppid=5810 pid=5821 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:46.505000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3965386234643862376437313230343632343334363366366338376132 Jan 14 01:26:46.505000 audit: BPF prog-id=261 op=UNLOAD Jan 14 01:26:46.505000 audit[5821]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5810 pid=5821 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:46.505000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3965386234643862376437313230343632343334363366366338376132 Jan 14 01:26:46.505000 audit: BPF prog-id=260 op=UNLOAD Jan 14 01:26:46.505000 audit[5821]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5810 pid=5821 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:46.505000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3965386234643862376437313230343632343334363366366338376132 Jan 14 01:26:46.505000 audit: BPF prog-id=262 op=LOAD Jan 14 01:26:46.505000 audit[5821]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b06e8 a2=98 a3=0 items=0 ppid=5810 pid=5821 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:46.505000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3965386234643862376437313230343632343334363366366338376132 Jan 14 01:26:46.601000 audit[5870]: NETFILTER_CFG table=filter:147 family=2 entries=57 op=nft_register_chain pid=5870 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 01:26:46.601000 audit[5870]: SYSCALL arch=c000003e syscall=46 success=yes exit=27812 a0=3 a1=7fff8afe1260 a2=0 a3=7fff8afe124c items=0 ppid=4950 pid=5870 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 
01:26:46.601000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 01:26:46.630802 systemd[1]: Started cri-containerd-05d628b7578ed61fb9e6a4abe0c1dc161e56f7f63062e0430beb190c258f4c7d.scope - libcontainer container 05d628b7578ed61fb9e6a4abe0c1dc161e56f7f63062e0430beb190c258f4c7d. Jan 14 01:26:46.635302 containerd[1962]: time="2026-01-14T01:26:46.635167163Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-5mdck,Uid:993f578d-b707-42bd-b6e9-14c5aa23a03f,Namespace:calico-system,Attempt:0,} returns sandbox id \"9e8b4d8b7d712046243463f6c87a2087e8105e469f68e05947187354398c87ca\"" Jan 14 01:26:46.638150 containerd[1962]: time="2026-01-14T01:26:46.638109455Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 14 01:26:46.646000 audit: BPF prog-id=263 op=LOAD Jan 14 01:26:46.647000 audit: BPF prog-id=264 op=LOAD Jan 14 01:26:46.647000 audit[5868]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=5857 pid=5868 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:46.647000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3035643632386237353738656436316662396536613461626530633164 Jan 14 01:26:46.647000 audit: BPF prog-id=264 op=UNLOAD Jan 14 01:26:46.647000 audit[5868]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5857 pid=5868 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:46.647000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3035643632386237353738656436316662396536613461626530633164 Jan 14 01:26:46.647000 audit: BPF prog-id=265 op=LOAD Jan 14 01:26:46.647000 audit[5868]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=5857 pid=5868 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:46.647000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3035643632386237353738656436316662396536613461626530633164 Jan 14 01:26:46.648000 audit: BPF prog-id=266 op=LOAD Jan 14 01:26:46.648000 audit[5868]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=5857 pid=5868 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:46.648000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3035643632386237353738656436316662396536613461626530633164 Jan 14 01:26:46.648000 audit: BPF prog-id=266 op=UNLOAD Jan 14 01:26:46.648000 audit[5868]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5857 pid=5868 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:46.648000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3035643632386237353738656436316662396536613461626530633164 Jan 14 01:26:46.648000 audit: BPF prog-id=265 op=UNLOAD Jan 14 01:26:46.648000 audit[5868]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5857 pid=5868 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:46.648000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3035643632386237353738656436316662396536613461626530633164 Jan 14 01:26:46.648000 audit: BPF prog-id=267 op=LOAD Jan 14 01:26:46.648000 audit[5868]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=5857 pid=5868 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:46.648000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3035643632386237353738656436316662396536613461626530633164 Jan 14 01:26:46.695825 containerd[1962]: time="2026-01-14T01:26:46.695726988Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-574d5c8798-9hhk4,Uid:525cee0b-846b-43ba-9b3e-e192ccc373a0,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"05d628b7578ed61fb9e6a4abe0c1dc161e56f7f63062e0430beb190c258f4c7d\"" Jan 14 01:26:46.702271 kubelet[3571]: E0114 01:26:46.702205 3571 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-zrh2z" 
podUID="74b84cdc-323d-4b42-b95a-ceec7dfaa40f" Jan 14 01:26:46.893992 containerd[1962]: time="2026-01-14T01:26:46.893847132Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:26:46.895204 containerd[1962]: time="2026-01-14T01:26:46.895095274Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 14 01:26:46.895204 containerd[1962]: time="2026-01-14T01:26:46.895117110Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 14 01:26:46.895428 kubelet[3571]: E0114 01:26:46.895354 3571 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 01:26:46.895428 kubelet[3571]: E0114 01:26:46.895402 3571 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 01:26:46.901093 kubelet[3571]: E0114 01:26:46.900996 3571 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zblph,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-5mdck_calico-system(993f578d-b707-42bd-b6e9-14c5aa23a03f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 14 01:26:46.902696 kubelet[3571]: E0114 01:26:46.902622 3571 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-5mdck" podUID="993f578d-b707-42bd-b6e9-14c5aa23a03f" Jan 14 01:26:46.907600 containerd[1962]: time="2026-01-14T01:26:46.907538636Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 01:26:47.184023 containerd[1962]: time="2026-01-14T01:26:47.183886294Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:26:47.185190 containerd[1962]: time="2026-01-14T01:26:47.185131300Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 01:26:47.185414 containerd[1962]: time="2026-01-14T01:26:47.185214402Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 01:26:47.185497 kubelet[3571]: E0114 01:26:47.185460 3571 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:26:47.186024 kubelet[3571]: E0114 01:26:47.185517 3571 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:26:47.186024 kubelet[3571]: E0114 01:26:47.185756 3571 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jdvfp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-574d5c8798-9hhk4_calico-apiserver(525cee0b-846b-43ba-9b3e-e192ccc373a0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 01:26:47.187021 kubelet[3571]: E0114 01:26:47.186941 3571 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-574d5c8798-9hhk4" podUID="525cee0b-846b-43ba-9b3e-e192ccc373a0" Jan 14 01:26:47.260930 systemd-networkd[1548]: cali732a0906c37: Gained IPv6LL Jan 14 01:26:47.703458 kubelet[3571]: E0114 01:26:47.703393 3571 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-5mdck" podUID="993f578d-b707-42bd-b6e9-14c5aa23a03f" Jan 14 01:26:47.704116 kubelet[3571]: E0114 01:26:47.703978 3571 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-574d5c8798-9hhk4" podUID="525cee0b-846b-43ba-9b3e-e192ccc373a0" Jan 14 01:26:47.738000 audit[5911]: NETFILTER_CFG table=filter:148 family=2 entries=14 op=nft_register_rule pid=5911 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:26:47.738000 audit[5911]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fffdf3e0370 a2=0 a3=7fffdf3e035c items=0 ppid=3724 pid=5911 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:47.738000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:26:47.742000 audit[5911]: NETFILTER_CFG table=nat:149 family=2 entries=20 op=nft_register_rule pid=5911 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:26:47.742000 audit[5911]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7fffdf3e0370 a2=0 a3=7fffdf3e035c items=0 ppid=3724 pid=5911 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:47.742000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:26:47.760000 audit[5913]: NETFILTER_CFG table=filter:150 family=2 entries=14 op=nft_register_rule pid=5913 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:26:47.760000 audit[5913]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffdb3b281c0 a2=0 a3=7ffdb3b281ac items=0 ppid=3724 pid=5913 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:47.760000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:26:47.764000 audit[5913]: NETFILTER_CFG table=nat:151 family=2 entries=20 op=nft_register_rule pid=5913 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:26:47.764000 audit[5913]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffdb3b281c0 a2=0 a3=7ffdb3b281ac items=0 ppid=3724 pid=5913 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:47.764000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:26:48.093907 systemd-networkd[1548]: cali64259f45118: Gained IPv6LL Jan 14 01:26:48.349043 systemd-networkd[1548]: caliacfaaa6ba56: Gained IPv6LL Jan 14 01:26:48.704652 kubelet[3571]: E0114 01:26:48.704366 3571 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": 
ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-574d5c8798-9hhk4" podUID="525cee0b-846b-43ba-9b3e-e192ccc373a0" Jan 14 01:26:49.059000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-172.31.18.46:22-4.153.228.146:44990 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:26:49.059427 systemd[1]: Started sshd@8-172.31.18.46:22-4.153.228.146:44990.service - OpenSSH per-connection server daemon (4.153.228.146:44990). Jan 14 01:26:49.134705 containerd[1962]: time="2026-01-14T01:26:49.131059377Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5799fb7c8b-6l2xc,Uid:89c07bdd-9dad-4c41-8dfe-3de894f6f743,Namespace:calico-system,Attempt:0,}" Jan 14 01:26:49.347338 systemd-networkd[1548]: caliccfdd2c59c7: Link UP Jan 14 01:26:49.348764 systemd-networkd[1548]: caliccfdd2c59c7: Gained carrier Jan 14 01:26:49.370667 containerd[1962]: 2026-01-14 01:26:49.212 [INFO][5922] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--18--46-k8s-calico--kube--controllers--5799fb7c8b--6l2xc-eth0 calico-kube-controllers-5799fb7c8b- calico-system 89c07bdd-9dad-4c41-8dfe-3de894f6f743 939 0 2026-01-14 01:26:05 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5799fb7c8b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-18-46 calico-kube-controllers-5799fb7c8b-6l2xc eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] caliccfdd2c59c7 [] [] }} ContainerID="f8ddeea0d560ec0f380402157d2c195fc36fc0d127294361182c5ca5de03afeb" Namespace="calico-system" Pod="calico-kube-controllers-5799fb7c8b-6l2xc" WorkloadEndpoint="ip--172--31--18--46-k8s-calico--kube--controllers--5799fb7c8b--6l2xc-" Jan 14 01:26:49.370667 containerd[1962]: 2026-01-14 01:26:49.213 [INFO][5922] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f8ddeea0d560ec0f380402157d2c195fc36fc0d127294361182c5ca5de03afeb" Namespace="calico-system" Pod="calico-kube-controllers-5799fb7c8b-6l2xc" WorkloadEndpoint="ip--172--31--18--46-k8s-calico--kube--controllers--5799fb7c8b--6l2xc-eth0" Jan 14 01:26:49.370667 containerd[1962]: 2026-01-14 01:26:49.278 [INFO][5934] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f8ddeea0d560ec0f380402157d2c195fc36fc0d127294361182c5ca5de03afeb" HandleID="k8s-pod-network.f8ddeea0d560ec0f380402157d2c195fc36fc0d127294361182c5ca5de03afeb" Workload="ip--172--31--18--46-k8s-calico--kube--controllers--5799fb7c8b--6l2xc-eth0" Jan 14 01:26:49.370667 containerd[1962]: 2026-01-14 01:26:49.278 [INFO][5934] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="f8ddeea0d560ec0f380402157d2c195fc36fc0d127294361182c5ca5de03afeb" HandleID="k8s-pod-network.f8ddeea0d560ec0f380402157d2c195fc36fc0d127294361182c5ca5de03afeb" Workload="ip--172--31--18--46-k8s-calico--kube--controllers--5799fb7c8b--6l2xc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ccfe0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-18-46", 
"pod":"calico-kube-controllers-5799fb7c8b-6l2xc", "timestamp":"2026-01-14 01:26:49.27805409 +0000 UTC"}, Hostname:"ip-172-31-18-46", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 01:26:49.370667 containerd[1962]: 2026-01-14 01:26:49.278 [INFO][5934] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 01:26:49.370667 containerd[1962]: 2026-01-14 01:26:49.278 [INFO][5934] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 14 01:26:49.370667 containerd[1962]: 2026-01-14 01:26:49.278 [INFO][5934] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-18-46' Jan 14 01:26:49.370667 containerd[1962]: 2026-01-14 01:26:49.287 [INFO][5934] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f8ddeea0d560ec0f380402157d2c195fc36fc0d127294361182c5ca5de03afeb" host="ip-172-31-18-46" Jan 14 01:26:49.370667 containerd[1962]: 2026-01-14 01:26:49.302 [INFO][5934] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-18-46" Jan 14 01:26:49.370667 containerd[1962]: 2026-01-14 01:26:49.310 [INFO][5934] ipam/ipam.go 511: Trying affinity for 192.168.5.192/26 host="ip-172-31-18-46" Jan 14 01:26:49.370667 containerd[1962]: 2026-01-14 01:26:49.314 [INFO][5934] ipam/ipam.go 158: Attempting to load block cidr=192.168.5.192/26 host="ip-172-31-18-46" Jan 14 01:26:49.370667 containerd[1962]: 2026-01-14 01:26:49.316 [INFO][5934] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.5.192/26 host="ip-172-31-18-46" Jan 14 01:26:49.370667 containerd[1962]: 2026-01-14 01:26:49.317 [INFO][5934] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.5.192/26 handle="k8s-pod-network.f8ddeea0d560ec0f380402157d2c195fc36fc0d127294361182c5ca5de03afeb" host="ip-172-31-18-46" Jan 14 01:26:49.370667 containerd[1962]: 2026-01-14 01:26:49.319 [INFO][5934] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.f8ddeea0d560ec0f380402157d2c195fc36fc0d127294361182c5ca5de03afeb Jan 14 01:26:49.370667 containerd[1962]: 2026-01-14 01:26:49.327 [INFO][5934] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.5.192/26 handle="k8s-pod-network.f8ddeea0d560ec0f380402157d2c195fc36fc0d127294361182c5ca5de03afeb" host="ip-172-31-18-46" Jan 14 01:26:49.370667 containerd[1962]: 2026-01-14 01:26:49.339 [INFO][5934] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.5.202/26] block=192.168.5.192/26 handle="k8s-pod-network.f8ddeea0d560ec0f380402157d2c195fc36fc0d127294361182c5ca5de03afeb" host="ip-172-31-18-46" Jan 14 01:26:49.370667 containerd[1962]: 2026-01-14 01:26:49.339 [INFO][5934] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.5.202/26] handle="k8s-pod-network.f8ddeea0d560ec0f380402157d2c195fc36fc0d127294361182c5ca5de03afeb" host="ip-172-31-18-46" Jan 14 01:26:49.370667 containerd[1962]: 2026-01-14 01:26:49.339 [INFO][5934] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 14 01:26:49.370667 containerd[1962]: 2026-01-14 01:26:49.339 [INFO][5934] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.5.202/26] IPv6=[] ContainerID="f8ddeea0d560ec0f380402157d2c195fc36fc0d127294361182c5ca5de03afeb" HandleID="k8s-pod-network.f8ddeea0d560ec0f380402157d2c195fc36fc0d127294361182c5ca5de03afeb" Workload="ip--172--31--18--46-k8s-calico--kube--controllers--5799fb7c8b--6l2xc-eth0" Jan 14 01:26:49.372761 containerd[1962]: 2026-01-14 01:26:49.343 [INFO][5922] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f8ddeea0d560ec0f380402157d2c195fc36fc0d127294361182c5ca5de03afeb" Namespace="calico-system" Pod="calico-kube-controllers-5799fb7c8b-6l2xc" WorkloadEndpoint="ip--172--31--18--46-k8s-calico--kube--controllers--5799fb7c8b--6l2xc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--46-k8s-calico--kube--controllers--5799fb7c8b--6l2xc-eth0", GenerateName:"calico-kube-controllers-5799fb7c8b-", Namespace:"calico-system", SelfLink:"", UID:"89c07bdd-9dad-4c41-8dfe-3de894f6f743", ResourceVersion:"939", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 26, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5799fb7c8b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-46", ContainerID:"", Pod:"calico-kube-controllers-5799fb7c8b-6l2xc", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.5.202/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliccfdd2c59c7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:26:49.372761 containerd[1962]: 2026-01-14 01:26:49.344 [INFO][5922] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.5.202/32] ContainerID="f8ddeea0d560ec0f380402157d2c195fc36fc0d127294361182c5ca5de03afeb" Namespace="calico-system" Pod="calico-kube-controllers-5799fb7c8b-6l2xc" WorkloadEndpoint="ip--172--31--18--46-k8s-calico--kube--controllers--5799fb7c8b--6l2xc-eth0" Jan 14 01:26:49.372761 containerd[1962]: 2026-01-14 01:26:49.344 [INFO][5922] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliccfdd2c59c7 ContainerID="f8ddeea0d560ec0f380402157d2c195fc36fc0d127294361182c5ca5de03afeb" Namespace="calico-system" Pod="calico-kube-controllers-5799fb7c8b-6l2xc" WorkloadEndpoint="ip--172--31--18--46-k8s-calico--kube--controllers--5799fb7c8b--6l2xc-eth0" Jan 14 01:26:49.372761 containerd[1962]: 2026-01-14 01:26:49.348 [INFO][5922] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f8ddeea0d560ec0f380402157d2c195fc36fc0d127294361182c5ca5de03afeb" Namespace="calico-system" Pod="calico-kube-controllers-5799fb7c8b-6l2xc" WorkloadEndpoint="ip--172--31--18--46-k8s-calico--kube--controllers--5799fb7c8b--6l2xc-eth0" Jan 14 01:26:49.372761 containerd[1962]: 2026-01-14 
01:26:49.349 [INFO][5922] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f8ddeea0d560ec0f380402157d2c195fc36fc0d127294361182c5ca5de03afeb" Namespace="calico-system" Pod="calico-kube-controllers-5799fb7c8b-6l2xc" WorkloadEndpoint="ip--172--31--18--46-k8s-calico--kube--controllers--5799fb7c8b--6l2xc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--46-k8s-calico--kube--controllers--5799fb7c8b--6l2xc-eth0", GenerateName:"calico-kube-controllers-5799fb7c8b-", Namespace:"calico-system", SelfLink:"", UID:"89c07bdd-9dad-4c41-8dfe-3de894f6f743", ResourceVersion:"939", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 26, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5799fb7c8b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-46", ContainerID:"f8ddeea0d560ec0f380402157d2c195fc36fc0d127294361182c5ca5de03afeb", Pod:"calico-kube-controllers-5799fb7c8b-6l2xc", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.5.202/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliccfdd2c59c7", MAC:"62:aa:ee:49:08:ab", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:26:49.372761 containerd[1962]: 2026-01-14 01:26:49.367 [INFO][5922] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f8ddeea0d560ec0f380402157d2c195fc36fc0d127294361182c5ca5de03afeb" Namespace="calico-system" Pod="calico-kube-controllers-5799fb7c8b-6l2xc" WorkloadEndpoint="ip--172--31--18--46-k8s-calico--kube--controllers--5799fb7c8b--6l2xc-eth0" Jan 14 01:26:49.415833 containerd[1962]: time="2026-01-14T01:26:49.415698557Z" level=info msg="connecting to shim f8ddeea0d560ec0f380402157d2c195fc36fc0d127294361182c5ca5de03afeb" address="unix:///run/containerd/s/0de0c8d152d49806971386e9fbdb6f6a02121975ecd3a9ab3738aead100a31c3" namespace=k8s.io protocol=ttrpc version=3 Jan 14 01:26:49.418000 audit[5949]: NETFILTER_CFG table=filter:152 family=2 entries=66 op=nft_register_chain pid=5949 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 01:26:49.418000 audit[5949]: SYSCALL arch=c000003e syscall=46 success=yes exit=29540 a0=3 a1=7fffc77c7080 a2=0 a3=7fffc77c706c items=0 ppid=4950 pid=5949 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:49.418000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 01:26:49.462094 systemd[1]: Started cri-containerd-f8ddeea0d560ec0f380402157d2c195fc36fc0d127294361182c5ca5de03afeb.scope - libcontainer 
container f8ddeea0d560ec0f380402157d2c195fc36fc0d127294361182c5ca5de03afeb. Jan 14 01:26:49.479000 audit: BPF prog-id=268 op=LOAD Jan 14 01:26:49.480000 audit: BPF prog-id=269 op=LOAD Jan 14 01:26:49.480000 audit[5970]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=5958 pid=5970 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:49.480000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6638646465656130643536306563306633383034303231353764326331 Jan 14 01:26:49.480000 audit: BPF prog-id=269 op=UNLOAD Jan 14 01:26:49.480000 audit[5970]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5958 pid=5970 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:49.480000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6638646465656130643536306563306633383034303231353764326331 Jan 14 01:26:49.480000 audit: BPF prog-id=270 op=LOAD Jan 14 01:26:49.480000 audit[5970]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=5958 pid=5970 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:49.480000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6638646465656130643536306563306633383034303231353764326331 Jan 14 01:26:49.480000 audit: BPF prog-id=271 op=LOAD Jan 14 01:26:49.480000 audit[5970]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=5958 pid=5970 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:49.480000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6638646465656130643536306563306633383034303231353764326331 Jan 14 01:26:49.481000 audit: BPF prog-id=271 op=UNLOAD Jan 14 01:26:49.481000 audit[5970]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5958 pid=5970 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:49.481000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6638646465656130643536306563306633383034303231353764326331 Jan 14 01:26:49.481000 
audit: BPF prog-id=270 op=UNLOAD Jan 14 01:26:49.481000 audit[5970]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5958 pid=5970 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:49.481000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6638646465656130643536306563306633383034303231353764326331 Jan 14 01:26:49.481000 audit: BPF prog-id=272 op=LOAD Jan 14 01:26:49.481000 audit[5970]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=5958 pid=5970 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:49.481000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6638646465656130643536306563306633383034303231353764326331 Jan 14 01:26:49.525643 containerd[1962]: time="2026-01-14T01:26:49.525540277Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5799fb7c8b-6l2xc,Uid:89c07bdd-9dad-4c41-8dfe-3de894f6f743,Namespace:calico-system,Attempt:0,} returns sandbox id \"f8ddeea0d560ec0f380402157d2c195fc36fc0d127294361182c5ca5de03afeb\"" Jan 14 01:26:49.528254 containerd[1962]: time="2026-01-14T01:26:49.528223129Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 14 01:26:49.563000 audit[5916]: USER_ACCT pid=5916 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:26:49.565740 sshd[5916]: Accepted publickey for core from 4.153.228.146 port 44990 ssh2: RSA SHA256:ES3aJcA+M+pl5u1hk2HWRqxW4DXd1pPYtNeRk1B3mrI Jan 14 01:26:49.565000 audit[5916]: CRED_ACQ pid=5916 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:26:49.566000 audit[5916]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcb1a5ef10 a2=3 a3=0 items=0 ppid=1 pid=5916 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:49.566000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:26:49.572287 sshd-session[5916]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:26:49.582132 systemd-logind[1928]: New session 10 of user core. Jan 14 01:26:49.587815 systemd[1]: Started session-10.scope - Session 10 of User core. 
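The long PROCTITLE values in the audit records throughout this log (the 7275... and 6970... strings) are the audited process's command line, hex-encoded in the audit record because argv can contain arbitrary bytes; the individual arguments are separated by NUL bytes, and long command lines appear truncated. A small Python reading aid (the decode_proctitle helper is ours, not part of any tool in this log):

    def decode_proctitle(hex_str: str) -> str:
        # audit PROCTITLE is hex-encoded; argv entries are NUL-separated.
        return bytes.fromhex(hex_str).replace(b"\x00", b" ").decode("utf-8", "replace")

    # The 6970... entries above decode to iptables restore invocations, e.g.
    #   iptables-restore -w 5 -W 100000 --noflush --counters
    #   iptables-nft-restore --noflush --verbose --wait 10 --wait-interval 50000
    # and the 7275... entries decode to runc invocations such as
    #   runc --root /run/containerd/runc/k8s.io --log /run/containerd/io.containerd.runtime.v2.task/k8s.io/<id>...
    # (the container ID is cut short where the audit record caps the recorded length).
    print(decode_proctitle(
        "69707461626C65732D726573746F7265002D770035002D5700313030303030"
        "002D2D6E6F666C757368002D2D636F756E74657273"
    ))

Running the snippet prints "iptables-restore -w 5 -W 100000 --noflush --counters", matching the iptables-restor SYSCALL records nearby.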
Jan 14 01:26:49.589000 audit[5916]: USER_START pid=5916 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:26:49.591000 audit[5996]: CRED_ACQ pid=5996 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:26:49.785601 containerd[1962]: time="2026-01-14T01:26:49.784809278Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:26:49.786393 containerd[1962]: time="2026-01-14T01:26:49.786168553Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 14 01:26:49.786393 containerd[1962]: time="2026-01-14T01:26:49.786355773Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 14 01:26:49.786637 kubelet[3571]: E0114 01:26:49.786599 3571 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 01:26:49.786940 kubelet[3571]: E0114 01:26:49.786651 3571 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 01:26:49.788822 kubelet[3571]: E0114 01:26:49.786819 3571 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bnbz2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-5799fb7c8b-6l2xc_calico-system(89c07bdd-9dad-4c41-8dfe-3de894f6f743): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 14 01:26:49.789568 kubelet[3571]: E0114 01:26:49.789494 3571 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5799fb7c8b-6l2xc" podUID="89c07bdd-9dad-4c41-8dfe-3de894f6f743" Jan 14 01:26:49.990255 sshd[5996]: Connection closed by 4.153.228.146 port 44990 Jan 14 01:26:49.992686 sshd-session[5916]: pam_unix(sshd:session): session closed for user core Jan 14 01:26:49.993000 audit[5916]: USER_END 
pid=5916 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:26:49.994000 audit[5916]: CRED_DISP pid=5916 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:26:49.998911 systemd[1]: sshd@8-172.31.18.46:22-4.153.228.146:44990.service: Deactivated successfully. Jan 14 01:26:49.998000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-172.31.18.46:22-4.153.228.146:44990 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:26:50.002917 systemd[1]: session-10.scope: Deactivated successfully. Jan 14 01:26:50.005269 systemd-logind[1928]: Session 10 logged out. Waiting for processes to exit. Jan 14 01:26:50.009699 systemd-logind[1928]: Removed session 10. Jan 14 01:26:50.130846 containerd[1962]: time="2026-01-14T01:26:50.130617914Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 01:26:50.390874 containerd[1962]: time="2026-01-14T01:26:50.390762091Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:26:50.392523 containerd[1962]: time="2026-01-14T01:26:50.392460504Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 01:26:50.392802 containerd[1962]: time="2026-01-14T01:26:50.392574447Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 01:26:50.392916 kubelet[3571]: E0114 01:26:50.392738 3571 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:26:50.392916 kubelet[3571]: E0114 01:26:50.392790 3571 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:26:50.393033 kubelet[3571]: E0114 01:26:50.392970 3571 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zcshs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-574d5c8798-6q7jw_calico-apiserver(4263d4be-fc9d-471e-8df9-42f06716a4f0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 01:26:50.394777 kubelet[3571]: E0114 01:26:50.394730 3571 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-574d5c8798-6q7jw" podUID="4263d4be-fc9d-471e-8df9-42f06716a4f0" Jan 14 01:26:50.710610 kubelet[3571]: E0114 01:26:50.709891 3571 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5799fb7c8b-6l2xc" podUID="89c07bdd-9dad-4c41-8dfe-3de894f6f743" Jan 14 01:26:50.846661 systemd-networkd[1548]: caliccfdd2c59c7: Gained IPv6LL Jan 14 01:26:51.716000 audit[6008]: NETFILTER_CFG table=filter:153 family=2 entries=14 op=nft_register_rule pid=6008 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 
01:26:51.718906 kernel: kauditd_printk_skb: 85 callbacks suppressed Jan 14 01:26:51.718975 kernel: audit: type=1325 audit(1768354011.716:803): table=filter:153 family=2 entries=14 op=nft_register_rule pid=6008 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:26:51.716000 audit[6008]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffdb40ce9e0 a2=0 a3=7ffdb40ce9cc items=0 ppid=3724 pid=6008 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:51.723220 kernel: audit: type=1300 audit(1768354011.716:803): arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffdb40ce9e0 a2=0 a3=7ffdb40ce9cc items=0 ppid=3724 pid=6008 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:51.716000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:26:51.730270 kernel: audit: type=1327 audit(1768354011.716:803): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:26:51.732000 audit[6008]: NETFILTER_CFG table=nat:154 family=2 entries=56 op=nft_register_chain pid=6008 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:26:51.732000 audit[6008]: SYSCALL arch=c000003e syscall=46 success=yes exit=19860 a0=3 a1=7ffdb40ce9e0 a2=0 a3=7ffdb40ce9cc items=0 ppid=3724 pid=6008 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:51.743338 kernel: audit: type=1325 audit(1768354011.732:804): table=nat:154 family=2 entries=56 op=nft_register_chain pid=6008 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:26:51.743440 kernel: audit: type=1300 audit(1768354011.732:804): arch=c000003e syscall=46 success=yes exit=19860 a0=3 a1=7ffdb40ce9e0 a2=0 a3=7ffdb40ce9cc items=0 ppid=3724 pid=6008 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:51.732000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:26:51.748575 kernel: audit: type=1327 audit(1768354011.732:804): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:26:53.129791 containerd[1962]: time="2026-01-14T01:26:53.129752133Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 01:26:53.423843 containerd[1962]: time="2026-01-14T01:26:53.423711421Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:26:53.425096 containerd[1962]: time="2026-01-14T01:26:53.425037366Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 01:26:53.425235 containerd[1962]: 
time="2026-01-14T01:26:53.425164644Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 01:26:53.425626 kubelet[3571]: E0114 01:26:53.425584 3571 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:26:53.426052 kubelet[3571]: E0114 01:26:53.425633 3571 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:26:53.426052 kubelet[3571]: E0114 01:26:53.425823 3571 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h4r5x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-65b9745b8-fxksz_calico-apiserver(56394477-d28d-42eb-bee5-a9a20263c11f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 01:26:53.427562 kubelet[3571]: E0114 01:26:53.427513 3571 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = 
NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65b9745b8-fxksz" podUID="56394477-d28d-42eb-bee5-a9a20263c11f" Jan 14 01:26:53.822756 ntpd[1919]: Listen normally on 6 vxlan.calico 192.168.5.192:123 Jan 14 01:26:53.823376 ntpd[1919]: 14 Jan 01:26:53 ntpd[1919]: Listen normally on 6 vxlan.calico 192.168.5.192:123 Jan 14 01:26:53.823376 ntpd[1919]: 14 Jan 01:26:53 ntpd[1919]: Listen normally on 7 vxlan.calico [fe80::6475:81ff:fe37:b152%4]:123 Jan 14 01:26:53.823376 ntpd[1919]: 14 Jan 01:26:53 ntpd[1919]: Listen normally on 8 cali1105652f90a [fe80::ecee:eeff:feee:eeee%7]:123 Jan 14 01:26:53.823376 ntpd[1919]: 14 Jan 01:26:53 ntpd[1919]: Listen normally on 9 cali38bc2ee4ada [fe80::ecee:eeff:feee:eeee%8]:123 Jan 14 01:26:53.823376 ntpd[1919]: 14 Jan 01:26:53 ntpd[1919]: Listen normally on 10 calice29bf40714 [fe80::ecee:eeff:feee:eeee%10]:123 Jan 14 01:26:53.823376 ntpd[1919]: 14 Jan 01:26:53 ntpd[1919]: Listen normally on 11 calie923709a816 [fe80::ecee:eeff:feee:eeee%11]:123 Jan 14 01:26:53.823376 ntpd[1919]: 14 Jan 01:26:53 ntpd[1919]: Listen normally on 12 calied2ea6665c6 [fe80::ecee:eeff:feee:eeee%12]:123 Jan 14 01:26:53.823376 ntpd[1919]: 14 Jan 01:26:53 ntpd[1919]: Listen normally on 13 cali732a0906c37 [fe80::ecee:eeff:feee:eeee%13]:123 Jan 14 01:26:53.823376 ntpd[1919]: 14 Jan 01:26:53 ntpd[1919]: Listen normally on 14 caliacfaaa6ba56 [fe80::ecee:eeff:feee:eeee%14]:123 Jan 14 01:26:53.823376 ntpd[1919]: 14 Jan 01:26:53 ntpd[1919]: Listen normally on 15 cali64259f45118 [fe80::ecee:eeff:feee:eeee%15]:123 Jan 14 01:26:53.823376 ntpd[1919]: 14 Jan 01:26:53 ntpd[1919]: Listen normally on 16 caliccfdd2c59c7 [fe80::ecee:eeff:feee:eeee%16]:123 Jan 14 01:26:53.822812 ntpd[1919]: Listen normally on 7 vxlan.calico [fe80::6475:81ff:fe37:b152%4]:123 Jan 14 01:26:53.822835 ntpd[1919]: Listen normally on 8 cali1105652f90a [fe80::ecee:eeff:feee:eeee%7]:123 Jan 14 01:26:53.822854 ntpd[1919]: Listen normally on 9 cali38bc2ee4ada [fe80::ecee:eeff:feee:eeee%8]:123 Jan 14 01:26:53.822874 ntpd[1919]: Listen normally on 10 calice29bf40714 [fe80::ecee:eeff:feee:eeee%10]:123 Jan 14 01:26:53.822893 ntpd[1919]: Listen normally on 11 calie923709a816 [fe80::ecee:eeff:feee:eeee%11]:123 Jan 14 01:26:53.822916 ntpd[1919]: Listen normally on 12 calied2ea6665c6 [fe80::ecee:eeff:feee:eeee%12]:123 Jan 14 01:26:53.822937 ntpd[1919]: Listen normally on 13 cali732a0906c37 [fe80::ecee:eeff:feee:eeee%13]:123 Jan 14 01:26:53.822958 ntpd[1919]: Listen normally on 14 caliacfaaa6ba56 [fe80::ecee:eeff:feee:eeee%14]:123 Jan 14 01:26:53.822979 ntpd[1919]: Listen normally on 15 cali64259f45118 [fe80::ecee:eeff:feee:eeee%15]:123 Jan 14 01:26:53.822997 ntpd[1919]: Listen normally on 16 caliccfdd2c59c7 [fe80::ecee:eeff:feee:eeee%16]:123 Jan 14 01:26:55.080000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-172.31.18.46:22-4.153.228.146:46366 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:26:55.082158 systemd[1]: Started sshd@9-172.31.18.46:22-4.153.228.146:46366.service - OpenSSH per-connection server daemon (4.153.228.146:46366). 
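Every pull failure in this stretch has the same shape: containerd gets a 404 from ghcr.io, kubelet turns it into ErrImagePull, and the pod worker logs "Error syncing pod" for the affected pod. When triaging a capture like this, tallying which image references fail and which pods they belong to makes the pattern obvious at a glance. A minimal sketch in Python, assuming the journal has first been exported to a plain-text file (the file name below is hypothetical, e.g. from journalctl -o short-precise):

    import re
    from collections import Counter

    LOG_PATH = "node-journal.txt"  # hypothetical export of this journal

    # Capture the image reference out of the repeated
    # "failed to resolve image: <ref>: not found" messages.
    image_re = re.compile(r"failed to resolve image: (\S+?): not found")
    pod_re = re.compile(r'pod="([^"]+)"')

    images, pods = Counter(), Counter()
    with open(LOG_PATH, encoding="utf-8", errors="replace") as fh:
        for line in fh:
            m = image_re.search(line)
            if not m:
                continue
            images[m.group(1)] += 1
            if (p := pod_re.search(line)):
                pods[p.group(1)] += 1

    for ref, n in images.most_common():
        print(f"{n:4d}  {ref}")
    for pod, n in pods.most_common():
        print(f"{n:4d}  {pod}")

Run against this capture, it would show every failing reference sitting under ghcr.io/flatcar/calico/*:v3.30.4, which suggests a wrong registry or tag mapping rather than a connectivity problem (the registry does answer, just with 404).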
Jan 14 01:26:55.087787 kernel: audit: type=1130 audit(1768354015.080:805): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-172.31.18.46:22-4.153.228.146:46366 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:26:55.525000 audit[6015]: USER_ACCT pid=6015 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:26:55.529727 sshd-session[6015]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:26:55.531110 sshd[6015]: Accepted publickey for core from 4.153.228.146 port 46366 ssh2: RSA SHA256:ES3aJcA+M+pl5u1hk2HWRqxW4DXd1pPYtNeRk1B3mrI Jan 14 01:26:55.532584 kernel: audit: type=1101 audit(1768354015.525:806): pid=6015 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:26:55.526000 audit[6015]: CRED_ACQ pid=6015 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:26:55.541634 kernel: audit: type=1103 audit(1768354015.526:807): pid=6015 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:26:55.541716 kernel: audit: type=1006 audit(1768354015.526:808): pid=6015 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1 Jan 14 01:26:55.540319 systemd-logind[1928]: New session 11 of user core. Jan 14 01:26:55.526000 audit[6015]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe421fe500 a2=3 a3=0 items=0 ppid=1 pid=6015 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:55.526000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:26:55.553103 systemd[1]: Started session-11.scope - Session 11 of User core. 
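The kernel PROCTITLE records above encode the audited command line as hex with NUL bytes separating the arguments: the value logged for the sshd-session processes decodes to "sshd-session: core [priv]", and the longer value attached to the pid 6008 netfilter events decodes to "iptables-restore -w 5 -W 100000 --noflush --counters". A small helper makes them readable without the audit userspace tools (where those are installed, ausearch -i would do the same interpretation); a sketch:

    def decode_proctitle(hex_value: str) -> str:
        """Decode an audit PROCTITLE hex payload into a readable command line."""
        raw = bytes.fromhex(hex_value)
        # argv entries are NUL-separated in the raw proctitle buffer
        return " ".join(p.decode("utf-8", errors="replace")
                        for p in raw.split(b"\x00") if p)

    print(decode_proctitle("737368642D73657373696F6E3A20636F7265205B707269765D"))
    # sshd-session: core [priv]
    print(decode_proctitle(
        "69707461626C65732D726573746F7265002D770035002D5700313030303030"
        "002D2D6E6F666C757368002D2D636F756E74657273"))
    # iptables-restore -w 5 -W 100000 --noflush --counters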
Jan 14 01:26:55.555000 audit[6015]: USER_START pid=6015 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:26:55.561000 audit[6019]: CRED_ACQ pid=6019 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:26:55.871494 sshd[6019]: Connection closed by 4.153.228.146 port 46366 Jan 14 01:26:55.872795 sshd-session[6015]: pam_unix(sshd:session): session closed for user core Jan 14 01:26:55.875000 audit[6015]: USER_END pid=6015 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:26:55.876000 audit[6015]: CRED_DISP pid=6015 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:26:55.886791 systemd[1]: sshd@9-172.31.18.46:22-4.153.228.146:46366.service: Deactivated successfully. Jan 14 01:26:55.887078 systemd-logind[1928]: Session 11 logged out. Waiting for processes to exit. Jan 14 01:26:55.885000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-172.31.18.46:22-4.153.228.146:46366 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:26:55.889201 systemd[1]: session-11.scope: Deactivated successfully. Jan 14 01:26:55.891604 systemd-logind[1928]: Removed session 11. Jan 14 01:26:55.957000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-172.31.18.46:22-4.153.228.146:46370 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:26:55.958563 systemd[1]: Started sshd@10-172.31.18.46:22-4.153.228.146:46370.service - OpenSSH per-connection server daemon (4.153.228.146:46370). 
Jan 14 01:26:56.132058 containerd[1962]: time="2026-01-14T01:26:56.131844372Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 14 01:26:56.411511 containerd[1962]: time="2026-01-14T01:26:56.411370913Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:26:56.413718 containerd[1962]: time="2026-01-14T01:26:56.413650296Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 14 01:26:56.413718 containerd[1962]: time="2026-01-14T01:26:56.413684850Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 14 01:26:56.413942 kubelet[3571]: E0114 01:26:56.413910 3571 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 01:26:56.414843 kubelet[3571]: E0114 01:26:56.413968 3571 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 01:26:56.414843 kubelet[3571]: E0114 01:26:56.414096 3571 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:b81ad817aa184d10b98d3fd4131cf440,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-f9czx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-54d55768dc-v7mtb_calico-system(47e80bc5-cd92-40b2-a579-aec98175c1b6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 14 01:26:56.416456 containerd[1962]: time="2026-01-14T01:26:56.416406305Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 14 01:26:56.416000 audit[6032]: USER_ACCT pid=6032 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:26:56.418454 sshd[6032]: Accepted publickey for core from 4.153.228.146 port 46370 ssh2: RSA SHA256:ES3aJcA+M+pl5u1hk2HWRqxW4DXd1pPYtNeRk1B3mrI Jan 14 01:26:56.418000 audit[6032]: CRED_ACQ pid=6032 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:26:56.418000 audit[6032]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc2a508de0 a2=3 a3=0 items=0 ppid=1 pid=6032 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:56.418000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:26:56.421569 sshd-session[6032]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:26:56.428361 systemd-logind[1928]: New session 12 of user core. Jan 14 01:26:56.434110 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 14 01:26:56.437000 audit[6032]: USER_START pid=6032 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:26:56.440000 audit[6036]: CRED_ACQ pid=6036 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:26:56.703982 containerd[1962]: time="2026-01-14T01:26:56.703076525Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:26:56.706259 containerd[1962]: time="2026-01-14T01:26:56.705655422Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 14 01:26:56.706259 containerd[1962]: time="2026-01-14T01:26:56.705780263Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 14 01:26:56.706442 kubelet[3571]: E0114 01:26:56.705956 3571 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 01:26:56.706442 kubelet[3571]: E0114 01:26:56.706007 3571 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 01:26:56.706442 kubelet[3571]: E0114 01:26:56.706155 3571 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f9czx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-54d55768dc-v7mtb_calico-system(47e80bc5-cd92-40b2-a579-aec98175c1b6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 14 01:26:56.707773 kubelet[3571]: E0114 01:26:56.707691 3571 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-54d55768dc-v7mtb" podUID="47e80bc5-cd92-40b2-a579-aec98175c1b6" Jan 14 01:26:56.862956 sshd[6036]: Connection closed by 4.153.228.146 port 46370 Jan 14 01:26:56.864787 sshd-session[6032]: pam_unix(sshd:session): session closed for user core Jan 14 01:26:56.867679 kernel: kauditd_printk_skb: 15 callbacks suppressed Jan 14 01:26:56.867772 kernel: audit: type=1106 audit(1768354016.865:820): pid=6032 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close 
grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:26:56.865000 audit[6032]: USER_END pid=6032 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:26:56.870877 systemd[1]: sshd@10-172.31.18.46:22-4.153.228.146:46370.service: Deactivated successfully. Jan 14 01:26:56.865000 audit[6032]: CRED_DISP pid=6032 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:26:56.878597 kernel: audit: type=1104 audit(1768354016.865:821): pid=6032 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:26:56.874017 systemd[1]: session-12.scope: Deactivated successfully. Jan 14 01:26:56.876803 systemd-logind[1928]: Session 12 logged out. Waiting for processes to exit. Jan 14 01:26:56.878362 systemd-logind[1928]: Removed session 12. Jan 14 01:26:56.869000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-172.31.18.46:22-4.153.228.146:46370 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:26:56.883608 kernel: audit: type=1131 audit(1768354016.869:822): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-172.31.18.46:22-4.153.228.146:46370 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:26:56.966000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-172.31.18.46:22-4.153.228.146:46386 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:26:56.967925 systemd[1]: Started sshd@11-172.31.18.46:22-4.153.228.146:46386.service - OpenSSH per-connection server daemon (4.153.228.146:46386). Jan 14 01:26:56.973637 kernel: audit: type=1130 audit(1768354016.966:823): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-172.31.18.46:22-4.153.228.146:46386 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:26:57.444123 sshd[6046]: Accepted publickey for core from 4.153.228.146 port 46386 ssh2: RSA SHA256:ES3aJcA+M+pl5u1hk2HWRqxW4DXd1pPYtNeRk1B3mrI Jan 14 01:26:57.442000 audit[6046]: USER_ACCT pid=6046 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:26:57.448020 sshd-session[6046]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:26:57.451592 kernel: audit: type=1101 audit(1768354017.442:824): pid=6046 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:26:57.445000 audit[6046]: CRED_ACQ pid=6046 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:26:57.460582 kernel: audit: type=1103 audit(1768354017.445:825): pid=6046 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:26:57.466291 systemd-logind[1928]: New session 13 of user core. Jan 14 01:26:57.467576 kernel: audit: type=1006 audit(1768354017.445:826): pid=6046 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=13 res=1 Jan 14 01:26:57.445000 audit[6046]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffde4538d20 a2=3 a3=0 items=0 ppid=1 pid=6046 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:57.475585 kernel: audit: type=1300 audit(1768354017.445:826): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffde4538d20 a2=3 a3=0 items=0 ppid=1 pid=6046 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:57.445000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:26:57.477865 systemd[1]: Started session-13.scope - Session 13 of User core. 
Jan 14 01:26:57.480582 kernel: audit: type=1327 audit(1768354017.445:826): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:26:57.484000 audit[6046]: USER_START pid=6046 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:26:57.492758 kernel: audit: type=1105 audit(1768354017.484:827): pid=6046 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:26:57.491000 audit[6056]: CRED_ACQ pid=6056 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:26:57.804881 sshd[6056]: Connection closed by 4.153.228.146 port 46386 Jan 14 01:26:57.805821 sshd-session[6046]: pam_unix(sshd:session): session closed for user core Jan 14 01:26:57.805000 audit[6046]: USER_END pid=6046 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:26:57.806000 audit[6046]: CRED_DISP pid=6046 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:26:57.811615 systemd[1]: sshd@11-172.31.18.46:22-4.153.228.146:46386.service: Deactivated successfully. Jan 14 01:26:57.811623 systemd-logind[1928]: Session 13 logged out. Waiting for processes to exit. Jan 14 01:26:57.810000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-172.31.18.46:22-4.153.228.146:46386 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:26:57.815273 systemd[1]: session-13.scope: Deactivated successfully. Jan 14 01:26:57.817222 systemd-logind[1928]: Removed session 13. 
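Because kauditd's printk path is rate-limited ("kauditd_printk_skb: N callbacks suppressed"), the same audit events surface twice in this journal: once as userspace records with symbolic names (USER_START, CRED_DISP, SERVICE_STOP, ...) and once as kernel lines that only carry the numeric type. The numbers seen in this section map to the names as follows; a minimal lookup sketch covering just the types that actually appear here:

    import re

    # Numeric audit record types observed in this log
    # (values per include/uapi/linux/audit.h).
    AUDIT_TYPES = {
        1006: "LOGIN",
        1101: "USER_ACCT",
        1103: "CRED_ACQ",
        1104: "CRED_DISP",
        1105: "USER_START",
        1106: "USER_END",
        1130: "SERVICE_START",
        1131: "SERVICE_STOP",
        1300: "SYSCALL",
        1325: "NETFILTER_CFG",
        1327: "PROCTITLE",
    }

    type_re = re.compile(r"audit: type=(\d+)")

    def annotate(line: str) -> str:
        """Append the symbolic record name to a kernel 'audit: type=NNNN' line."""
        m = type_re.search(line)
        if not m:
            return line
        return f"{line.rstrip()}  [{AUDIT_TYPES.get(int(m.group(1)), 'UNKNOWN')}]"

    print(annotate("kernel: audit: type=1130 audit(1768354015.080:805): pid=1 ..."))
    # ...  [SERVICE_START]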
Jan 14 01:27:00.134815 containerd[1962]: time="2026-01-14T01:27:00.134737248Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 01:27:00.385987 containerd[1962]: time="2026-01-14T01:27:00.384946901Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:27:00.387437 containerd[1962]: time="2026-01-14T01:27:00.387269330Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 01:27:00.387437 containerd[1962]: time="2026-01-14T01:27:00.387360718Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 01:27:00.388196 kubelet[3571]: E0114 01:27:00.388153 3571 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:27:00.388675 kubelet[3571]: E0114 01:27:00.388206 3571 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:27:00.388675 kubelet[3571]: E0114 01:27:00.388377 3571 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jdvfp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-574d5c8798-9hhk4_calico-apiserver(525cee0b-846b-43ba-9b3e-e192ccc373a0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 01:27:00.390159 kubelet[3571]: E0114 01:27:00.390012 3571 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-574d5c8798-9hhk4" podUID="525cee0b-846b-43ba-9b3e-e192ccc373a0" Jan 14 01:27:01.139729 containerd[1962]: time="2026-01-14T01:27:01.139534640Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 14 01:27:01.648621 containerd[1962]: time="2026-01-14T01:27:01.648536528Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:27:01.651058 containerd[1962]: time="2026-01-14T01:27:01.650975593Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 14 01:27:01.651058 containerd[1962]: time="2026-01-14T01:27:01.651026050Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 14 01:27:01.651575 kubelet[3571]: E0114 01:27:01.651299 3571 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 01:27:01.659823 kubelet[3571]: E0114 01:27:01.651591 3571 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 01:27:01.659823 kubelet[3571]: E0114 01:27:01.654160 3571 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qs6c7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-zrh2z_calico-system(74b84cdc-323d-4b42-b95a-ceec7dfaa40f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 14 01:27:01.666159 containerd[1962]: time="2026-01-14T01:27:01.654229296Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 14 01:27:02.152606 containerd[1962]: time="2026-01-14T01:27:02.146727282Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:27:02.160749 containerd[1962]: time="2026-01-14T01:27:02.159260513Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 14 01:27:02.165056 containerd[1962]: time="2026-01-14T01:27:02.162353292Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 14 01:27:02.165823 kubelet[3571]: E0114 01:27:02.165610 3571 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 01:27:02.168707 kubelet[3571]: E0114 01:27:02.165834 3571 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 01:27:02.168822 kubelet[3571]: E0114 01:27:02.165709 3571 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-574d5c8798-6q7jw" podUID="4263d4be-fc9d-471e-8df9-42f06716a4f0" Jan 14 01:27:02.205912 containerd[1962]: time="2026-01-14T01:27:02.205866124Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 14 01:27:02.211043 kubelet[3571]: E0114 01:27:02.210949 3571 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zblph,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
goldmane-666569f655-5mdck_calico-system(993f578d-b707-42bd-b6e9-14c5aa23a03f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 14 01:27:02.223674 kubelet[3571]: E0114 01:27:02.223597 3571 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-5mdck" podUID="993f578d-b707-42bd-b6e9-14c5aa23a03f" Jan 14 01:27:02.515647 containerd[1962]: time="2026-01-14T01:27:02.504461685Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:27:02.519980 containerd[1962]: time="2026-01-14T01:27:02.519913409Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 14 01:27:02.520465 containerd[1962]: time="2026-01-14T01:27:02.519912179Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 14 01:27:02.523092 kubelet[3571]: E0114 01:27:02.520715 3571 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 01:27:02.524356 kubelet[3571]: E0114 01:27:02.524093 3571 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 01:27:02.524492 kubelet[3571]: E0114 01:27:02.524426 3571 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qs6c7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-zrh2z_calico-system(74b84cdc-323d-4b42-b95a-ceec7dfaa40f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 14 01:27:02.547661 kubelet[3571]: E0114 01:27:02.532158 3571 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-zrh2z" podUID="74b84cdc-323d-4b42-b95a-ceec7dfaa40f" Jan 14 01:27:02.992746 kernel: kauditd_printk_skb: 4 callbacks suppressed Jan 14 01:27:02.992868 kernel: audit: type=1130 audit(1768354022.968:832): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-172.31.18.46:22-4.153.228.146:46396 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:27:02.968000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-172.31.18.46:22-4.153.228.146:46396 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:27:02.969105 systemd[1]: Started sshd@12-172.31.18.46:22-4.153.228.146:46396.service - OpenSSH per-connection server daemon (4.153.228.146:46396). Jan 14 01:27:03.566000 audit[6073]: USER_ACCT pid=6073 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:27:03.569624 sshd[6073]: Accepted publickey for core from 4.153.228.146 port 46396 ssh2: RSA SHA256:ES3aJcA+M+pl5u1hk2HWRqxW4DXd1pPYtNeRk1B3mrI Jan 14 01:27:03.573136 sshd-session[6073]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:27:03.579715 kernel: audit: type=1101 audit(1768354023.566:833): pid=6073 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:27:03.579830 kernel: audit: type=1103 audit(1768354023.569:834): pid=6073 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:27:03.569000 audit[6073]: CRED_ACQ pid=6073 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:27:03.585205 kernel: audit: type=1006 audit(1768354023.569:835): pid=6073 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=14 res=1 Jan 14 01:27:03.569000 audit[6073]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdacebed60 a2=3 a3=0 items=0 ppid=1 pid=6073 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:27:03.589568 kernel: audit: type=1300 audit(1768354023.569:835): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdacebed60 a2=3 a3=0 items=0 ppid=1 pid=6073 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:27:03.589074 systemd-logind[1928]: New session 14 of user core. Jan 14 01:27:03.569000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:27:03.597139 kernel: audit: type=1327 audit(1768354023.569:835): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:27:03.599031 systemd[1]: Started session-14.scope - Session 14 of User core. 
Jan 14 01:27:03.603000 audit[6073]: USER_START pid=6073 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:27:03.608000 audit[6077]: CRED_ACQ pid=6077 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:27:03.613972 kernel: audit: type=1105 audit(1768354023.603:836): pid=6073 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:27:03.614079 kernel: audit: type=1103 audit(1768354023.608:837): pid=6077 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:27:04.053699 sshd[6077]: Connection closed by 4.153.228.146 port 46396 Jan 14 01:27:04.058085 sshd-session[6073]: pam_unix(sshd:session): session closed for user core Jan 14 01:27:04.060000 audit[6073]: USER_END pid=6073 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:27:04.060000 audit[6073]: CRED_DISP pid=6073 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:27:04.070970 kernel: audit: type=1106 audit(1768354024.060:838): pid=6073 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:27:04.071076 kernel: audit: type=1104 audit(1768354024.060:839): pid=6073 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:27:04.070615 systemd[1]: sshd@12-172.31.18.46:22-4.153.228.146:46396.service: Deactivated successfully. Jan 14 01:27:04.070691 systemd-logind[1928]: Session 14 logged out. Waiting for processes to exit. Jan 14 01:27:04.071000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-172.31.18.46:22-4.153.228.146:46396 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:27:04.076332 systemd[1]: session-14.scope: Deactivated successfully. Jan 14 01:27:04.079959 systemd-logind[1928]: Removed session 14. 
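Sessions 11 through 14 all follow the same rhythm: a connection from 4.153.228.146, a publickey accept for core, a session scope that lives for well under a second, then a clean PAM/audit teardown, which looks more like scripted test or automation traffic than interactive use. Pairing systemd-logind's "New session"/"Removed session" messages is enough to measure the durations; a sketch over an exported journal (file name hypothetical):

    import re
    from datetime import datetime

    LOG_PATH = "node-journal.txt"  # hypothetical journalctl export

    ts_re = re.compile(r"^(\w{3} +\d+ \d{2}:\d{2}:\d{2}\.\d+)")
    new_re = re.compile(r"New session (\d+) of user (\S+)\.")
    gone_re = re.compile(r"Removed session (\d+)\.")

    def parse_ts(text: str) -> datetime:
        # The journal prefix carries no year; 2026 is taken from the
        # containerd timestamps elsewhere in this log.
        return datetime.strptime(f"2026 {text}", "%Y %b %d %H:%M:%S.%f")

    opened = {}
    with open(LOG_PATH, encoding="utf-8", errors="replace") as fh:
        for line in fh:
            ts = ts_re.match(line)
            if not ts:
                continue
            when = parse_ts(ts.group(1))
            if (m := new_re.search(line)):
                opened[m.group(1)] = (m.group(2), when)
            elif (m := gone_re.search(line)) and m.group(1) in opened:
                user, start = opened.pop(m.group(1))
                dt = (when - start).total_seconds()
                print(f"session {m.group(1)} ({user}): {dt:.2f}s")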
Jan 14 01:27:05.130762 containerd[1962]: time="2026-01-14T01:27:05.130709075Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 14 01:27:05.420663 containerd[1962]: time="2026-01-14T01:27:05.420510193Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:27:05.423634 containerd[1962]: time="2026-01-14T01:27:05.423576347Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 14 01:27:05.423784 containerd[1962]: time="2026-01-14T01:27:05.423581616Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 14 01:27:05.424406 kubelet[3571]: E0114 01:27:05.424141 3571 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 01:27:05.424406 kubelet[3571]: E0114 01:27:05.424243 3571 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 01:27:05.424912 kubelet[3571]: E0114 01:27:05.424830 3571 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bnbz2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-5799fb7c8b-6l2xc_calico-system(89c07bdd-9dad-4c41-8dfe-3de894f6f743): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 14 01:27:05.426677 kubelet[3571]: E0114 01:27:05.426618 3571 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5799fb7c8b-6l2xc" podUID="89c07bdd-9dad-4c41-8dfe-3de894f6f743" Jan 14 01:27:06.133095 kubelet[3571]: E0114 01:27:06.133038 3571 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65b9745b8-fxksz" podUID="56394477-d28d-42eb-bee5-a9a20263c11f" Jan 14 01:27:08.177771 containerd[1962]: time="2026-01-14T01:27:08.176577855Z" level=info msg="StopPodSandbox for \"141312442e6f51403dcde0057e1ec6273173e13512ef91f0d3d037aa2b958d8e\"" Jan 14 01:27:08.332506 containerd[1962]: 2026-01-14 01:27:08.284 [WARNING][6128] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="141312442e6f51403dcde0057e1ec6273173e13512ef91f0d3d037aa2b958d8e" WorkloadEndpoint="ip--172--31--18--46-k8s-whisker--84845f84c5--f7vsl-eth0" Jan 14 01:27:08.332506 containerd[1962]: 2026-01-14 01:27:08.284 [INFO][6128] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="141312442e6f51403dcde0057e1ec6273173e13512ef91f0d3d037aa2b958d8e" Jan 14 01:27:08.332506 containerd[1962]: 2026-01-14 01:27:08.284 [INFO][6128] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="141312442e6f51403dcde0057e1ec6273173e13512ef91f0d3d037aa2b958d8e" iface="eth0" netns="" Jan 14 01:27:08.332506 containerd[1962]: 2026-01-14 01:27:08.284 [INFO][6128] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="141312442e6f51403dcde0057e1ec6273173e13512ef91f0d3d037aa2b958d8e" Jan 14 01:27:08.332506 containerd[1962]: 2026-01-14 01:27:08.284 [INFO][6128] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="141312442e6f51403dcde0057e1ec6273173e13512ef91f0d3d037aa2b958d8e" Jan 14 01:27:08.332506 containerd[1962]: 2026-01-14 01:27:08.316 [INFO][6135] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="141312442e6f51403dcde0057e1ec6273173e13512ef91f0d3d037aa2b958d8e" HandleID="k8s-pod-network.141312442e6f51403dcde0057e1ec6273173e13512ef91f0d3d037aa2b958d8e" Workload="ip--172--31--18--46-k8s-whisker--84845f84c5--f7vsl-eth0" Jan 14 01:27:08.332506 containerd[1962]: 2026-01-14 01:27:08.316 [INFO][6135] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 01:27:08.332506 containerd[1962]: 2026-01-14 01:27:08.316 [INFO][6135] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 14 01:27:08.332506 containerd[1962]: 2026-01-14 01:27:08.324 [WARNING][6135] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="141312442e6f51403dcde0057e1ec6273173e13512ef91f0d3d037aa2b958d8e" HandleID="k8s-pod-network.141312442e6f51403dcde0057e1ec6273173e13512ef91f0d3d037aa2b958d8e" Workload="ip--172--31--18--46-k8s-whisker--84845f84c5--f7vsl-eth0" Jan 14 01:27:08.332506 containerd[1962]: 2026-01-14 01:27:08.324 [INFO][6135] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="141312442e6f51403dcde0057e1ec6273173e13512ef91f0d3d037aa2b958d8e" HandleID="k8s-pod-network.141312442e6f51403dcde0057e1ec6273173e13512ef91f0d3d037aa2b958d8e" Workload="ip--172--31--18--46-k8s-whisker--84845f84c5--f7vsl-eth0" Jan 14 01:27:08.332506 containerd[1962]: 2026-01-14 01:27:08.327 [INFO][6135] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 14 01:27:08.332506 containerd[1962]: 2026-01-14 01:27:08.330 [INFO][6128] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="141312442e6f51403dcde0057e1ec6273173e13512ef91f0d3d037aa2b958d8e" Jan 14 01:27:08.333437 containerd[1962]: time="2026-01-14T01:27:08.332616259Z" level=info msg="TearDown network for sandbox \"141312442e6f51403dcde0057e1ec6273173e13512ef91f0d3d037aa2b958d8e\" successfully" Jan 14 01:27:08.333437 containerd[1962]: time="2026-01-14T01:27:08.332661142Z" level=info msg="StopPodSandbox for \"141312442e6f51403dcde0057e1ec6273173e13512ef91f0d3d037aa2b958d8e\" returns successfully" Jan 14 01:27:08.347346 containerd[1962]: time="2026-01-14T01:27:08.347300020Z" level=info msg="RemovePodSandbox for \"141312442e6f51403dcde0057e1ec6273173e13512ef91f0d3d037aa2b958d8e\"" Jan 14 01:27:08.347346 containerd[1962]: time="2026-01-14T01:27:08.347349267Z" level=info msg="Forcibly stopping sandbox \"141312442e6f51403dcde0057e1ec6273173e13512ef91f0d3d037aa2b958d8e\"" Jan 14 01:27:08.440138 containerd[1962]: 2026-01-14 01:27:08.400 [WARNING][6151] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="141312442e6f51403dcde0057e1ec6273173e13512ef91f0d3d037aa2b958d8e" WorkloadEndpoint="ip--172--31--18--46-k8s-whisker--84845f84c5--f7vsl-eth0" Jan 14 01:27:08.440138 containerd[1962]: 2026-01-14 01:27:08.400 [INFO][6151] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="141312442e6f51403dcde0057e1ec6273173e13512ef91f0d3d037aa2b958d8e" Jan 14 01:27:08.440138 containerd[1962]: 2026-01-14 01:27:08.400 [INFO][6151] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="141312442e6f51403dcde0057e1ec6273173e13512ef91f0d3d037aa2b958d8e" iface="eth0" netns="" Jan 14 01:27:08.440138 containerd[1962]: 2026-01-14 01:27:08.400 [INFO][6151] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="141312442e6f51403dcde0057e1ec6273173e13512ef91f0d3d037aa2b958d8e" Jan 14 01:27:08.440138 containerd[1962]: 2026-01-14 01:27:08.400 [INFO][6151] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="141312442e6f51403dcde0057e1ec6273173e13512ef91f0d3d037aa2b958d8e" Jan 14 01:27:08.440138 containerd[1962]: 2026-01-14 01:27:08.424 [INFO][6159] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="141312442e6f51403dcde0057e1ec6273173e13512ef91f0d3d037aa2b958d8e" HandleID="k8s-pod-network.141312442e6f51403dcde0057e1ec6273173e13512ef91f0d3d037aa2b958d8e" Workload="ip--172--31--18--46-k8s-whisker--84845f84c5--f7vsl-eth0" Jan 14 01:27:08.440138 containerd[1962]: 2026-01-14 01:27:08.424 [INFO][6159] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 01:27:08.440138 containerd[1962]: 2026-01-14 01:27:08.424 [INFO][6159] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 14 01:27:08.440138 containerd[1962]: 2026-01-14 01:27:08.432 [WARNING][6159] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="141312442e6f51403dcde0057e1ec6273173e13512ef91f0d3d037aa2b958d8e" HandleID="k8s-pod-network.141312442e6f51403dcde0057e1ec6273173e13512ef91f0d3d037aa2b958d8e" Workload="ip--172--31--18--46-k8s-whisker--84845f84c5--f7vsl-eth0" Jan 14 01:27:08.440138 containerd[1962]: 2026-01-14 01:27:08.432 [INFO][6159] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="141312442e6f51403dcde0057e1ec6273173e13512ef91f0d3d037aa2b958d8e" HandleID="k8s-pod-network.141312442e6f51403dcde0057e1ec6273173e13512ef91f0d3d037aa2b958d8e" Workload="ip--172--31--18--46-k8s-whisker--84845f84c5--f7vsl-eth0" Jan 14 01:27:08.440138 containerd[1962]: 2026-01-14 01:27:08.435 [INFO][6159] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 14 01:27:08.440138 containerd[1962]: 2026-01-14 01:27:08.437 [INFO][6151] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="141312442e6f51403dcde0057e1ec6273173e13512ef91f0d3d037aa2b958d8e" Jan 14 01:27:08.440138 containerd[1962]: time="2026-01-14T01:27:08.439712458Z" level=info msg="TearDown network for sandbox \"141312442e6f51403dcde0057e1ec6273173e13512ef91f0d3d037aa2b958d8e\" successfully" Jan 14 01:27:08.447638 containerd[1962]: time="2026-01-14T01:27:08.447586869Z" level=info msg="Ensure that sandbox 141312442e6f51403dcde0057e1ec6273173e13512ef91f0d3d037aa2b958d8e in task-service has been cleanup successfully" Jan 14 01:27:08.465172 containerd[1962]: time="2026-01-14T01:27:08.465105984Z" level=info msg="RemovePodSandbox \"141312442e6f51403dcde0057e1ec6273173e13512ef91f0d3d037aa2b958d8e\" returns successfully" Jan 14 01:27:09.136032 systemd[1]: Started sshd@13-172.31.18.46:22-4.153.228.146:33532.service - OpenSSH per-connection server daemon (4.153.228.146:33532). Jan 14 01:27:09.134000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-172.31.18.46:22-4.153.228.146:33532 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:27:09.140503 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 01:27:09.140630 kernel: audit: type=1130 audit(1768354029.134:841): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-172.31.18.46:22-4.153.228.146:33532 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:27:09.630000 audit[6167]: USER_ACCT pid=6167 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:27:09.635803 sshd[6167]: Accepted publickey for core from 4.153.228.146 port 33532 ssh2: RSA SHA256:ES3aJcA+M+pl5u1hk2HWRqxW4DXd1pPYtNeRk1B3mrI Jan 14 01:27:09.636586 sshd-session[6167]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:27:09.638014 kernel: audit: type=1101 audit(1768354029.630:842): pid=6167 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:27:09.631000 audit[6167]: CRED_ACQ pid=6167 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:27:09.644606 kernel: audit: type=1103 audit(1768354029.631:843): pid=6167 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:27:09.644730 kernel: audit: type=1006 audit(1768354029.631:844): pid=6167 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=15 res=1 Jan 14 01:27:09.631000 audit[6167]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd14e6c550 a2=3 a3=0 items=0 ppid=1 pid=6167 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:27:09.631000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:27:09.654194 systemd-logind[1928]: New session 15 of user core. Jan 14 01:27:09.656097 kernel: audit: type=1300 audit(1768354029.631:844): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd14e6c550 a2=3 a3=0 items=0 ppid=1 pid=6167 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:27:09.656148 kernel: audit: type=1327 audit(1768354029.631:844): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:27:09.669826 systemd[1]: Started session-15.scope - Session 15 of User core. 
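The PROCTITLE field in the type=1327 records above is the process command line, hex-encoded with NUL bytes separating arguments; the value logged for these sshd sessions decodes to "sshd-session: core [priv]". A tiny standard-library decoder (nothing assumed beyond the hex string copied from the record above):

# Decode an audit PROCTITLE value: hex-encoded argv, NUL-separated between arguments.
def decode_proctitle(hexstr: str) -> str:
    return bytes.fromhex(hexstr).replace(b"\x00", b" ").decode()

print(decode_proctitle("737368642D73657373696F6E3A20636F7265205B707269765D"))
# prints: sshd-session: core [priv]

The same helper applied to the multi-argument PROCTITLE records further down (the iptables-restore entries) shows the NUL separators turning into spaces.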
Jan 14 01:27:09.672000 audit[6167]: USER_START pid=6167 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:27:09.675000 audit[6173]: CRED_ACQ pid=6173 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:27:09.681471 kernel: audit: type=1105 audit(1768354029.672:845): pid=6167 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:27:09.681587 kernel: audit: type=1103 audit(1768354029.675:846): pid=6173 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:27:10.217042 sshd[6173]: Connection closed by 4.153.228.146 port 33532 Jan 14 01:27:10.218753 sshd-session[6167]: pam_unix(sshd:session): session closed for user core Jan 14 01:27:10.218000 audit[6167]: USER_END pid=6167 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:27:10.227581 kernel: audit: type=1106 audit(1768354030.218:847): pid=6167 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:27:10.227773 systemd[1]: sshd@13-172.31.18.46:22-4.153.228.146:33532.service: Deactivated successfully. Jan 14 01:27:10.219000 audit[6167]: CRED_DISP pid=6167 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:27:10.231199 systemd[1]: session-15.scope: Deactivated successfully. Jan 14 01:27:10.226000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-172.31.18.46:22-4.153.228.146:33532 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:27:10.234603 kernel: audit: type=1104 audit(1768354030.219:848): pid=6167 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:27:10.234808 systemd-logind[1928]: Session 15 logged out. Waiting for processes to exit. Jan 14 01:27:10.237939 systemd-logind[1928]: Removed session 15. 
Jan 14 01:27:12.132903 kubelet[3571]: E0114 01:27:12.132779 3571 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-54d55768dc-v7mtb" podUID="47e80bc5-cd92-40b2-a579-aec98175c1b6" Jan 14 01:27:13.132182 kubelet[3571]: E0114 01:27:13.131952 3571 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-zrh2z" podUID="74b84cdc-323d-4b42-b95a-ceec7dfaa40f" Jan 14 01:27:14.133138 kubelet[3571]: E0114 01:27:14.131871 3571 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-5mdck" podUID="993f578d-b707-42bd-b6e9-14c5aa23a03f" Jan 14 01:27:14.133138 kubelet[3571]: E0114 01:27:14.132352 3571 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-574d5c8798-9hhk4" podUID="525cee0b-846b-43ba-9b3e-e192ccc373a0" Jan 14 01:27:15.139847 containerd[1962]: time="2026-01-14T01:27:15.139603046Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 01:27:15.303537 systemd[1]: Started sshd@14-172.31.18.46:22-4.153.228.146:58322.service - OpenSSH per-connection server daemon (4.153.228.146:58322). 
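Every Calico workload on this node is stuck in ImagePullBackOff because ghcr.io answers 404 Not Found for the flatcar/calico/*:v3.30.4 references. One way to confirm the tag is genuinely absent, independent of the node, is to ask the registry's v2 API for the manifest; the sketch below assumes the repository is public and anonymously pullable (the standard anonymous-token flow against GHCR):

import json
import urllib.error
import urllib.request

def tag_exists(repo: str, tag: str) -> bool:
    # Anonymous pull token for a public GHCR repository (assumes public access).
    tok = json.load(urllib.request.urlopen(
        f"https://ghcr.io/token?scope=repository:{repo}:pull"))["token"]
    req = urllib.request.Request(
        f"https://ghcr.io/v2/{repo}/manifests/{tag}",
        headers={
            "Authorization": f"Bearer {tok}",
            "Accept": "application/vnd.oci.image.index.v1+json, "
                      "application/vnd.docker.distribution.manifest.list.v2+json",
        },
        method="HEAD")
    try:
        urllib.request.urlopen(req)
        return True                    # 200: the tag resolves to a manifest
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return False               # same "not found" the kubelet reports above
        raise

print(tag_exists("flatcar/calico/kube-controllers", "v3.30.4"))

If this also reports False away from the node, the back-off loop in the log will continue until the tag is published or the pod specs reference a tag that exists.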
Jan 14 01:27:15.311311 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 01:27:15.314589 kernel: audit: type=1130 audit(1768354035.304:850): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-172.31.18.46:22-4.153.228.146:58322 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:27:15.304000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-172.31.18.46:22-4.153.228.146:58322 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:27:15.433402 containerd[1962]: time="2026-01-14T01:27:15.433123804Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:27:15.435434 containerd[1962]: time="2026-01-14T01:27:15.435377360Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 01:27:15.435598 containerd[1962]: time="2026-01-14T01:27:15.435471530Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 01:27:15.435713 kubelet[3571]: E0114 01:27:15.435672 3571 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:27:15.436053 kubelet[3571]: E0114 01:27:15.435725 3571 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:27:15.436053 kubelet[3571]: E0114 01:27:15.435860 3571 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zcshs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-574d5c8798-6q7jw_calico-apiserver(4263d4be-fc9d-471e-8df9-42f06716a4f0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 01:27:15.437645 kubelet[3571]: E0114 01:27:15.437544 3571 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-574d5c8798-6q7jw" podUID="4263d4be-fc9d-471e-8df9-42f06716a4f0" Jan 14 01:27:15.762000 audit[6187]: USER_ACCT pid=6187 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:27:15.767297 sshd-session[6187]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:27:15.769691 sshd[6187]: Accepted publickey for core from 4.153.228.146 port 58322 ssh2: RSA SHA256:ES3aJcA+M+pl5u1hk2HWRqxW4DXd1pPYtNeRk1B3mrI Jan 14 01:27:15.770578 kernel: audit: type=1101 audit(1768354035.762:851): pid=6187 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:27:15.764000 audit[6187]: CRED_ACQ pid=6187 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:27:15.776605 kernel: audit: type=1103 audit(1768354035.764:852): pid=6187 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:27:15.786987 kernel: audit: type=1006 audit(1768354035.764:853): pid=6187 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Jan 14 01:27:15.787097 kernel: audit: type=1300 audit(1768354035.764:853): arch=c000003e syscall=1 success=yes exit=3 a0=8 
a1=7ffff9341ed0 a2=3 a3=0 items=0 ppid=1 pid=6187 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:27:15.764000 audit[6187]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffff9341ed0 a2=3 a3=0 items=0 ppid=1 pid=6187 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:27:15.783712 systemd-logind[1928]: New session 16 of user core. Jan 14 01:27:15.764000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:27:15.790595 kernel: audit: type=1327 audit(1768354035.764:853): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:27:15.791826 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 14 01:27:15.794000 audit[6187]: USER_START pid=6187 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:27:15.803943 kernel: audit: type=1105 audit(1768354035.794:854): pid=6187 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:27:15.804075 kernel: audit: type=1103 audit(1768354035.802:855): pid=6191 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:27:15.802000 audit[6191]: CRED_ACQ pid=6191 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:27:16.170987 sshd[6191]: Connection closed by 4.153.228.146 port 58322 Jan 14 01:27:16.172752 sshd-session[6187]: pam_unix(sshd:session): session closed for user core Jan 14 01:27:16.173000 audit[6187]: USER_END pid=6187 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:27:16.178973 systemd[1]: sshd@14-172.31.18.46:22-4.153.228.146:58322.service: Deactivated successfully. Jan 14 01:27:16.181575 kernel: audit: type=1106 audit(1768354036.173:856): pid=6187 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:27:16.181948 systemd[1]: session-16.scope: Deactivated successfully. 
Jan 14 01:27:16.173000 audit[6187]: CRED_DISP pid=6187 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:27:16.186385 systemd-logind[1928]: Session 16 logged out. Waiting for processes to exit. Jan 14 01:27:16.176000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-172.31.18.46:22-4.153.228.146:58322 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:27:16.187583 kernel: audit: type=1104 audit(1768354036.173:857): pid=6187 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:27:16.188634 systemd-logind[1928]: Removed session 16. Jan 14 01:27:16.267421 systemd[1]: Started sshd@15-172.31.18.46:22-4.153.228.146:58332.service - OpenSSH per-connection server daemon (4.153.228.146:58332). Jan 14 01:27:16.266000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-172.31.18.46:22-4.153.228.146:58332 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:27:16.725000 audit[6203]: USER_ACCT pid=6203 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:27:16.727052 sshd[6203]: Accepted publickey for core from 4.153.228.146 port 58332 ssh2: RSA SHA256:ES3aJcA+M+pl5u1hk2HWRqxW4DXd1pPYtNeRk1B3mrI Jan 14 01:27:16.726000 audit[6203]: CRED_ACQ pid=6203 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:27:16.726000 audit[6203]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffdc0dcb20 a2=3 a3=0 items=0 ppid=1 pid=6203 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:27:16.726000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:27:16.728868 sshd-session[6203]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:27:16.736197 systemd-logind[1928]: New session 17 of user core. Jan 14 01:27:16.738815 systemd[1]: Started session-17.scope - Session 17 of User core. 
Jan 14 01:27:16.741000 audit[6203]: USER_START pid=6203 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:27:16.743000 audit[6207]: CRED_ACQ pid=6207 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:27:20.147854 containerd[1962]: time="2026-01-14T01:27:20.147798578Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 01:27:20.309578 sshd[6207]: Connection closed by 4.153.228.146 port 58332 Jan 14 01:27:20.311891 sshd-session[6203]: pam_unix(sshd:session): session closed for user core Jan 14 01:27:20.322687 kernel: kauditd_printk_skb: 9 callbacks suppressed Jan 14 01:27:20.322810 kernel: audit: type=1106 audit(1768354040.312:865): pid=6203 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:27:20.312000 audit[6203]: USER_END pid=6203 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:27:20.319030 systemd[1]: sshd@15-172.31.18.46:22-4.153.228.146:58332.service: Deactivated successfully. Jan 14 01:27:20.313000 audit[6203]: CRED_DISP pid=6203 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:27:20.324631 systemd[1]: session-17.scope: Deactivated successfully. Jan 14 01:27:20.327399 systemd-logind[1928]: Session 17 logged out. Waiting for processes to exit. Jan 14 01:27:20.329327 systemd-logind[1928]: Removed session 17. Jan 14 01:27:20.329572 kernel: audit: type=1104 audit(1768354040.313:866): pid=6203 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:27:20.317000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-172.31.18.46:22-4.153.228.146:58332 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:27:20.334575 kernel: audit: type=1131 audit(1768354040.317:867): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-172.31.18.46:22-4.153.228.146:58332 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:27:20.392048 systemd[1]: Started sshd@16-172.31.18.46:22-4.153.228.146:58346.service - OpenSSH per-connection server daemon (4.153.228.146:58346). 
Jan 14 01:27:20.390000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-172.31.18.46:22-4.153.228.146:58346 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:27:20.401601 kernel: audit: type=1130 audit(1768354040.390:868): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-172.31.18.46:22-4.153.228.146:58346 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:27:20.488825 containerd[1962]: time="2026-01-14T01:27:20.488778816Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:27:20.490802 containerd[1962]: time="2026-01-14T01:27:20.490747227Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 01:27:20.490937 containerd[1962]: time="2026-01-14T01:27:20.490838271Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 01:27:20.491075 kubelet[3571]: E0114 01:27:20.491015 3571 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:27:20.491075 kubelet[3571]: E0114 01:27:20.491066 3571 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:27:20.491916 kubelet[3571]: E0114 01:27:20.491205 3571 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h4r5x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-65b9745b8-fxksz_calico-apiserver(56394477-d28d-42eb-bee5-a9a20263c11f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 01:27:20.492810 kubelet[3571]: E0114 01:27:20.492696 3571 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65b9745b8-fxksz" podUID="56394477-d28d-42eb-bee5-a9a20263c11f" Jan 14 01:27:20.853000 audit[6225]: USER_ACCT pid=6225 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:27:20.857782 sshd-session[6225]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:27:20.859083 sshd[6225]: Accepted publickey for core from 4.153.228.146 port 58346 ssh2: RSA SHA256:ES3aJcA+M+pl5u1hk2HWRqxW4DXd1pPYtNeRk1B3mrI Jan 14 01:27:20.864596 kernel: audit: type=1101 audit(1768354040.853:869): pid=6225 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:27:20.864718 kernel: audit: type=1103 audit(1768354040.854:870): pid=6225 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:27:20.854000 audit[6225]: CRED_ACQ pid=6225 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:27:20.867032 kernel: audit: type=1006 audit(1768354040.855:871): pid=6225 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=18 res=1 Jan 14 01:27:20.855000 audit[6225]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd4c92cd50 a2=3 a3=0 items=0 
ppid=1 pid=6225 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:27:20.870668 kernel: audit: type=1300 audit(1768354040.855:871): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd4c92cd50 a2=3 a3=0 items=0 ppid=1 pid=6225 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:27:20.855000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:27:20.875878 kernel: audit: type=1327 audit(1768354040.855:871): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:27:20.879906 systemd-logind[1928]: New session 18 of user core. Jan 14 01:27:20.885848 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 14 01:27:20.888000 audit[6225]: USER_START pid=6225 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:27:20.896593 kernel: audit: type=1105 audit(1768354040.888:872): pid=6225 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:27:20.895000 audit[6229]: CRED_ACQ pid=6229 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:27:21.129651 kubelet[3571]: E0114 01:27:21.129322 3571 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5799fb7c8b-6l2xc" podUID="89c07bdd-9dad-4c41-8dfe-3de894f6f743" Jan 14 01:27:21.938000 audit[6249]: NETFILTER_CFG table=filter:155 family=2 entries=26 op=nft_register_rule pid=6249 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:27:21.938000 audit[6249]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7fff15db9810 a2=0 a3=7fff15db97fc items=0 ppid=3724 pid=6249 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:27:21.938000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:27:21.941000 audit[6249]: NETFILTER_CFG table=nat:156 family=2 entries=20 op=nft_register_rule pid=6249 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:27:21.941000 audit[6249]: SYSCALL 
arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7fff15db9810 a2=0 a3=0 items=0 ppid=3724 pid=6249 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:27:21.941000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:27:21.955000 audit[6251]: NETFILTER_CFG table=filter:157 family=2 entries=38 op=nft_register_rule pid=6251 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:27:21.955000 audit[6251]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffc4d940170 a2=0 a3=7ffc4d94015c items=0 ppid=3724 pid=6251 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:27:21.955000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:27:21.958000 audit[6251]: NETFILTER_CFG table=nat:158 family=2 entries=20 op=nft_register_rule pid=6251 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:27:21.958000 audit[6251]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffc4d940170 a2=0 a3=0 items=0 ppid=3724 pid=6251 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:27:21.958000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:27:22.020438 sshd[6229]: Connection closed by 4.153.228.146 port 58346 Jan 14 01:27:22.022736 sshd-session[6225]: pam_unix(sshd:session): session closed for user core Jan 14 01:27:22.023000 audit[6225]: USER_END pid=6225 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:27:22.023000 audit[6225]: CRED_DISP pid=6225 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:27:22.027421 systemd[1]: sshd@16-172.31.18.46:22-4.153.228.146:58346.service: Deactivated successfully. Jan 14 01:27:22.026000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-172.31.18.46:22-4.153.228.146:58346 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:27:22.030242 systemd[1]: session-18.scope: Deactivated successfully. Jan 14 01:27:22.034055 systemd-logind[1928]: Session 18 logged out. Waiting for processes to exit. Jan 14 01:27:22.035231 systemd-logind[1928]: Removed session 18. Jan 14 01:27:22.109116 systemd[1]: Started sshd@17-172.31.18.46:22-4.153.228.146:58358.service - OpenSSH per-connection server daemon (4.153.228.146:58358). 
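The NETFILTER_CFG records above are routine iptables-restore runs: comm is truncated to "iptables-restor", and the PROCTITLE value decodes (with the helper shown earlier) to iptables-restore -w 5 -W 100000 --noflush --counters, which matches kube-proxy's periodic rule sync. Each record carries the table and the number of rules registered, so the same saved journal text can be tallied per table; node.log is again an assumed path:

import re
from collections import Counter

# e.g. "NETFILTER_CFG table=filter:155 family=2 entries=26 op=nft_register_rule"
NFT = re.compile(r"NETFILTER_CFG table=(\w+):\d+ family=\d+ entries=(\d+) op=(\w+)")

totals = Counter()
with open("node.log") as fh:                      # assumed path to the log text above
    for line in fh:
        for table, entries, op in NFT.findall(line):
            totals[(table, op)] += int(entries)

for (table, op), count in sorted(totals.items()):
    print(f"{table:8s} {op:20s} {count:5d} rules")

Within this excerpt that comes to 26 + 38 filter rules and 20 + 20 nat rules registered in about twenty milliseconds, i.e. ordinary rule churn unrelated to the failing image pulls.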
Jan 14 01:27:22.108000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-172.31.18.46:22-4.153.228.146:58358 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:27:22.545000 audit[6256]: USER_ACCT pid=6256 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:27:22.547410 sshd[6256]: Accepted publickey for core from 4.153.228.146 port 58358 ssh2: RSA SHA256:ES3aJcA+M+pl5u1hk2HWRqxW4DXd1pPYtNeRk1B3mrI Jan 14 01:27:22.546000 audit[6256]: CRED_ACQ pid=6256 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:27:22.547000 audit[6256]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffbf6a24c0 a2=3 a3=0 items=0 ppid=1 pid=6256 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:27:22.547000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:27:22.549807 sshd-session[6256]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:27:22.555272 systemd-logind[1928]: New session 19 of user core. Jan 14 01:27:22.563838 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 14 01:27:22.565000 audit[6256]: USER_START pid=6256 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:27:22.567000 audit[6260]: CRED_ACQ pid=6260 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:27:23.311700 sshd[6260]: Connection closed by 4.153.228.146 port 58358 Jan 14 01:27:23.313792 sshd-session[6256]: pam_unix(sshd:session): session closed for user core Jan 14 01:27:23.313000 audit[6256]: USER_END pid=6256 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:27:23.313000 audit[6256]: CRED_DISP pid=6256 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:27:23.318156 systemd[1]: sshd@17-172.31.18.46:22-4.153.228.146:58358.service: Deactivated successfully. 
Jan 14 01:27:23.317000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-172.31.18.46:22-4.153.228.146:58358 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:27:23.324263 systemd[1]: session-19.scope: Deactivated successfully. Jan 14 01:27:23.327723 systemd-logind[1928]: Session 19 logged out. Waiting for processes to exit. Jan 14 01:27:23.330501 systemd-logind[1928]: Removed session 19. Jan 14 01:27:23.398998 systemd[1]: Started sshd@18-172.31.18.46:22-4.153.228.146:58362.service - OpenSSH per-connection server daemon (4.153.228.146:58362). Jan 14 01:27:23.397000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-172.31.18.46:22-4.153.228.146:58362 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:27:23.831000 audit[6270]: USER_ACCT pid=6270 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:27:23.833660 sshd[6270]: Accepted publickey for core from 4.153.228.146 port 58362 ssh2: RSA SHA256:ES3aJcA+M+pl5u1hk2HWRqxW4DXd1pPYtNeRk1B3mrI Jan 14 01:27:23.833000 audit[6270]: CRED_ACQ pid=6270 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:27:23.833000 audit[6270]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd6de0b760 a2=3 a3=0 items=0 ppid=1 pid=6270 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:27:23.833000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:27:23.835826 sshd-session[6270]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:27:23.843126 systemd-logind[1928]: New session 20 of user core. Jan 14 01:27:23.845829 systemd[1]: Started session-20.scope - Session 20 of User core. 
Jan 14 01:27:23.848000 audit[6270]: USER_START pid=6270 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:27:23.850000 audit[6274]: CRED_ACQ pid=6274 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:27:24.131828 containerd[1962]: time="2026-01-14T01:27:24.131792483Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 14 01:27:24.158567 sshd[6274]: Connection closed by 4.153.228.146 port 58362 Jan 14 01:27:24.160522 sshd-session[6270]: pam_unix(sshd:session): session closed for user core Jan 14 01:27:24.163000 audit[6270]: USER_END pid=6270 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:27:24.163000 audit[6270]: CRED_DISP pid=6270 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:27:24.168059 systemd[1]: sshd@18-172.31.18.46:22-4.153.228.146:58362.service: Deactivated successfully. Jan 14 01:27:24.167000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-172.31.18.46:22-4.153.228.146:58362 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:27:24.171510 systemd[1]: session-20.scope: Deactivated successfully. Jan 14 01:27:24.173053 systemd-logind[1928]: Session 20 logged out. Waiting for processes to exit. Jan 14 01:27:24.175151 systemd-logind[1928]: Removed session 20. 
Jan 14 01:27:24.406145 containerd[1962]: time="2026-01-14T01:27:24.405972656Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:27:24.408307 containerd[1962]: time="2026-01-14T01:27:24.408143258Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 14 01:27:24.408307 containerd[1962]: time="2026-01-14T01:27:24.408245372Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 14 01:27:24.409021 kubelet[3571]: E0114 01:27:24.408431 3571 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 01:27:24.409021 kubelet[3571]: E0114 01:27:24.408476 3571 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 01:27:24.409021 kubelet[3571]: E0114 01:27:24.408610 3571 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:b81ad817aa184d10b98d3fd4131cf440,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-f9czx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-54d55768dc-v7mtb_calico-system(47e80bc5-cd92-40b2-a579-aec98175c1b6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 14 01:27:24.411559 containerd[1962]: time="2026-01-14T01:27:24.411488101Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 14 01:27:24.661432 containerd[1962]: time="2026-01-14T01:27:24.661282504Z" level=info msg="fetch 
failed after status: 404 Not Found" host=ghcr.io Jan 14 01:27:24.663621 containerd[1962]: time="2026-01-14T01:27:24.663537582Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 14 01:27:24.663783 containerd[1962]: time="2026-01-14T01:27:24.663574868Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 14 01:27:24.663843 kubelet[3571]: E0114 01:27:24.663799 3571 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 01:27:24.663886 kubelet[3571]: E0114 01:27:24.663841 3571 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 01:27:24.664016 kubelet[3571]: E0114 01:27:24.663947 3571 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f9czx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-54d55768dc-v7mtb_calico-system(47e80bc5-cd92-40b2-a579-aec98175c1b6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 14 01:27:24.665301 kubelet[3571]: E0114 01:27:24.665236 3571 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-54d55768dc-v7mtb" podUID="47e80bc5-cd92-40b2-a579-aec98175c1b6" Jan 14 01:27:25.131311 containerd[1962]: time="2026-01-14T01:27:25.130659800Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 01:27:25.416610 containerd[1962]: time="2026-01-14T01:27:25.416452683Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:27:25.418765 containerd[1962]: time="2026-01-14T01:27:25.418712662Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 01:27:25.418937 containerd[1962]: time="2026-01-14T01:27:25.418746835Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 01:27:25.418988 kubelet[3571]: E0114 01:27:25.418956 3571 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:27:25.419332 kubelet[3571]: E0114 01:27:25.419000 3571 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:27:25.419332 kubelet[3571]: E0114 01:27:25.419122 3571 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jdvfp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-574d5c8798-9hhk4_calico-apiserver(525cee0b-846b-43ba-9b3e-e192ccc373a0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 01:27:25.420346 kubelet[3571]: E0114 01:27:25.420303 3571 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-574d5c8798-9hhk4" podUID="525cee0b-846b-43ba-9b3e-e192ccc373a0" Jan 14 01:27:26.131261 containerd[1962]: time="2026-01-14T01:27:26.131037425Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 14 01:27:26.428869 containerd[1962]: time="2026-01-14T01:27:26.428726099Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:27:26.431234 containerd[1962]: time="2026-01-14T01:27:26.431118638Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 14 01:27:26.431234 containerd[1962]: time="2026-01-14T01:27:26.431165382Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 14 01:27:26.431608 kubelet[3571]: E0114 01:27:26.431528 3571 
log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 01:27:26.431963 kubelet[3571]: E0114 01:27:26.431613 3571 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 01:27:26.432127 kubelet[3571]: E0114 01:27:26.432054 3571 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qs6c7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-zrh2z_calico-system(74b84cdc-323d-4b42-b95a-ceec7dfaa40f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 14 01:27:26.434577 containerd[1962]: time="2026-01-14T01:27:26.434338199Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 14 01:27:26.715599 containerd[1962]: time="2026-01-14T01:27:26.715233400Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:27:26.717602 containerd[1962]: time="2026-01-14T01:27:26.717518149Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 14 01:27:26.717726 containerd[1962]: time="2026-01-14T01:27:26.717643077Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 14 01:27:26.717868 kubelet[3571]: E0114 01:27:26.717802 3571 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 01:27:26.717868 kubelet[3571]: E0114 01:27:26.717852 3571 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 01:27:26.718475 kubelet[3571]: E0114 01:27:26.717974 3571 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qs6c7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-zrh2z_calico-system(74b84cdc-323d-4b42-b95a-ceec7dfaa40f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 14 01:27:26.719365 kubelet[3571]: E0114 01:27:26.719320 3571 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with 
ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-zrh2z" podUID="74b84cdc-323d-4b42-b95a-ceec7dfaa40f" Jan 14 01:27:27.665000 audit[6287]: NETFILTER_CFG table=filter:159 family=2 entries=26 op=nft_register_rule pid=6287 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:27:27.668888 kernel: kauditd_printk_skb: 38 callbacks suppressed Jan 14 01:27:27.668961 kernel: audit: type=1325 audit(1768354047.665:899): table=filter:159 family=2 entries=26 op=nft_register_rule pid=6287 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:27:27.678117 kernel: audit: type=1300 audit(1768354047.665:899): arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffd9e9c3c70 a2=0 a3=7ffd9e9c3c5c items=0 ppid=3724 pid=6287 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:27:27.665000 audit[6287]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffd9e9c3c70 a2=0 a3=7ffd9e9c3c5c items=0 ppid=3724 pid=6287 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:27:27.665000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:27:27.681227 kernel: audit: type=1327 audit(1768354047.665:899): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:27:27.684571 kernel: audit: type=1325 audit(1768354047.673:900): table=nat:160 family=2 entries=104 op=nft_register_chain pid=6287 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:27:27.673000 audit[6287]: NETFILTER_CFG table=nat:160 family=2 entries=104 op=nft_register_chain pid=6287 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:27:27.673000 audit[6287]: SYSCALL arch=c000003e syscall=46 success=yes exit=48684 a0=3 a1=7ffd9e9c3c70 a2=0 a3=7ffd9e9c3c5c items=0 ppid=3724 pid=6287 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:27:27.691921 kernel: audit: type=1300 audit(1768354047.673:900): arch=c000003e syscall=46 success=yes exit=48684 a0=3 a1=7ffd9e9c3c70 a2=0 a3=7ffd9e9c3c5c items=0 ppid=3724 pid=6287 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:27:27.673000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:27:27.697653 kernel: audit: type=1327 audit(1768354047.673:900): 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:27:28.148869 kubelet[3571]: E0114 01:27:28.148805 3571 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-574d5c8798-6q7jw" podUID="4263d4be-fc9d-471e-8df9-42f06716a4f0" Jan 14 01:27:29.130646 containerd[1962]: time="2026-01-14T01:27:29.130610054Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 14 01:27:29.248428 systemd[1]: Started sshd@19-172.31.18.46:22-4.153.228.146:50374.service - OpenSSH per-connection server daemon (4.153.228.146:50374). Jan 14 01:27:29.246000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-172.31.18.46:22-4.153.228.146:50374 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:27:29.253613 kernel: audit: type=1130 audit(1768354049.246:901): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-172.31.18.46:22-4.153.228.146:50374 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:27:29.372657 containerd[1962]: time="2026-01-14T01:27:29.372598963Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:27:29.374849 containerd[1962]: time="2026-01-14T01:27:29.374801258Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 14 01:27:29.374977 containerd[1962]: time="2026-01-14T01:27:29.374826492Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 14 01:27:29.375223 kubelet[3571]: E0114 01:27:29.375161 3571 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 01:27:29.375223 kubelet[3571]: E0114 01:27:29.375209 3571 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 01:27:29.376615 kubelet[3571]: E0114 01:27:29.375348 3571 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zblph,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-5mdck_calico-system(993f578d-b707-42bd-b6e9-14c5aa23a03f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 14 01:27:29.376975 kubelet[3571]: E0114 01:27:29.376733 3571 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-5mdck" podUID="993f578d-b707-42bd-b6e9-14c5aa23a03f" Jan 14 01:27:29.711000 audit[6289]: USER_ACCT pid=6289 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting 
grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:27:29.715859 sshd[6289]: Accepted publickey for core from 4.153.228.146 port 50374 ssh2: RSA SHA256:ES3aJcA+M+pl5u1hk2HWRqxW4DXd1pPYtNeRk1B3mrI Jan 14 01:27:29.717081 sshd-session[6289]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:27:29.720598 kernel: audit: type=1101 audit(1768354049.711:902): pid=6289 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:27:29.720692 kernel: audit: type=1103 audit(1768354049.711:903): pid=6289 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:27:29.711000 audit[6289]: CRED_ACQ pid=6289 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:27:29.726596 kernel: audit: type=1006 audit(1768354049.711:904): pid=6289 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=21 res=1 Jan 14 01:27:29.711000 audit[6289]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc673153f0 a2=3 a3=0 items=0 ppid=1 pid=6289 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:27:29.711000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:27:29.731124 systemd-logind[1928]: New session 21 of user core. Jan 14 01:27:29.734806 systemd[1]: Started session-21.scope - Session 21 of User core. 
Jan 14 01:27:29.738000 audit[6289]: USER_START pid=6289 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:27:29.740000 audit[6295]: CRED_ACQ pid=6295 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:27:30.030413 sshd[6295]: Connection closed by 4.153.228.146 port 50374 Jan 14 01:27:30.033073 sshd-session[6289]: pam_unix(sshd:session): session closed for user core Jan 14 01:27:30.041000 audit[6289]: USER_END pid=6289 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:27:30.041000 audit[6289]: CRED_DISP pid=6289 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:27:30.049453 systemd[1]: sshd@19-172.31.18.46:22-4.153.228.146:50374.service: Deactivated successfully. Jan 14 01:27:30.051000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-172.31.18.46:22-4.153.228.146:50374 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:27:30.057945 systemd[1]: session-21.scope: Deactivated successfully. Jan 14 01:27:30.065934 systemd-logind[1928]: Session 21 logged out. Waiting for processes to exit. Jan 14 01:27:30.069921 systemd-logind[1928]: Removed session 21. 
Jan 14 01:27:32.131186 kubelet[3571]: E0114 01:27:32.131129 3571 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65b9745b8-fxksz" podUID="56394477-d28d-42eb-bee5-a9a20263c11f" Jan 14 01:27:32.133906 containerd[1962]: time="2026-01-14T01:27:32.133688118Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 14 01:27:32.447006 containerd[1962]: time="2026-01-14T01:27:32.446863917Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:27:32.449167 containerd[1962]: time="2026-01-14T01:27:32.449105161Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 14 01:27:32.449323 containerd[1962]: time="2026-01-14T01:27:32.449159377Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 14 01:27:32.449423 kubelet[3571]: E0114 01:27:32.449381 3571 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 01:27:32.449491 kubelet[3571]: E0114 01:27:32.449434 3571 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 01:27:32.449750 kubelet[3571]: E0114 01:27:32.449637 3571 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bnbz2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-5799fb7c8b-6l2xc_calico-system(89c07bdd-9dad-4c41-8dfe-3de894f6f743): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 14 01:27:32.450817 kubelet[3571]: E0114 01:27:32.450759 3571 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5799fb7c8b-6l2xc" podUID="89c07bdd-9dad-4c41-8dfe-3de894f6f743" Jan 14 01:27:35.129450 systemd[1]: Started sshd@20-172.31.18.46:22-4.153.228.146:54338.service - OpenSSH per-connection server daemon (4.153.228.146:54338). 
Jan 14 01:27:35.132264 kernel: kauditd_printk_skb: 7 callbacks suppressed Jan 14 01:27:35.132348 kernel: audit: type=1130 audit(1768354055.128:910): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-172.31.18.46:22-4.153.228.146:54338 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:27:35.128000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-172.31.18.46:22-4.153.228.146:54338 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:27:35.614000 audit[6332]: USER_ACCT pid=6332 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:27:35.619197 sshd-session[6332]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:27:35.617000 audit[6332]: CRED_ACQ pid=6332 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:27:35.622297 sshd[6332]: Accepted publickey for core from 4.153.228.146 port 54338 ssh2: RSA SHA256:ES3aJcA+M+pl5u1hk2HWRqxW4DXd1pPYtNeRk1B3mrI Jan 14 01:27:35.623593 kernel: audit: type=1101 audit(1768354055.614:911): pid=6332 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:27:35.623686 kernel: audit: type=1103 audit(1768354055.617:912): pid=6332 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:27:35.627717 kernel: audit: type=1006 audit(1768354055.617:913): pid=6332 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=22 res=1 Jan 14 01:27:35.617000 audit[6332]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffea7865d00 a2=3 a3=0 items=0 ppid=1 pid=6332 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:27:35.632792 kernel: audit: type=1300 audit(1768354055.617:913): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffea7865d00 a2=3 a3=0 items=0 ppid=1 pid=6332 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:27:35.635657 systemd-logind[1928]: New session 22 of user core. Jan 14 01:27:35.617000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:27:35.640574 kernel: audit: type=1327 audit(1768354055.617:913): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:27:35.641841 systemd[1]: Started session-22.scope - Session 22 of User core. 
Jan 14 01:27:35.644000 audit[6332]: USER_START pid=6332 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:27:35.653672 kernel: audit: type=1105 audit(1768354055.644:914): pid=6332 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:27:35.652000 audit[6336]: CRED_ACQ pid=6336 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:27:35.658616 kernel: audit: type=1103 audit(1768354055.652:915): pid=6336 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:27:35.989019 sshd[6336]: Connection closed by 4.153.228.146 port 54338 Jan 14 01:27:35.990727 sshd-session[6332]: pam_unix(sshd:session): session closed for user core Jan 14 01:27:35.990000 audit[6332]: USER_END pid=6332 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:27:35.995905 systemd-logind[1928]: Session 22 logged out. Waiting for processes to exit. Jan 14 01:27:35.997391 systemd[1]: sshd@20-172.31.18.46:22-4.153.228.146:54338.service: Deactivated successfully. Jan 14 01:27:35.999566 kernel: audit: type=1106 audit(1768354055.990:916): pid=6332 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:27:36.005250 kernel: audit: type=1104 audit(1768354055.990:917): pid=6332 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:27:35.990000 audit[6332]: CRED_DISP pid=6332 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:27:36.002228 systemd[1]: session-22.scope: Deactivated successfully. Jan 14 01:27:35.996000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-172.31.18.46:22-4.153.228.146:54338 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:27:36.008532 systemd-logind[1928]: Removed session 22. 
Jan 14 01:27:40.132759 kubelet[3571]: E0114 01:27:40.132711 3571 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-574d5c8798-9hhk4" podUID="525cee0b-846b-43ba-9b3e-e192ccc373a0" Jan 14 01:27:40.135306 kubelet[3571]: E0114 01:27:40.135223 3571 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-574d5c8798-6q7jw" podUID="4263d4be-fc9d-471e-8df9-42f06716a4f0" Jan 14 01:27:40.136566 kubelet[3571]: E0114 01:27:40.136002 3571 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-54d55768dc-v7mtb" podUID="47e80bc5-cd92-40b2-a579-aec98175c1b6" Jan 14 01:27:40.136566 kubelet[3571]: E0114 01:27:40.136162 3571 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-zrh2z" podUID="74b84cdc-323d-4b42-b95a-ceec7dfaa40f" Jan 14 01:27:41.092104 systemd[1]: Started sshd@21-172.31.18.46:22-4.153.228.146:54350.service - OpenSSH per-connection server daemon (4.153.228.146:54350). 
Jan 14 01:27:41.098858 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 01:27:41.099898 kernel: audit: type=1130 audit(1768354061.090:919): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-172.31.18.46:22-4.153.228.146:54350 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:27:41.090000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-172.31.18.46:22-4.153.228.146:54350 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:27:41.639000 audit[6350]: USER_ACCT pid=6350 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:27:41.645779 sshd[6350]: Accepted publickey for core from 4.153.228.146 port 54350 ssh2: RSA SHA256:ES3aJcA+M+pl5u1hk2HWRqxW4DXd1pPYtNeRk1B3mrI Jan 14 01:27:41.648069 kernel: audit: type=1101 audit(1768354061.639:920): pid=6350 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:27:41.652000 audit[6350]: CRED_ACQ pid=6350 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:27:41.659928 sshd-session[6350]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:27:41.661257 kernel: audit: type=1103 audit(1768354061.652:921): pid=6350 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:27:41.661307 kernel: audit: type=1006 audit(1768354061.655:922): pid=6350 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1 Jan 14 01:27:41.655000 audit[6350]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd4f898370 a2=3 a3=0 items=0 ppid=1 pid=6350 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:27:41.670595 kernel: audit: type=1300 audit(1768354061.655:922): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd4f898370 a2=3 a3=0 items=0 ppid=1 pid=6350 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:27:41.655000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:27:41.674574 kernel: audit: type=1327 audit(1768354061.655:922): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:27:41.683192 systemd-logind[1928]: New session 23 of user core. Jan 14 01:27:41.686996 systemd[1]: Started session-23.scope - Session 23 of User core. 
Jan 14 01:27:41.691000 audit[6350]: USER_START pid=6350 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:27:41.701820 kernel: audit: type=1105 audit(1768354061.691:923): pid=6350 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:27:41.700000 audit[6354]: CRED_ACQ pid=6354 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:27:41.708587 kernel: audit: type=1103 audit(1768354061.700:924): pid=6354 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:27:42.373813 sshd[6354]: Connection closed by 4.153.228.146 port 54350 Jan 14 01:27:42.376996 sshd-session[6350]: pam_unix(sshd:session): session closed for user core Jan 14 01:27:42.377000 audit[6350]: USER_END pid=6350 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:27:42.387854 kernel: audit: type=1106 audit(1768354062.377:925): pid=6350 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:27:42.393719 kernel: audit: type=1104 audit(1768354062.377:926): pid=6350 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:27:42.377000 audit[6350]: CRED_DISP pid=6350 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:27:42.398457 systemd[1]: sshd@21-172.31.18.46:22-4.153.228.146:54350.service: Deactivated successfully. Jan 14 01:27:42.397000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-172.31.18.46:22-4.153.228.146:54350 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:27:42.404051 systemd[1]: session-23.scope: Deactivated successfully. Jan 14 01:27:42.409262 systemd-logind[1928]: Session 23 logged out. Waiting for processes to exit. Jan 14 01:27:42.411043 systemd-logind[1928]: Removed session 23. 
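The SSH sessions above leave paired audit records: each named event (SERVICE_START, USER_ACCT, CRED_ACQ, USER_START, USER_END, CRED_DISP) also surfaces as a raw kernel line of the form "audit: type=NNNN". For readers decoding those numeric types, here is a minimal Go sketch of the code-to-name mapping implied by the pairings in this log; the values are the standard Linux audit record types, and only types that actually occur in this log are listed.

package main

import "fmt"

// auditTypes maps the numeric "type=" values printed by kauditd in this log
// to their named audit record types (as paired in the messages above).
var auditTypes = map[int]string{
	1006: "LOGIN",         // auid/session assignment for the new SSH session
	1101: "USER_ACCT",     // PAM accounting check
	1103: "CRED_ACQ",      // PAM credential acquisition
	1104: "CRED_DISP",     // PAM credential disposal
	1105: "USER_START",    // PAM session open
	1106: "USER_END",      // PAM session close
	1130: "SERVICE_START", // systemd unit started (the per-connection sshd@... unit)
	1131: "SERVICE_STOP",  // systemd unit stopped
	1300: "SYSCALL",       // syscall audit record
	1327: "PROCTITLE",     // hex-encoded process title
	1334: "BPF",           // BPF program load/unload
}

func main() {
	for code, name := range auditTypes {
		fmt.Printf("type=%d -> %s\n", code, name)
	}
}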
Jan 14 01:27:43.130604 kubelet[3571]: E0114 01:27:43.130528 3571 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5799fb7c8b-6l2xc" podUID="89c07bdd-9dad-4c41-8dfe-3de894f6f743" Jan 14 01:27:43.132796 kubelet[3571]: E0114 01:27:43.131989 3571 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-5mdck" podUID="993f578d-b707-42bd-b6e9-14c5aa23a03f" Jan 14 01:27:44.134570 kubelet[3571]: E0114 01:27:44.134314 3571 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65b9745b8-fxksz" podUID="56394477-d28d-42eb-bee5-a9a20263c11f" Jan 14 01:27:47.460701 systemd[1]: Started sshd@22-172.31.18.46:22-4.153.228.146:42652.service - OpenSSH per-connection server daemon (4.153.228.146:42652). Jan 14 01:27:47.460000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-172.31.18.46:22-4.153.228.146:42652 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:27:47.464350 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 01:27:47.464429 kernel: audit: type=1130 audit(1768354067.460:928): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-172.31.18.46:22-4.153.228.146:42652 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:27:47.946000 audit[6367]: USER_ACCT pid=6367 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:27:47.955078 kernel: audit: type=1101 audit(1768354067.946:929): pid=6367 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:27:47.955214 sshd[6367]: Accepted publickey for core from 4.153.228.146 port 42652 ssh2: RSA SHA256:ES3aJcA+M+pl5u1hk2HWRqxW4DXd1pPYtNeRk1B3mrI Jan 14 01:27:47.957444 sshd-session[6367]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:27:47.954000 audit[6367]: CRED_ACQ pid=6367 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:27:47.965085 kernel: audit: type=1103 audit(1768354067.954:930): pid=6367 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:27:47.965658 kernel: audit: type=1006 audit(1768354067.954:931): pid=6367 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1 Jan 14 01:27:47.954000 audit[6367]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff507d6380 a2=3 a3=0 items=0 ppid=1 pid=6367 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:27:47.969447 kernel: audit: type=1300 audit(1768354067.954:931): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff507d6380 a2=3 a3=0 items=0 ppid=1 pid=6367 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:27:47.954000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:27:47.974579 kernel: audit: type=1327 audit(1768354067.954:931): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:27:47.981237 systemd-logind[1928]: New session 24 of user core. Jan 14 01:27:47.984795 systemd[1]: Started session-24.scope - Session 24 of User core. 
Jan 14 01:27:47.989000 audit[6367]: USER_START pid=6367 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:27:47.992000 audit[6371]: CRED_ACQ pid=6371 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:27:47.997744 kernel: audit: type=1105 audit(1768354067.989:932): pid=6367 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:27:47.997810 kernel: audit: type=1103 audit(1768354067.992:933): pid=6371 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:27:48.461592 sshd[6371]: Connection closed by 4.153.228.146 port 42652 Jan 14 01:27:48.462245 sshd-session[6367]: pam_unix(sshd:session): session closed for user core Jan 14 01:27:48.463000 audit[6367]: USER_END pid=6367 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:27:48.468585 systemd[1]: sshd@22-172.31.18.46:22-4.153.228.146:42652.service: Deactivated successfully. Jan 14 01:27:48.473299 systemd[1]: session-24.scope: Deactivated successfully. Jan 14 01:27:48.473594 kernel: audit: type=1106 audit(1768354068.463:934): pid=6367 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:27:48.463000 audit[6367]: CRED_DISP pid=6367 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:27:48.477378 systemd-logind[1928]: Session 24 logged out. Waiting for processes to exit. Jan 14 01:27:48.481384 systemd-logind[1928]: Removed session 24. Jan 14 01:27:48.482048 kernel: audit: type=1104 audit(1768354068.463:935): pid=6367 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:27:48.467000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-172.31.18.46:22-4.153.228.146:42652 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:27:52.137438 kubelet[3571]: E0114 01:27:52.137377 3571 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-54d55768dc-v7mtb" podUID="47e80bc5-cd92-40b2-a579-aec98175c1b6" Jan 14 01:27:52.138467 kubelet[3571]: E0114 01:27:52.138220 3571 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-574d5c8798-9hhk4" podUID="525cee0b-846b-43ba-9b3e-e192ccc373a0" Jan 14 01:27:53.560075 systemd[1]: Started sshd@23-172.31.18.46:22-4.153.228.146:42660.service - OpenSSH per-connection server daemon (4.153.228.146:42660). Jan 14 01:27:53.561639 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 01:27:53.561692 kernel: audit: type=1130 audit(1768354073.558:937): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-172.31.18.46:22-4.153.228.146:42660 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:27:53.558000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-172.31.18.46:22-4.153.228.146:42660 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:27:54.067505 sshd[6385]: Accepted publickey for core from 4.153.228.146 port 42660 ssh2: RSA SHA256:ES3aJcA+M+pl5u1hk2HWRqxW4DXd1pPYtNeRk1B3mrI Jan 14 01:27:54.065000 audit[6385]: USER_ACCT pid=6385 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:27:54.071034 sshd-session[6385]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:27:54.075120 kernel: audit: type=1101 audit(1768354074.065:938): pid=6385 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:27:54.083580 kernel: audit: type=1103 audit(1768354074.066:939): pid=6385 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:27:54.066000 audit[6385]: CRED_ACQ pid=6385 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:27:54.081601 systemd-logind[1928]: New session 25 of user core. Jan 14 01:27:54.087177 systemd[1]: Started session-25.scope - Session 25 of User core. Jan 14 01:27:54.090594 kernel: audit: type=1006 audit(1768354074.066:940): pid=6385 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=25 res=1 Jan 14 01:27:54.066000 audit[6385]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffeed29df10 a2=3 a3=0 items=0 ppid=1 pid=6385 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:27:54.099937 kernel: audit: type=1300 audit(1768354074.066:940): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffeed29df10 a2=3 a3=0 items=0 ppid=1 pid=6385 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:27:54.066000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:27:54.112887 kernel: audit: type=1327 audit(1768354074.066:940): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:27:54.113025 kernel: audit: type=1105 audit(1768354074.091:941): pid=6385 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:27:54.091000 audit[6385]: USER_START pid=6385 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" 
exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:27:54.100000 audit[6389]: CRED_ACQ pid=6389 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:27:54.122571 kernel: audit: type=1103 audit(1768354074.100:942): pid=6389 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:27:54.130127 kubelet[3571]: E0114 01:27:54.130077 3571 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-574d5c8798-6q7jw" podUID="4263d4be-fc9d-471e-8df9-42f06716a4f0" Jan 14 01:27:54.432898 sshd[6389]: Connection closed by 4.153.228.146 port 42660 Jan 14 01:27:54.433176 sshd-session[6385]: pam_unix(sshd:session): session closed for user core Jan 14 01:27:54.434000 audit[6385]: USER_END pid=6385 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:27:54.439865 systemd[1]: sshd@23-172.31.18.46:22-4.153.228.146:42660.service: Deactivated successfully. Jan 14 01:27:54.443080 kernel: audit: type=1106 audit(1768354074.434:943): pid=6385 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:27:54.442448 systemd[1]: session-25.scope: Deactivated successfully. Jan 14 01:27:54.444664 systemd-logind[1928]: Session 25 logged out. Waiting for processes to exit. Jan 14 01:27:54.434000 audit[6385]: CRED_DISP pid=6385 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:27:54.447357 systemd-logind[1928]: Removed session 25. Jan 14 01:27:54.436000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-172.31.18.46:22-4.153.228.146:42660 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:27:54.451580 kernel: audit: type=1104 audit(1768354074.434:944): pid=6385 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:27:55.132226 kubelet[3571]: E0114 01:27:55.132118 3571 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-zrh2z" podUID="74b84cdc-323d-4b42-b95a-ceec7dfaa40f" Jan 14 01:27:56.132681 kubelet[3571]: E0114 01:27:56.132631 3571 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5799fb7c8b-6l2xc" podUID="89c07bdd-9dad-4c41-8dfe-3de894f6f743" Jan 14 01:27:58.135030 kubelet[3571]: E0114 01:27:58.134979 3571 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-5mdck" podUID="993f578d-b707-42bd-b6e9-14c5aa23a03f" Jan 14 01:27:59.137855 kubelet[3571]: E0114 01:27:59.130079 3571 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65b9745b8-fxksz" podUID="56394477-d28d-42eb-bee5-a9a20263c11f" Jan 14 01:28:03.201451 kubelet[3571]: E0114 01:28:03.197457 3571 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for 
\"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-54d55768dc-v7mtb" podUID="47e80bc5-cd92-40b2-a579-aec98175c1b6" Jan 14 01:28:06.132208 kubelet[3571]: E0114 01:28:06.132019 3571 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-zrh2z" podUID="74b84cdc-323d-4b42-b95a-ceec7dfaa40f" Jan 14 01:28:06.148863 containerd[1962]: time="2026-01-14T01:28:06.132125483Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 01:28:06.419489 containerd[1962]: time="2026-01-14T01:28:06.419324445Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:28:06.420667 containerd[1962]: time="2026-01-14T01:28:06.420588779Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 01:28:06.420868 containerd[1962]: time="2026-01-14T01:28:06.420697358Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 01:28:06.421130 kubelet[3571]: E0114 01:28:06.421028 3571 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:28:06.421204 kubelet[3571]: E0114 01:28:06.421142 3571 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:28:06.421377 kubelet[3571]: E0114 01:28:06.421316 3571 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zcshs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-574d5c8798-6q7jw_calico-apiserver(4263d4be-fc9d-471e-8df9-42f06716a4f0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 01:28:06.422718 kubelet[3571]: E0114 01:28:06.422543 3571 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-574d5c8798-6q7jw" podUID="4263d4be-fc9d-471e-8df9-42f06716a4f0" Jan 14 01:28:07.130250 containerd[1962]: time="2026-01-14T01:28:07.130203737Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 01:28:07.407154 containerd[1962]: time="2026-01-14T01:28:07.407019757Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:28:07.408255 containerd[1962]: time="2026-01-14T01:28:07.408205272Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 01:28:07.408660 kubelet[3571]: E0114 01:28:07.408483 3571 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:28:07.410478 kubelet[3571]: E0114 01:28:07.408664 3571 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:28:07.410478 kubelet[3571]: E0114 01:28:07.408866 3571 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jdvfp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-574d5c8798-9hhk4_calico-apiserver(525cee0b-846b-43ba-9b3e-e192ccc373a0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 01:28:07.410784 kubelet[3571]: E0114 01:28:07.410728 3571 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-574d5c8798-9hhk4" podUID="525cee0b-846b-43ba-9b3e-e192ccc373a0" Jan 14 01:28:07.419381 containerd[1962]: 
time="2026-01-14T01:28:07.408303863Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 01:28:08.947020 systemd[1]: cri-containerd-a3d024f4a41a21e584aca1946ba177f36e12ea53a2ea9eb2cfa56704ad6c8ed0.scope: Deactivated successfully. Jan 14 01:28:08.947849 systemd[1]: cri-containerd-a3d024f4a41a21e584aca1946ba177f36e12ea53a2ea9eb2cfa56704ad6c8ed0.scope: Consumed 13.167s CPU time, 104M memory peak, 44.8M read from disk. Jan 14 01:28:08.952000 audit: BPF prog-id=153 op=UNLOAD Jan 14 01:28:08.956129 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 01:28:08.956202 kernel: audit: type=1334 audit(1768354088.952:946): prog-id=153 op=UNLOAD Jan 14 01:28:08.957566 kernel: audit: type=1334 audit(1768354088.952:947): prog-id=157 op=UNLOAD Jan 14 01:28:08.952000 audit: BPF prog-id=157 op=UNLOAD Jan 14 01:28:08.974288 containerd[1962]: time="2026-01-14T01:28:08.974225021Z" level=info msg="received container exit event container_id:\"a3d024f4a41a21e584aca1946ba177f36e12ea53a2ea9eb2cfa56704ad6c8ed0\" id:\"a3d024f4a41a21e584aca1946ba177f36e12ea53a2ea9eb2cfa56704ad6c8ed0\" pid:3763 exit_status:1 exited_at:{seconds:1768354088 nanos:963521286}" Jan 14 01:28:09.026321 systemd[1]: cri-containerd-b5ba6c1df4f4d57fb70beab2e5aad0de88345f228e254222367f44b1e5f20d1e.scope: Deactivated successfully. Jan 14 01:28:09.028856 systemd[1]: cri-containerd-b5ba6c1df4f4d57fb70beab2e5aad0de88345f228e254222367f44b1e5f20d1e.scope: Consumed 3.779s CPU time, 105.8M memory peak, 96.4M read from disk. Jan 14 01:28:09.034670 kernel: audit: type=1334 audit(1768354089.030:948): prog-id=273 op=LOAD Jan 14 01:28:09.030000 audit: BPF prog-id=273 op=LOAD Jan 14 01:28:09.037887 kernel: audit: type=1334 audit(1768354089.030:949): prog-id=90 op=UNLOAD Jan 14 01:28:09.030000 audit: BPF prog-id=90 op=UNLOAD Jan 14 01:28:09.040424 kernel: audit: type=1334 audit(1768354089.031:950): prog-id=105 op=UNLOAD Jan 14 01:28:09.031000 audit: BPF prog-id=105 op=UNLOAD Jan 14 01:28:09.031000 audit: BPF prog-id=109 op=UNLOAD Jan 14 01:28:09.043265 kernel: audit: type=1334 audit(1768354089.031:951): prog-id=109 op=UNLOAD Jan 14 01:28:09.052784 containerd[1962]: time="2026-01-14T01:28:09.052720154Z" level=info msg="received container exit event container_id:\"b5ba6c1df4f4d57fb70beab2e5aad0de88345f228e254222367f44b1e5f20d1e\" id:\"b5ba6c1df4f4d57fb70beab2e5aad0de88345f228e254222367f44b1e5f20d1e\" pid:3115 exit_status:1 exited_at:{seconds:1768354089 nanos:51522029}" Jan 14 01:28:09.096132 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a3d024f4a41a21e584aca1946ba177f36e12ea53a2ea9eb2cfa56704ad6c8ed0-rootfs.mount: Deactivated successfully. Jan 14 01:28:09.130665 kubelet[3571]: E0114 01:28:09.130630 3571 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5799fb7c8b-6l2xc" podUID="89c07bdd-9dad-4c41-8dfe-3de894f6f743" Jan 14 01:28:09.156247 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b5ba6c1df4f4d57fb70beab2e5aad0de88345f228e254222367f44b1e5f20d1e-rootfs.mount: Deactivated successfully. 
Jan 14 01:28:09.244233 kubelet[3571]: I0114 01:28:09.244116 3571 scope.go:117] "RemoveContainer" containerID="b5ba6c1df4f4d57fb70beab2e5aad0de88345f228e254222367f44b1e5f20d1e" Jan 14 01:28:09.246124 kubelet[3571]: I0114 01:28:09.246098 3571 scope.go:117] "RemoveContainer" containerID="a3d024f4a41a21e584aca1946ba177f36e12ea53a2ea9eb2cfa56704ad6c8ed0" Jan 14 01:28:09.264870 containerd[1962]: time="2026-01-14T01:28:09.263616391Z" level=info msg="CreateContainer within sandbox \"c5810db3e706df4f748cd1e85159a282084b0ddf4c5a05434964fe519d47e3e1\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Jan 14 01:28:09.266005 containerd[1962]: time="2026-01-14T01:28:09.265815649Z" level=info msg="CreateContainer within sandbox \"c7ed81dad6a8059ce8db295698fb38b6faedfa6e42c0af4a767a1fc5e2ed0a64\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Jan 14 01:28:09.286584 containerd[1962]: time="2026-01-14T01:28:09.286407440Z" level=info msg="Container b51d7f6479ba66fefd05198e9cc3bc04510c20f4296ce4dde76cddae4d9a211d: CDI devices from CRI Config.CDIDevices: []" Jan 14 01:28:09.324166 containerd[1962]: time="2026-01-14T01:28:09.324130510Z" level=info msg="CreateContainer within sandbox \"c5810db3e706df4f748cd1e85159a282084b0ddf4c5a05434964fe519d47e3e1\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"b51d7f6479ba66fefd05198e9cc3bc04510c20f4296ce4dde76cddae4d9a211d\"" Jan 14 01:28:09.325580 containerd[1962]: time="2026-01-14T01:28:09.324893713Z" level=info msg="StartContainer for \"b51d7f6479ba66fefd05198e9cc3bc04510c20f4296ce4dde76cddae4d9a211d\"" Jan 14 01:28:09.326197 containerd[1962]: time="2026-01-14T01:28:09.326166369Z" level=info msg="connecting to shim b51d7f6479ba66fefd05198e9cc3bc04510c20f4296ce4dde76cddae4d9a211d" address="unix:///run/containerd/s/4af328783471a9f1a24e3b69147c03bc999fed72a7b8880bdf97e0404f5b67f3" protocol=ttrpc version=3 Jan 14 01:28:09.355101 systemd[1]: Started cri-containerd-b51d7f6479ba66fefd05198e9cc3bc04510c20f4296ce4dde76cddae4d9a211d.scope - libcontainer container b51d7f6479ba66fefd05198e9cc3bc04510c20f4296ce4dde76cddae4d9a211d. 
Jan 14 01:28:09.357569 containerd[1962]: time="2026-01-14T01:28:09.357456401Z" level=info msg="Container 85e6609297f075b78db4ceb00c9122539edfeca4f13d5be943c257b1f2d3cc28: CDI devices from CRI Config.CDIDevices: []" Jan 14 01:28:09.383168 containerd[1962]: time="2026-01-14T01:28:09.382296284Z" level=info msg="CreateContainer within sandbox \"c7ed81dad6a8059ce8db295698fb38b6faedfa6e42c0af4a767a1fc5e2ed0a64\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"85e6609297f075b78db4ceb00c9122539edfeca4f13d5be943c257b1f2d3cc28\"" Jan 14 01:28:09.385766 containerd[1962]: time="2026-01-14T01:28:09.385724027Z" level=info msg="StartContainer for \"85e6609297f075b78db4ceb00c9122539edfeca4f13d5be943c257b1f2d3cc28\"" Jan 14 01:28:09.384000 audit: BPF prog-id=274 op=LOAD Jan 14 01:28:09.389337 kernel: audit: type=1334 audit(1768354089.384:952): prog-id=274 op=LOAD Jan 14 01:28:09.389438 kernel: audit: type=1334 audit(1768354089.386:953): prog-id=275 op=LOAD Jan 14 01:28:09.386000 audit: BPF prog-id=275 op=LOAD Jan 14 01:28:09.386000 audit[6469]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=3660 pid=6469 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:28:09.391734 containerd[1962]: time="2026-01-14T01:28:09.391351585Z" level=info msg="connecting to shim 85e6609297f075b78db4ceb00c9122539edfeca4f13d5be943c257b1f2d3cc28" address="unix:///run/containerd/s/9f5d112a63c14da8f22ed9129aa95c531f9bf59d2a88288a5f71921b6229fdef" protocol=ttrpc version=3 Jan 14 01:28:09.397098 kernel: audit: type=1300 audit(1768354089.386:953): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=3660 pid=6469 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:28:09.386000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6235316437663634373962613636666566643035313938653963633362 Jan 14 01:28:09.404680 kernel: audit: type=1327 audit(1768354089.386:953): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6235316437663634373962613636666566643035313938653963633362 Jan 14 01:28:09.386000 audit: BPF prog-id=275 op=UNLOAD Jan 14 01:28:09.386000 audit[6469]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3660 pid=6469 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:28:09.386000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6235316437663634373962613636666566643035313938653963633362 Jan 14 01:28:09.388000 audit: BPF prog-id=276 op=LOAD Jan 14 01:28:09.388000 audit[6469]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=3660 pid=6469 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:28:09.388000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6235316437663634373962613636666566643035313938653963633362 Jan 14 01:28:09.388000 audit: BPF prog-id=277 op=LOAD Jan 14 01:28:09.388000 audit[6469]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=3660 pid=6469 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:28:09.388000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6235316437663634373962613636666566643035313938653963633362 Jan 14 01:28:09.388000 audit: BPF prog-id=277 op=UNLOAD Jan 14 01:28:09.388000 audit[6469]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3660 pid=6469 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:28:09.388000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6235316437663634373962613636666566643035313938653963633362 Jan 14 01:28:09.388000 audit: BPF prog-id=276 op=UNLOAD Jan 14 01:28:09.388000 audit[6469]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3660 pid=6469 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:28:09.388000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6235316437663634373962613636666566643035313938653963633362 Jan 14 01:28:09.388000 audit: BPF prog-id=278 op=LOAD Jan 14 01:28:09.388000 audit[6469]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=3660 pid=6469 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:28:09.388000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6235316437663634373962613636666566643035313938653963633362 Jan 14 01:28:09.453248 systemd[1]: Started cri-containerd-85e6609297f075b78db4ceb00c9122539edfeca4f13d5be943c257b1f2d3cc28.scope - libcontainer container 85e6609297f075b78db4ceb00c9122539edfeca4f13d5be943c257b1f2d3cc28. 
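The audit PROCTITLE fields above, for both the sshd-session and runc processes, are hex-encoded command lines with NUL bytes separating argv elements (the longer runc entries are cut off by the audit field limit). A small Go sketch decoding the sshd-session value that recurs in this log:

package main

import (
	"encoding/hex"
	"fmt"
	"log"
	"strings"
)

func main() {
	// PROCTITLE value from the sshd-session audit records in this log.
	const proctitle = "737368642D73657373696F6E3A20636F7265205B707269765D"

	raw, err := hex.DecodeString(proctitle)
	if err != nil {
		log.Fatalf("decode: %v", err)
	}
	// argv elements are NUL-separated; join them with spaces for display
	// (this matters for the multi-argument runc entries nearby).
	cmdline := strings.ReplaceAll(string(raw), "\x00", " ")
	fmt.Println(cmdline) // prints: sshd-session: core [priv]
}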
Jan 14 01:28:09.459539 containerd[1962]: time="2026-01-14T01:28:09.459431126Z" level=info msg="StartContainer for \"b51d7f6479ba66fefd05198e9cc3bc04510c20f4296ce4dde76cddae4d9a211d\" returns successfully" Jan 14 01:28:09.480000 audit: BPF prog-id=279 op=LOAD Jan 14 01:28:09.480000 audit: BPF prog-id=280 op=LOAD Jan 14 01:28:09.480000 audit[6489]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=2976 pid=6489 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:28:09.480000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3835653636303932393766303735623738646234636562303063393132 Jan 14 01:28:09.480000 audit: BPF prog-id=280 op=UNLOAD Jan 14 01:28:09.480000 audit[6489]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2976 pid=6489 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:28:09.480000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3835653636303932393766303735623738646234636562303063393132 Jan 14 01:28:09.481000 audit: BPF prog-id=281 op=LOAD Jan 14 01:28:09.481000 audit[6489]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=2976 pid=6489 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:28:09.481000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3835653636303932393766303735623738646234636562303063393132 Jan 14 01:28:09.481000 audit: BPF prog-id=282 op=LOAD Jan 14 01:28:09.481000 audit[6489]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=2976 pid=6489 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:28:09.481000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3835653636303932393766303735623738646234636562303063393132 Jan 14 01:28:09.481000 audit: BPF prog-id=282 op=UNLOAD Jan 14 01:28:09.481000 audit[6489]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2976 pid=6489 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:28:09.481000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3835653636303932393766303735623738646234636562303063393132 Jan 14 01:28:09.481000 audit: BPF prog-id=281 op=UNLOAD Jan 14 01:28:09.481000 audit[6489]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2976 pid=6489 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:28:09.481000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3835653636303932393766303735623738646234636562303063393132 Jan 14 01:28:09.481000 audit: BPF prog-id=283 op=LOAD Jan 14 01:28:09.481000 audit[6489]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=2976 pid=6489 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:28:09.481000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3835653636303932393766303735623738646234636562303063393132 Jan 14 01:28:09.542318 containerd[1962]: time="2026-01-14T01:28:09.541932232Z" level=info msg="StartContainer for \"85e6609297f075b78db4ceb00c9122539edfeca4f13d5be943c257b1f2d3cc28\" returns successfully" Jan 14 01:28:11.130181 containerd[1962]: time="2026-01-14T01:28:11.129891000Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 14 01:28:11.426599 containerd[1962]: time="2026-01-14T01:28:11.426425077Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:28:11.429222 containerd[1962]: time="2026-01-14T01:28:11.429154978Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 14 01:28:11.429385 containerd[1962]: time="2026-01-14T01:28:11.429268248Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 14 01:28:11.429496 kubelet[3571]: E0114 01:28:11.429444 3571 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 01:28:11.430008 kubelet[3571]: E0114 01:28:11.429508 3571 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 01:28:11.430008 kubelet[3571]: E0114 01:28:11.429723 3571 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zblph,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-5mdck_calico-system(993f578d-b707-42bd-b6e9-14c5aa23a03f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 14 01:28:11.430980 kubelet[3571]: E0114 01:28:11.430929 3571 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-5mdck" podUID="993f578d-b707-42bd-b6e9-14c5aa23a03f" Jan 14 01:28:11.758297 kubelet[3571]: E0114 01:28:11.758161 3571 controller.go:195] "Failed to update lease" err="Put 
\"https://172.31.18.46:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-46?timeout=10s\": context deadline exceeded" Jan 14 01:28:12.132412 containerd[1962]: time="2026-01-14T01:28:12.132362652Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 01:28:12.425064 containerd[1962]: time="2026-01-14T01:28:12.424788742Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:28:12.427360 containerd[1962]: time="2026-01-14T01:28:12.427278660Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 01:28:12.427360 containerd[1962]: time="2026-01-14T01:28:12.427324063Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 01:28:12.427590 kubelet[3571]: E0114 01:28:12.427500 3571 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:28:12.427590 kubelet[3571]: E0114 01:28:12.427570 3571 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:28:12.427737 kubelet[3571]: E0114 01:28:12.427695 3571 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h4r5x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-65b9745b8-fxksz_calico-apiserver(56394477-d28d-42eb-bee5-a9a20263c11f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 01:28:12.429128 kubelet[3571]: E0114 01:28:12.429077 3571 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65b9745b8-fxksz" podUID="56394477-d28d-42eb-bee5-a9a20263c11f" Jan 14 01:28:14.131075 containerd[1962]: time="2026-01-14T01:28:14.131037137Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 14 01:28:14.311931 systemd[1]: cri-containerd-7d5a13bb23ea033a089d6bc689e95c7f15a622bf8ff17eace8d6f57056edbdc7.scope: Deactivated successfully. Jan 14 01:28:14.312607 systemd[1]: cri-containerd-7d5a13bb23ea033a089d6bc689e95c7f15a622bf8ff17eace8d6f57056edbdc7.scope: Consumed 2.259s CPU time, 40.8M memory peak, 39.2M read from disk. Jan 14 01:28:14.316289 kernel: kauditd_printk_skb: 40 callbacks suppressed Jan 14 01:28:14.316363 kernel: audit: type=1334 audit(1768354094.313:968): prog-id=284 op=LOAD Jan 14 01:28:14.313000 audit: BPF prog-id=284 op=LOAD Jan 14 01:28:14.313000 audit: BPF prog-id=95 op=UNLOAD Jan 14 01:28:14.320580 kernel: audit: type=1334 audit(1768354094.313:969): prog-id=95 op=UNLOAD Jan 14 01:28:14.320639 containerd[1962]: time="2026-01-14T01:28:14.318972608Z" level=info msg="received container exit event container_id:\"7d5a13bb23ea033a089d6bc689e95c7f15a622bf8ff17eace8d6f57056edbdc7\" id:\"7d5a13bb23ea033a089d6bc689e95c7f15a622bf8ff17eace8d6f57056edbdc7\" pid:3135 exit_status:1 exited_at:{seconds:1768354094 nanos:314675891}" Jan 14 01:28:14.317000 audit: BPF prog-id=110 op=UNLOAD Jan 14 01:28:14.325575 kernel: audit: type=1334 audit(1768354094.317:970): prog-id=110 op=UNLOAD Jan 14 01:28:14.325668 kernel: audit: type=1334 audit(1768354094.317:971): prog-id=114 op=UNLOAD Jan 14 01:28:14.317000 audit: BPF prog-id=114 op=UNLOAD Jan 14 01:28:14.358215 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7d5a13bb23ea033a089d6bc689e95c7f15a622bf8ff17eace8d6f57056edbdc7-rootfs.mount: Deactivated successfully. 
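
Editor's note: the proctitle= fields in the audit records above are hex-encoded argv strings with NUL separators; decoding one recovers the runc invocation that containerd's shim ran for the task. The sketch below is not part of the captured log; the sample value is only the leading part of one recorded field (the full values continue with the containerd task path and container ID, which are left as recorded).

# Decode an audit PROCTITLE value (hex-encoded argv, NUL-separated) back into
# the command line it represents.
def decode_proctitle(hex_value: str) -> list[str]:
    raw = bytes.fromhex(hex_value)
    return [arg.decode("utf-8", "replace") for arg in raw.split(b"\x00") if arg]

sample = ("72756E63"              # "runc"
          "002D2D726F6F74"        # NUL + "--root"
          "002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F")  # NUL + "/run/containerd/runc/k8s.io"
print(decode_proctitle(sample))   # ['runc', '--root', '/run/containerd/runc/k8s.io']
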
Jan 14 01:28:14.376251 containerd[1962]: time="2026-01-14T01:28:14.376176125Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:28:14.378745 containerd[1962]: time="2026-01-14T01:28:14.378636700Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 14 01:28:14.379015 containerd[1962]: time="2026-01-14T01:28:14.378670497Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 14 01:28:14.379380 kubelet[3571]: E0114 01:28:14.379091 3571 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 01:28:14.379380 kubelet[3571]: E0114 01:28:14.379166 3571 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 01:28:14.379380 kubelet[3571]: E0114 01:28:14.379291 3571 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:b81ad817aa184d10b98d3fd4131cf440,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-f9czx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-54d55768dc-v7mtb_calico-system(47e80bc5-cd92-40b2-a579-aec98175c1b6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 14 01:28:14.381726 containerd[1962]: time="2026-01-14T01:28:14.381481038Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 14 01:28:14.700543 containerd[1962]: time="2026-01-14T01:28:14.700368844Z" level=info msg="fetch 
failed after status: 404 Not Found" host=ghcr.io Jan 14 01:28:14.702442 containerd[1962]: time="2026-01-14T01:28:14.702375220Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 14 01:28:14.702591 containerd[1962]: time="2026-01-14T01:28:14.702472308Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 14 01:28:14.702708 kubelet[3571]: E0114 01:28:14.702659 3571 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 01:28:14.702803 kubelet[3571]: E0114 01:28:14.702719 3571 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 01:28:14.702889 kubelet[3571]: E0114 01:28:14.702846 3571 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f9czx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-54d55768dc-v7mtb_calico-system(47e80bc5-cd92-40b2-a579-aec98175c1b6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 14 01:28:14.704102 kubelet[3571]: E0114 01:28:14.704059 3571 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-54d55768dc-v7mtb" podUID="47e80bc5-cd92-40b2-a579-aec98175c1b6" Jan 14 01:28:15.272399 kubelet[3571]: I0114 01:28:15.272367 3571 scope.go:117] "RemoveContainer" containerID="7d5a13bb23ea033a089d6bc689e95c7f15a622bf8ff17eace8d6f57056edbdc7" Jan 14 01:28:15.274603 containerd[1962]: time="2026-01-14T01:28:15.274508337Z" level=info msg="CreateContainer within sandbox \"c7fda0972c6386a7141f470e068bf7b70cc8ba6dc3ee9a67bf41139fa0e0b80d\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Jan 14 01:28:15.298595 containerd[1962]: time="2026-01-14T01:28:15.298029850Z" level=info msg="Container 2c3612d6e895cf0cd45f3abfbd3ce2e250a1603406ec9c9b9ff0eae60efba88b: CDI devices from CRI Config.CDIDevices: []" Jan 14 01:28:15.313569 containerd[1962]: time="2026-01-14T01:28:15.313506751Z" level=info msg="CreateContainer within sandbox \"c7fda0972c6386a7141f470e068bf7b70cc8ba6dc3ee9a67bf41139fa0e0b80d\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"2c3612d6e895cf0cd45f3abfbd3ce2e250a1603406ec9c9b9ff0eae60efba88b\"" Jan 14 01:28:15.314121 containerd[1962]: time="2026-01-14T01:28:15.314097317Z" level=info msg="StartContainer for \"2c3612d6e895cf0cd45f3abfbd3ce2e250a1603406ec9c9b9ff0eae60efba88b\"" Jan 14 01:28:15.315309 containerd[1962]: time="2026-01-14T01:28:15.315262540Z" level=info msg="connecting to shim 2c3612d6e895cf0cd45f3abfbd3ce2e250a1603406ec9c9b9ff0eae60efba88b" address="unix:///run/containerd/s/a38b3149ac7c2ae17efbbba35b5eaa2c53057b669eac8de3fdd8fd36ab5bef5d" protocol=ttrpc version=3 Jan 14 01:28:15.335862 systemd[1]: Started cri-containerd-2c3612d6e895cf0cd45f3abfbd3ce2e250a1603406ec9c9b9ff0eae60efba88b.scope - libcontainer container 2c3612d6e895cf0cd45f3abfbd3ce2e250a1603406ec9c9b9ff0eae60efba88b. 
Jan 14 01:28:15.350000 audit: BPF prog-id=285 op=LOAD Jan 14 01:28:15.351000 audit: BPF prog-id=286 op=LOAD Jan 14 01:28:15.355134 kernel: audit: type=1334 audit(1768354095.350:972): prog-id=285 op=LOAD Jan 14 01:28:15.355288 kernel: audit: type=1334 audit(1768354095.351:973): prog-id=286 op=LOAD Jan 14 01:28:15.355327 kernel: audit: type=1300 audit(1768354095.351:973): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=2979 pid=6560 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:28:15.351000 audit[6560]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=2979 pid=6560 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:28:15.365787 kernel: audit: type=1327 audit(1768354095.351:973): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3263333631326436653839356366306364343566336162666264336365 Jan 14 01:28:15.351000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3263333631326436653839356366306364343566336162666264336365 Jan 14 01:28:15.367412 kernel: audit: type=1334 audit(1768354095.351:974): prog-id=286 op=UNLOAD Jan 14 01:28:15.351000 audit: BPF prog-id=286 op=UNLOAD Jan 14 01:28:15.351000 audit[6560]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2979 pid=6560 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:28:15.351000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3263333631326436653839356366306364343566336162666264336365 Jan 14 01:28:15.351000 audit: BPF prog-id=287 op=LOAD Jan 14 01:28:15.351000 audit[6560]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=2979 pid=6560 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:28:15.351000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3263333631326436653839356366306364343566336162666264336365 Jan 14 01:28:15.351000 audit: BPF prog-id=288 op=LOAD Jan 14 01:28:15.351000 audit[6560]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=2979 pid=6560 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:28:15.351000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3263333631326436653839356366306364343566336162666264336365 Jan 14 01:28:15.373577 kernel: audit: type=1300 audit(1768354095.351:974): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2979 pid=6560 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:28:15.351000 audit: BPF prog-id=288 op=UNLOAD Jan 14 01:28:15.351000 audit[6560]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2979 pid=6560 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:28:15.351000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3263333631326436653839356366306364343566336162666264336365 Jan 14 01:28:15.351000 audit: BPF prog-id=287 op=UNLOAD Jan 14 01:28:15.351000 audit[6560]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2979 pid=6560 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:28:15.351000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3263333631326436653839356366306364343566336162666264336365 Jan 14 01:28:15.351000 audit: BPF prog-id=289 op=LOAD Jan 14 01:28:15.351000 audit[6560]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=2979 pid=6560 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:28:15.351000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3263333631326436653839356366306364343566336162666264336365 Jan 14 01:28:15.409957 containerd[1962]: time="2026-01-14T01:28:15.409884409Z" level=info msg="StartContainer for \"2c3612d6e895cf0cd45f3abfbd3ce2e250a1603406ec9c9b9ff0eae60efba88b\" returns successfully" Jan 14 01:28:18.130394 containerd[1962]: time="2026-01-14T01:28:18.130352764Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 14 01:28:18.446891 containerd[1962]: time="2026-01-14T01:28:18.446624166Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:28:18.449047 containerd[1962]: time="2026-01-14T01:28:18.448823424Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 14 01:28:18.449189 containerd[1962]: 
time="2026-01-14T01:28:18.448936718Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 14 01:28:18.449249 kubelet[3571]: E0114 01:28:18.449210 3571 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 01:28:18.449578 kubelet[3571]: E0114 01:28:18.449255 3571 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 01:28:18.449578 kubelet[3571]: E0114 01:28:18.449378 3571 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qs6c7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-zrh2z_calico-system(74b84cdc-323d-4b42-b95a-ceec7dfaa40f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 14 01:28:18.451332 containerd[1962]: time="2026-01-14T01:28:18.451300392Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 14 01:28:18.855633 containerd[1962]: time="2026-01-14T01:28:18.855544043Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:28:18.857881 containerd[1962]: time="2026-01-14T01:28:18.857819107Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" 
error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 14 01:28:18.858073 containerd[1962]: time="2026-01-14T01:28:18.857916565Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 14 01:28:18.858113 kubelet[3571]: E0114 01:28:18.858059 3571 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 01:28:18.858113 kubelet[3571]: E0114 01:28:18.858103 3571 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 01:28:18.858297 kubelet[3571]: E0114 01:28:18.858219 3571 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qs6c7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-zrh2z_calico-system(74b84cdc-323d-4b42-b95a-ceec7dfaa40f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 14 
01:28:18.859496 kubelet[3571]: E0114 01:28:18.859418 3571 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-zrh2z" podUID="74b84cdc-323d-4b42-b95a-ceec7dfaa40f" Jan 14 01:28:21.130154 kubelet[3571]: E0114 01:28:21.130087 3571 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-574d5c8798-6q7jw" podUID="4263d4be-fc9d-471e-8df9-42f06716a4f0" Jan 14 01:28:21.130705 containerd[1962]: time="2026-01-14T01:28:21.130255491Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 14 01:28:21.273307 systemd[1]: cri-containerd-b51d7f6479ba66fefd05198e9cc3bc04510c20f4296ce4dde76cddae4d9a211d.scope: Deactivated successfully. Jan 14 01:28:21.273702 containerd[1962]: time="2026-01-14T01:28:21.273338103Z" level=info msg="received container exit event container_id:\"b51d7f6479ba66fefd05198e9cc3bc04510c20f4296ce4dde76cddae4d9a211d\" id:\"b51d7f6479ba66fefd05198e9cc3bc04510c20f4296ce4dde76cddae4d9a211d\" pid:6483 exit_status:1 exited_at:{seconds:1768354101 nanos:272757363}" Jan 14 01:28:21.274113 systemd[1]: cri-containerd-b51d7f6479ba66fefd05198e9cc3bc04510c20f4296ce4dde76cddae4d9a211d.scope: Consumed 339ms CPU time, 67.8M memory peak, 33.5M read from disk. Jan 14 01:28:21.279270 kernel: kauditd_printk_skb: 16 callbacks suppressed Jan 14 01:28:21.279401 kernel: audit: type=1334 audit(1768354101.275:980): prog-id=274 op=UNLOAD Jan 14 01:28:21.275000 audit: BPF prog-id=274 op=UNLOAD Jan 14 01:28:21.275000 audit: BPF prog-id=278 op=UNLOAD Jan 14 01:28:21.281580 kernel: audit: type=1334 audit(1768354101.275:981): prog-id=278 op=UNLOAD Jan 14 01:28:21.307254 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b51d7f6479ba66fefd05198e9cc3bc04510c20f4296ce4dde76cddae4d9a211d-rootfs.mount: Deactivated successfully. 
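
Editor's note: every pull in this stretch fails the same way: containerd gets a 404 from ghcr.io, kubelet reports ErrImagePull and then ImagePullBackOff for the ghcr.io/flatcar/calico/*:v3.30.4 images. One way to confirm from another machine that the tag itself is absent (rather than a node-local auth or DNS problem) is to query the registry's manifest endpoint. This is only a sketch against the standard OCI distribution API; the anonymous-token flow for ghcr.io is an assumption, not something shown in this log.

# Rough sketch: check whether the image tags that failed above exist on ghcr.io.
# Assumes the repositories allow anonymous pulls via ghcr.io's token endpoint.
import json
import urllib.error
import urllib.request

def tag_exists(repo: str, tag: str) -> bool:
    token_url = f"https://ghcr.io/token?scope=repository:{repo}:pull"
    with urllib.request.urlopen(token_url) as resp:
        token = json.load(resp)["token"]
    req = urllib.request.Request(
        f"https://ghcr.io/v2/{repo}/manifests/{tag}",
        headers={
            "Authorization": f"Bearer {token}",
            "Accept": "application/vnd.oci.image.index.v1+json, "
                      "application/vnd.docker.distribution.manifest.list.v2+json",
        },
        method="HEAD",
    )
    try:
        with urllib.request.urlopen(req) as resp:
            return resp.status == 200
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return False
        raise

for image in ("flatcar/calico/goldmane", "flatcar/calico/apiserver",
              "flatcar/calico/whisker", "flatcar/calico/kube-controllers"):
    print(image, tag_exists(image, "v3.30.4"))
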
Jan 14 01:28:21.394822 containerd[1962]: time="2026-01-14T01:28:21.394533671Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:28:21.396750 containerd[1962]: time="2026-01-14T01:28:21.396631189Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 14 01:28:21.396750 containerd[1962]: time="2026-01-14T01:28:21.396671254Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 14 01:28:21.397124 kubelet[3571]: E0114 01:28:21.397062 3571 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 01:28:21.397124 kubelet[3571]: E0114 01:28:21.397117 3571 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 01:28:21.397290 kubelet[3571]: E0114 01:28:21.397243 3571 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bnbz2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-5799fb7c8b-6l2xc_calico-system(89c07bdd-9dad-4c41-8dfe-3de894f6f743): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 14 01:28:21.398472 kubelet[3571]: E0114 01:28:21.398431 3571 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5799fb7c8b-6l2xc" podUID="89c07bdd-9dad-4c41-8dfe-3de894f6f743" Jan 14 01:28:21.775022 kubelet[3571]: E0114 01:28:21.774896 3571 request.go:1360] "Unexpected error when reading response body" err="net/http: request canceled (Client.Timeout or context cancellation while reading body)" Jan 14 01:28:21.788499 kubelet[3571]: E0114 01:28:21.779858 3571 controller.go:195] "Failed to update lease" err="unexpected error when reading response body. Please retry. 
Original error: net/http: request canceled (Client.Timeout or context cancellation while reading body)" Jan 14 01:28:22.130080 kubelet[3571]: E0114 01:28:22.129971 3571 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-574d5c8798-9hhk4" podUID="525cee0b-846b-43ba-9b3e-e192ccc373a0" Jan 14 01:28:22.317954 kubelet[3571]: I0114 01:28:22.317893 3571 scope.go:117] "RemoveContainer" containerID="a3d024f4a41a21e584aca1946ba177f36e12ea53a2ea9eb2cfa56704ad6c8ed0" Jan 14 01:28:22.318372 kubelet[3571]: I0114 01:28:22.318060 3571 scope.go:117] "RemoveContainer" containerID="b51d7f6479ba66fefd05198e9cc3bc04510c20f4296ce4dde76cddae4d9a211d" Jan 14 01:28:22.323412 kubelet[3571]: E0114 01:28:22.323356 3571 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=tigera-operator pod=tigera-operator-7dcd859c48-27qbd_tigera-operator(f92c0dc2-1a91-471c-9cce-159e02ced74e)\"" pod="tigera-operator/tigera-operator-7dcd859c48-27qbd" podUID="f92c0dc2-1a91-471c-9cce-159e02ced74e" Jan 14 01:28:22.345703 containerd[1962]: time="2026-01-14T01:28:22.345634726Z" level=info msg="RemoveContainer for \"a3d024f4a41a21e584aca1946ba177f36e12ea53a2ea9eb2cfa56704ad6c8ed0\"" Jan 14 01:28:22.364787 containerd[1962]: time="2026-01-14T01:28:22.364733923Z" level=info msg="RemoveContainer for \"a3d024f4a41a21e584aca1946ba177f36e12ea53a2ea9eb2cfa56704ad6c8ed0\" returns successfully" Jan 14 01:28:24.132209 kubelet[3571]: E0114 01:28:24.132145 3571 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-5mdck" podUID="993f578d-b707-42bd-b6e9-14c5aa23a03f" Jan 14 01:28:24.132694 kubelet[3571]: E0114 01:28:24.132226 3571 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65b9745b8-fxksz" podUID="56394477-d28d-42eb-bee5-a9a20263c11f"
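
Editor's note: for triaging a capture like this one, the distinct failing image references can be extracted directly from the journal text. The helper below is a sketch, not part of the log; it assumes the capture has been saved to a file passed on the command line, and its regex matches the escaped-quote form used in the containerd lines above (msg="PullImage \"<image>\" failed").

# Count how often each image reference appears in containerd "PullImage ... failed" lines.
import re
import sys
from collections import Counter

PATTERN = re.compile(r'PullImage \\"([^"\\]+)\\" failed')

def failed_pulls(path: str) -> Counter:
    with open(path, encoding="utf-8", errors="replace") as log:
        return Counter(m.group(1) for line in log for m in PATTERN.finditer(line))

if __name__ == "__main__":
    for image, count in failed_pulls(sys.argv[1]).most_common():
        print(f"{count:4d}  {image}")
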