Jan 21 01:00:20.581191 kernel: Linux version 6.12.66-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Tue Jan 20 22:19:08 -00 2026
Jan 21 01:00:20.581219 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=febd26d0ecadb4f9abb44f6b2a89e793f13258cbb011a4bfe78289e5448c772a
Jan 21 01:00:20.581232 kernel: BIOS-provided physical RAM map:
Jan 21 01:00:20.581239 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Jan 21 01:00:20.581246 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000786cdfff] usable
Jan 21 01:00:20.581253 kernel: BIOS-e820: [mem 0x00000000786ce000-0x000000007894dfff] reserved
Jan 21 01:00:20.581262 kernel: BIOS-e820: [mem 0x000000007894e000-0x000000007895dfff] ACPI data
Jan 21 01:00:20.581270 kernel: BIOS-e820: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS
Jan 21 01:00:20.581277 kernel: BIOS-e820: [mem 0x00000000789de000-0x000000007c97bfff] usable
Jan 21 01:00:20.581285 kernel: BIOS-e820: [mem 0x000000007c97c000-0x000000007c9fffff] reserved
Jan 21 01:00:20.581295 kernel: NX (Execute Disable) protection: active
Jan 21 01:00:20.581302 kernel: APIC: Static calls initialized
Jan 21 01:00:20.581310 kernel: e820: update [mem 0x768c0018-0x768c8e57] usable ==> usable
Jan 21 01:00:20.581318 kernel: extended physical RAM map:
Jan 21 01:00:20.581327 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Jan 21 01:00:20.581337 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000768c0017] usable
Jan 21 01:00:20.581346 kernel: reserve setup_data: [mem 0x00000000768c0018-0x00000000768c8e57] usable
Jan 21 01:00:20.581354 kernel: reserve setup_data: [mem 0x00000000768c8e58-0x00000000786cdfff] usable
Jan 21 01:00:20.581363 kernel: reserve setup_data: [mem 0x00000000786ce000-0x000000007894dfff] reserved
Jan 21 01:00:20.581371 kernel: reserve setup_data: [mem 0x000000007894e000-0x000000007895dfff] ACPI data
Jan 21 01:00:20.581379 kernel: reserve setup_data: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS
Jan 21 01:00:20.581388 kernel: reserve setup_data: [mem 0x00000000789de000-0x000000007c97bfff] usable
Jan 21 01:00:20.581396 kernel: reserve setup_data: [mem 0x000000007c97c000-0x000000007c9fffff] reserved
Jan 21 01:00:20.581404 kernel: efi: EFI v2.7 by EDK II
Jan 21 01:00:20.581413 kernel: efi: SMBIOS=0x7886a000 ACPI=0x7895d000 ACPI 2.0=0x7895d014 MEMATTR=0x77015518
Jan 21 01:00:20.581423 kernel: secureboot: Secure boot disabled
Jan 21 01:00:20.581431 kernel: SMBIOS 2.7 present.
Jan 21 01:00:20.581440 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017
Jan 21 01:00:20.581448 kernel: DMI: Memory slots populated: 1/1
Jan 21 01:00:20.581456 kernel: Hypervisor detected: KVM
Jan 21 01:00:20.581464 kernel: last_pfn = 0x7c97c max_arch_pfn = 0x400000000
Jan 21 01:00:20.581473 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 21 01:00:20.581481 kernel: kvm-clock: using sched offset of 6531981121 cycles
Jan 21 01:00:20.581491 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 21 01:00:20.581499 kernel: tsc: Detected 2499.996 MHz processor
Jan 21 01:00:20.581510 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 21 01:00:20.581519 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 21 01:00:20.581528 kernel: last_pfn = 0x7c97c max_arch_pfn = 0x400000000
Jan 21 01:00:20.581537 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Jan 21 01:00:20.581546 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 21 01:00:20.581558 kernel: Using GB pages for direct mapping
Jan 21 01:00:20.581576 kernel: ACPI: Early table checksum verification disabled
Jan 21 01:00:20.581589 kernel: ACPI: RSDP 0x000000007895D014 000024 (v02 AMAZON)
Jan 21 01:00:20.581603 kernel: ACPI: XSDT 0x000000007895C0E8 00006C (v01 AMAZON AMZNFACP 00000001 01000013)
Jan 21 01:00:20.581616 kernel: ACPI: FACP 0x0000000078955000 000114 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001)
Jan 21 01:00:20.581629 kernel: ACPI: DSDT 0x0000000078956000 00115A (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Jan 21 01:00:20.581643 kernel: ACPI: FACS 0x00000000789D0000 000040
Jan 21 01:00:20.581656 kernel: ACPI: WAET 0x000000007895B000 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001)
Jan 21 01:00:20.581665 kernel: ACPI: SLIT 0x000000007895A000 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Jan 21 01:00:20.581674 kernel: ACPI: APIC 0x0000000078959000 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Jan 21 01:00:20.581683 kernel: ACPI: SRAT 0x0000000078958000 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001)
Jan 21 01:00:20.581692 kernel: ACPI: HPET 0x0000000078954000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001)
Jan 21 01:00:20.581701 kernel: ACPI: SSDT 0x0000000078953000 000759 (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Jan 21 01:00:20.581710 kernel: ACPI: SSDT 0x0000000078952000 00007F (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Jan 21 01:00:20.581722 kernel: ACPI: BGRT 0x0000000078951000 000038 (v01 AMAZON AMAZON 00000002 01000013)
Jan 21 01:00:20.581731 kernel: ACPI: Reserving FACP table memory at [mem 0x78955000-0x78955113]
Jan 21 01:00:20.581740 kernel: ACPI: Reserving DSDT table memory at [mem 0x78956000-0x78957159]
Jan 21 01:00:20.581749 kernel: ACPI: Reserving FACS table memory at [mem 0x789d0000-0x789d003f]
Jan 21 01:00:20.581758 kernel: ACPI: Reserving WAET table memory at [mem 0x7895b000-0x7895b027]
Jan 21 01:00:20.581767 kernel: ACPI: Reserving SLIT table memory at [mem 0x7895a000-0x7895a06b]
Jan 21 01:00:20.581776 kernel: ACPI: Reserving APIC table memory at [mem 0x78959000-0x78959075]
Jan 21 01:00:20.581787 kernel: ACPI: Reserving SRAT table memory at [mem 0x78958000-0x7895809f]
Jan 21 01:00:20.581796 kernel: ACPI: Reserving HPET table memory at [mem 0x78954000-0x78954037]
Jan 21 01:00:20.581806 kernel: ACPI: Reserving SSDT table memory at [mem 0x78953000-0x78953758]
Jan 21 01:00:20.581815 kernel: ACPI: Reserving SSDT table memory at [mem 0x78952000-0x7895207e]
Jan 21 01:00:20.581824 kernel: ACPI: Reserving BGRT table memory at [mem 0x78951000-0x78951037]
Jan 21 01:00:20.581832 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff]
Jan 21 01:00:20.581842 kernel: NUMA: Initialized distance table, cnt=1
Jan 21 01:00:20.581853 kernel: NODE_DATA(0) allocated [mem 0x7a8eedc0-0x7a8f5fff]
Jan 21 01:00:20.581862 kernel: Zone ranges:
Jan 21 01:00:20.581871 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 21 01:00:20.581880 kernel: DMA32 [mem 0x0000000001000000-0x000000007c97bfff]
Jan 21 01:00:20.581889 kernel: Normal empty
Jan 21 01:00:20.581897 kernel: Device empty
Jan 21 01:00:20.581906 kernel: Movable zone start for each node
Jan 21 01:00:20.581915 kernel: Early memory node ranges
Jan 21 01:00:20.581926 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Jan 21 01:00:20.581935 kernel: node 0: [mem 0x0000000000100000-0x00000000786cdfff]
Jan 21 01:00:20.581944 kernel: node 0: [mem 0x00000000789de000-0x000000007c97bfff]
Jan 21 01:00:20.581953 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007c97bfff]
Jan 21 01:00:20.581962 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 21 01:00:20.581971 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Jan 21 01:00:20.581980 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Jan 21 01:00:20.581992 kernel: On node 0, zone DMA32: 13956 pages in unavailable ranges
Jan 21 01:00:20.582001 kernel: ACPI: PM-Timer IO Port: 0xb008
Jan 21 01:00:20.582010 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 21 01:00:20.582035 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23
Jan 21 01:00:20.582046 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 21 01:00:20.582055 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 21 01:00:20.582064 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 21 01:00:20.582073 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 21 01:00:20.582085 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 21 01:00:20.582094 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 21 01:00:20.582103 kernel: TSC deadline timer available
Jan 21 01:00:20.582112 kernel: CPU topo: Max. logical packages: 1
Jan 21 01:00:20.582121 kernel: CPU topo: Max. logical dies: 1
Jan 21 01:00:20.582130 kernel: CPU topo: Max. dies per package: 1
Jan 21 01:00:20.582139 kernel: CPU topo: Max. threads per core: 2
Jan 21 01:00:20.582150 kernel: CPU topo: Num. cores per package: 1
Jan 21 01:00:20.582159 kernel: CPU topo: Num. threads per package: 2
Jan 21 01:00:20.582168 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
Jan 21 01:00:20.582177 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 21 01:00:20.582186 kernel: [mem 0x7ca00000-0xffffffff] available for PCI devices
Jan 21 01:00:20.582195 kernel: Booting paravirtualized kernel on KVM
Jan 21 01:00:20.582204 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 21 01:00:20.582214 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jan 21 01:00:20.582225 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
Jan 21 01:00:20.582234 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
Jan 21 01:00:20.582243 kernel: pcpu-alloc: [0] 0 1
Jan 21 01:00:20.582252 kernel: kvm-guest: PV spinlocks enabled
Jan 21 01:00:20.582261 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 21 01:00:20.582272 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=febd26d0ecadb4f9abb44f6b2a89e793f13258cbb011a4bfe78289e5448c772a
Jan 21 01:00:20.582284 kernel: random: crng init done
Jan 21 01:00:20.582293 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 21 01:00:20.582302 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 21 01:00:20.582311 kernel: Fallback order for Node 0: 0
Jan 21 01:00:20.582320 kernel: Built 1 zonelists, mobility grouping on. Total pages: 509451
Jan 21 01:00:20.582329 kernel: Policy zone: DMA32
Jan 21 01:00:20.582348 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 21 01:00:20.582358 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 21 01:00:20.582367 kernel: Kernel/User page tables isolation: enabled
Jan 21 01:00:20.582379 kernel: ftrace: allocating 40097 entries in 157 pages
Jan 21 01:00:20.582389 kernel: ftrace: allocated 157 pages with 5 groups
Jan 21 01:00:20.582398 kernel: Dynamic Preempt: voluntary
Jan 21 01:00:20.582407 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 21 01:00:20.582418 kernel: rcu: RCU event tracing is enabled.
Jan 21 01:00:20.582427 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 21 01:00:20.582437 kernel: Trampoline variant of Tasks RCU enabled.
Jan 21 01:00:20.582449 kernel: Rude variant of Tasks RCU enabled.
Jan 21 01:00:20.582459 kernel: Tracing variant of Tasks RCU enabled.
Jan 21 01:00:20.582468 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 21 01:00:20.582477 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 21 01:00:20.582487 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 21 01:00:20.582499 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 21 01:00:20.582508 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 21 01:00:20.582518 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Jan 21 01:00:20.582528 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 21 01:00:20.582537 kernel: Console: colour dummy device 80x25
Jan 21 01:00:20.582547 kernel: printk: legacy console [tty0] enabled
Jan 21 01:00:20.582556 kernel: printk: legacy console [ttyS0] enabled
Jan 21 01:00:20.582568 kernel: ACPI: Core revision 20240827
Jan 21 01:00:20.582578 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns
Jan 21 01:00:20.582588 kernel: APIC: Switch to symmetric I/O mode setup
Jan 21 01:00:20.582597 kernel: x2apic enabled
Jan 21 01:00:20.582607 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 21 01:00:20.582617 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns
Jan 21 01:00:20.582626 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499996)
Jan 21 01:00:20.582638 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Jan 21 01:00:20.582648 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
Jan 21 01:00:20.582657 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 21 01:00:20.582667 kernel: Spectre V2 : Mitigation: Retpolines
Jan 21 01:00:20.582676 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jan 21 01:00:20.582685 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Jan 21 01:00:20.582694 kernel: RETBleed: Vulnerable
Jan 21 01:00:20.582704 kernel: Speculative Store Bypass: Vulnerable
Jan 21 01:00:20.582713 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 21 01:00:20.582722 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 21 01:00:20.582734 kernel: GDS: Unknown: Dependent on hypervisor status
Jan 21 01:00:20.582743 kernel: active return thunk: its_return_thunk
Jan 21 01:00:20.582752 kernel: ITS: Mitigation: Aligned branch/return thunks
Jan 21 01:00:20.582761 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 21 01:00:20.582771 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 21 01:00:20.582780 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 21 01:00:20.582790 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Jan 21 01:00:20.582799 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Jan 21 01:00:20.582808 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Jan 21 01:00:20.582817 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Jan 21 01:00:20.582829 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Jan 21 01:00:20.582838 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Jan 21 01:00:20.582848 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 21 01:00:20.582857 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Jan 21 01:00:20.582866 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Jan 21 01:00:20.582876 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64
Jan 21 01:00:20.582885 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512
Jan 21 01:00:20.582894 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024
Jan 21 01:00:20.582903 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8
Jan 21 01:00:20.582913 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format.
Jan 21 01:00:20.582922 kernel: Freeing SMP alternatives memory: 32K
Jan 21 01:00:20.582934 kernel: pid_max: default: 32768 minimum: 301
Jan 21 01:00:20.582943 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jan 21 01:00:20.582952 kernel: landlock: Up and running.
Jan 21 01:00:20.582961 kernel: SELinux: Initializing.
Jan 21 01:00:20.582971 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 21 01:00:20.582980 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 21 01:00:20.582990 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7)
Jan 21 01:00:20.582999 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Jan 21 01:00:20.583009 kernel: signal: max sigframe size: 3632
Jan 21 01:00:20.583036 kernel: rcu: Hierarchical SRCU implementation.
Jan 21 01:00:20.583049 kernel: rcu: Max phase no-delay instances is 400.
Jan 21 01:00:20.583059 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jan 21 01:00:20.583069 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 21 01:00:20.583079 kernel: smp: Bringing up secondary CPUs ...
Jan 21 01:00:20.583088 kernel: smpboot: x86: Booting SMP configuration:
Jan 21 01:00:20.583098 kernel: .... node #0, CPUs: #1
Jan 21 01:00:20.583108 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Jan 21 01:00:20.583120 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Jan 21 01:00:20.583130 kernel: smp: Brought up 1 node, 2 CPUs
Jan 21 01:00:20.583140 kernel: smpboot: Total of 2 processors activated (9999.98 BogoMIPS)
Jan 21 01:00:20.583150 kernel: Memory: 1924432K/2037804K available (14336K kernel code, 2445K rwdata, 31636K rodata, 15532K init, 2508K bss, 108808K reserved, 0K cma-reserved)
Jan 21 01:00:20.583159 kernel: devtmpfs: initialized
Jan 21 01:00:20.583169 kernel: x86/mm: Memory block size: 128MB
Jan 21 01:00:20.583181 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7895e000-0x789ddfff] (524288 bytes)
Jan 21 01:00:20.583191 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 21 01:00:20.583201 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 21 01:00:20.583210 kernel: pinctrl core: initialized pinctrl subsystem
Jan 21 01:00:20.583220 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 21 01:00:20.583230 kernel: audit: initializing netlink subsys (disabled)
Jan 21 01:00:20.583239 kernel: audit: type=2000 audit(1768957217.033:1): state=initialized audit_enabled=0 res=1
Jan 21 01:00:20.583251 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 21 01:00:20.583261 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 21 01:00:20.583270 kernel: cpuidle: using governor menu
Jan 21 01:00:20.583280 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 21 01:00:20.583290 kernel: dca service started, version 1.12.1
Jan 21 01:00:20.583299 kernel: PCI: Using configuration type 1 for base access
Jan 21 01:00:20.583309 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 21 01:00:20.583321 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 21 01:00:20.583331 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 21 01:00:20.583340 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 21 01:00:20.583350 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 21 01:00:20.583360 kernel: ACPI: Added _OSI(Module Device)
Jan 21 01:00:20.583370 kernel: ACPI: Added _OSI(Processor Device)
Jan 21 01:00:20.583379 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 21 01:00:20.583389 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Jan 21 01:00:20.583401 kernel: ACPI: Interpreter enabled
Jan 21 01:00:20.583410 kernel: ACPI: PM: (supports S0 S5)
Jan 21 01:00:20.583420 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 21 01:00:20.583430 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 21 01:00:20.583439 kernel: PCI: Using E820 reservations for host bridge windows
Jan 21 01:00:20.583449 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Jan 21 01:00:20.583459 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 21 01:00:20.583669 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Jan 21 01:00:20.583806 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Jan 21 01:00:20.583937 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Jan 21 01:00:20.583950 kernel: acpiphp: Slot [3] registered
Jan 21 01:00:20.583959 kernel: acpiphp: Slot [4] registered
Jan 21 01:00:20.583972 kernel: acpiphp: Slot [5] registered
Jan 21 01:00:20.583982 kernel: acpiphp: Slot [6] registered
Jan 21 01:00:20.583991 kernel: acpiphp: Slot [7] registered
Jan 21 01:00:20.584001 kernel: acpiphp: Slot [8] registered
Jan 21 01:00:20.584010 kernel: acpiphp: Slot [9] registered
Jan 21 01:00:20.584036 kernel: acpiphp: Slot [10] registered
Jan 21 01:00:20.584047 kernel: acpiphp: Slot [11] registered
Jan 21 01:00:20.584057 kernel: acpiphp: Slot [12] registered
Jan 21 01:00:20.584070 kernel: acpiphp: Slot [13] registered
Jan 21 01:00:20.584079 kernel: acpiphp: Slot [14] registered
Jan 21 01:00:20.584089 kernel: acpiphp: Slot [15] registered
Jan 21 01:00:20.584098 kernel: acpiphp: Slot [16] registered
Jan 21 01:00:20.584108 kernel: acpiphp: Slot [17] registered
Jan 21 01:00:20.584117 kernel: acpiphp: Slot [18] registered
Jan 21 01:00:20.584127 kernel: acpiphp: Slot [19] registered
Jan 21 01:00:20.584139 kernel: acpiphp: Slot [20] registered
Jan 21 01:00:20.584149 kernel: acpiphp: Slot [21] registered
Jan 21 01:00:20.584158 kernel: acpiphp: Slot [22] registered
Jan 21 01:00:20.584168 kernel: acpiphp: Slot [23] registered
Jan 21 01:00:20.584178 kernel: acpiphp: Slot [24] registered
Jan 21 01:00:20.584187 kernel: acpiphp: Slot [25] registered
Jan 21 01:00:20.584197 kernel: acpiphp: Slot [26] registered
Jan 21 01:00:20.584206 kernel: acpiphp: Slot [27] registered
Jan 21 01:00:20.584218 kernel: acpiphp: Slot [28] registered
Jan 21 01:00:20.584228 kernel: acpiphp: Slot [29] registered
Jan 21 01:00:20.584237 kernel: acpiphp: Slot [30] registered
Jan 21 01:00:20.584247 kernel: acpiphp: Slot [31] registered
Jan 21 01:00:20.584256 kernel: PCI host bridge to bus 0000:00
Jan 21 01:00:20.584401 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 21 01:00:20.584524 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 21 01:00:20.584641 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 21 01:00:20.584757 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Jan 21 01:00:20.584878 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x2000ffffffff window]
Jan 21 01:00:20.584997 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 21 01:00:20.585193 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Jan 21 01:00:20.585343 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Jan 21 01:00:20.585479 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000 conventional PCI endpoint
Jan 21 01:00:20.585637 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Jan 21 01:00:20.585774 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff
Jan 21 01:00:20.585901 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff
Jan 21 01:00:20.586064 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff
Jan 21 01:00:20.586197 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff
Jan 21 01:00:20.586325 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff
Jan 21 01:00:20.586453 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff
Jan 21 01:00:20.586586 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000 conventional PCI endpoint
Jan 21 01:00:20.586713 kernel: pci 0000:00:03.0: BAR 0 [mem 0x80000000-0x803fffff pref]
Jan 21 01:00:20.586847 kernel: pci 0000:00:03.0: ROM [mem 0xffff0000-0xffffffff pref]
Jan 21 01:00:20.586975 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 21 01:00:20.587648 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 PCIe Endpoint
Jan 21 01:00:20.587898 kernel: pci 0000:00:04.0: BAR 0 [mem 0x80404000-0x80407fff]
Jan 21 01:00:20.588526 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 PCIe Endpoint
Jan 21 01:00:20.588672 kernel: pci 0000:00:05.0: BAR 0 [mem 0x80400000-0x80403fff]
Jan 21 01:00:20.588687 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 21 01:00:20.588698 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 21 01:00:20.588708 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 21 01:00:20.588718 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 21 01:00:20.588728 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jan 21 01:00:20.588738 kernel: iommu: Default domain type: Translated
Jan 21 01:00:20.588751 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 21 01:00:20.588761 kernel: efivars: Registered efivars operations
Jan 21 01:00:20.588771 kernel: PCI: Using ACPI for IRQ routing
Jan 21 01:00:20.588781 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 21 01:00:20.588791 kernel: e820: reserve RAM buffer [mem 0x768c0018-0x77ffffff]
Jan 21 01:00:20.588800 kernel: e820: reserve RAM buffer [mem 0x786ce000-0x7bffffff]
Jan 21 01:00:20.588809 kernel: e820: reserve RAM buffer [mem 0x7c97c000-0x7fffffff]
Jan 21 01:00:20.588940 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device
Jan 21 01:00:20.589130 kernel: pci 0000:00:03.0: vgaarb: bridge control possible
Jan 21 01:00:20.589317 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 21 01:00:20.589332 kernel: vgaarb: loaded
Jan 21 01:00:20.589343 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0
Jan 21 01:00:20.589354 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter
Jan 21 01:00:20.589363 kernel: clocksource: Switched to clocksource kvm-clock
Jan 21 01:00:20.589374 kernel: VFS: Disk quotas dquot_6.6.0
Jan 21 01:00:20.589388 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 21 01:00:20.589398 kernel: pnp: PnP ACPI init
Jan 21 01:00:20.589407 kernel: pnp: PnP ACPI: found 5 devices
Jan 21 01:00:20.589417 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 21 01:00:20.589428 kernel: NET: Registered PF_INET protocol family
Jan 21 01:00:20.589438 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 21 01:00:20.589448 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Jan 21 01:00:20.589460 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 21 01:00:20.589470 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 21 01:00:20.589480 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Jan 21 01:00:20.589490 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Jan 21 01:00:20.589500 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 21 01:00:20.589510 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 21 01:00:20.589520 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 21 01:00:20.589532 kernel: NET: Registered PF_XDP protocol family
Jan 21 01:00:20.589687 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 21 01:00:20.589806 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 21 01:00:20.589922 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 21 01:00:20.590060 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Jan 21 01:00:20.590177 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x2000ffffffff window]
Jan 21 01:00:20.590310 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jan 21 01:00:20.590326 kernel: PCI: CLS 0 bytes, default 64
Jan 21 01:00:20.590337 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Jan 21 01:00:20.590347 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns
Jan 21 01:00:20.590357 kernel: clocksource: Switched to clocksource tsc
Jan 21 01:00:20.590367 kernel: Initialise system trusted keyrings
Jan 21 01:00:20.590378 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Jan 21 01:00:20.590388 kernel: Key type asymmetric registered
Jan 21 01:00:20.590400 kernel: Asymmetric key parser 'x509' registered
Jan 21 01:00:20.590410 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 21 01:00:20.590420 kernel: io scheduler mq-deadline registered
Jan 21 01:00:20.590430 kernel: io scheduler kyber registered
Jan 21 01:00:20.590440 kernel: io scheduler bfq registered
Jan 21 01:00:20.590450 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 21 01:00:20.590460 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 21 01:00:20.590472 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 21 01:00:20.590482 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 21 01:00:20.590493 kernel: i8042: Warning: Keylock active
Jan 21 01:00:20.590503 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 21 01:00:20.590512 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 21 01:00:20.590649 kernel: rtc_cmos 00:00: RTC can wake from S4
Jan 21 01:00:20.590774 kernel: rtc_cmos 00:00: registered as rtc0
Jan 21 01:00:20.590893 kernel: rtc_cmos 00:00: setting system clock to 2026-01-21T01:00:17 UTC (1768957217)
Jan 21 01:00:20.591012 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Jan 21 01:00:20.591058 kernel: intel_pstate: CPU model not supported
Jan 21 01:00:20.591071 kernel: efifb: probing for efifb
Jan 21 01:00:20.591081 kernel: efifb: framebuffer at 0x80000000, using 1876k, total 1875k
Jan 21 01:00:20.591091 kernel: efifb: mode is 800x600x32, linelength=3200, pages=1
Jan 21 01:00:20.591104 kernel: efifb: scrolling: redraw
Jan 21 01:00:20.591115 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jan 21 01:00:20.591125 kernel: Console: switching to colour frame buffer device 100x37
Jan 21 01:00:20.591135 kernel: fb0: EFI VGA frame buffer device
Jan 21 01:00:20.591146 kernel: pstore: Using crash dump compression: deflate
Jan 21 01:00:20.591156 kernel: pstore: Registered efi_pstore as persistent store backend
Jan 21 01:00:20.591167 kernel: NET: Registered PF_INET6 protocol family
Jan 21 01:00:20.591179 kernel: Segment Routing with IPv6
Jan 21 01:00:20.591189 kernel: In-situ OAM (IOAM) with IPv6
Jan 21 01:00:20.591199 kernel: NET: Registered PF_PACKET protocol family
Jan 21 01:00:20.591210 kernel: Key type dns_resolver registered
Jan 21 01:00:20.591220 kernel: IPI shorthand broadcast: enabled
Jan 21 01:00:20.591230 kernel: sched_clock: Marking stable (1355002381, 143145377)->(1585834489, -87686731)
Jan 21 01:00:20.591241 kernel: registered taskstats version 1
Jan 21 01:00:20.591251 kernel: Loading compiled-in X.509 certificates
Jan 21 01:00:20.591264 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.66-flatcar: 169e95345ec0c7da7389f5f6d7b9c06dfd352178'
Jan 21 01:00:20.591274 kernel: Demotion targets for Node 0: null
Jan 21 01:00:20.591284 kernel: Key type .fscrypt registered
Jan 21 01:00:20.591294 kernel: Key type fscrypt-provisioning registered
Jan 21 01:00:20.591304 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 21 01:00:20.591314 kernel: ima: Allocated hash algorithm: sha1
Jan 21 01:00:20.591324 kernel: ima: No architecture policies found
Jan 21 01:00:20.591337 kernel: clk: Disabling unused clocks
Jan 21 01:00:20.591347 kernel: Freeing unused kernel image (initmem) memory: 15532K
Jan 21 01:00:20.591357 kernel: Write protecting the kernel read-only data: 47104k
Jan 21 01:00:20.591370 kernel: Freeing unused kernel image (rodata/data gap) memory: 1132K
Jan 21 01:00:20.591383 kernel: Run /init as init process
Jan 21 01:00:20.591393 kernel: with arguments:
Jan 21 01:00:20.591403 kernel: /init
Jan 21 01:00:20.591413 kernel: with environment:
Jan 21 01:00:20.591424 kernel: HOME=/
Jan 21 01:00:20.591434 kernel: TERM=linux
Jan 21 01:00:20.591545 kernel: nvme nvme0: pci function 0000:00:04.0
Jan 21 01:00:20.591565 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Jan 21 01:00:20.591654 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Jan 21 01:00:20.591668 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 21 01:00:20.591678 kernel: GPT:25804799 != 33554431
Jan 21 01:00:20.591688 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 21 01:00:20.591698 kernel: GPT:25804799 != 33554431
Jan 21 01:00:20.591711 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 21 01:00:20.591721 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 21 01:00:20.591732 kernel: SCSI subsystem initialized
Jan 21 01:00:20.591743 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 21 01:00:20.591753 kernel: device-mapper: uevent: version 1.0.3
Jan 21 01:00:20.591763 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Jan 21 01:00:20.591774 kernel: device-mapper: verity: sha256 using shash "sha256-generic"
Jan 21 01:00:20.591786 kernel: raid6: avx512x4 gen() 17920 MB/s
Jan 21 01:00:20.591797 kernel: raid6: avx512x2 gen() 18014 MB/s
Jan 21 01:00:20.591807 kernel: raid6: avx512x1 gen() 17816 MB/s
Jan 21 01:00:20.591817 kernel: raid6: avx2x4 gen() 17876 MB/s
Jan 21 01:00:20.591828 kernel: raid6: avx2x2 gen() 17853 MB/s
Jan 21 01:00:20.591838 kernel: raid6: avx2x1 gen() 13571 MB/s
Jan 21 01:00:20.591849 kernel: raid6: using algorithm avx512x2 gen() 18014 MB/s
Jan 21 01:00:20.591861 kernel: raid6: .... xor() 24216 MB/s, rmw enabled
Jan 21 01:00:20.591871 kernel: raid6: using avx512x2 recovery algorithm
Jan 21 01:00:20.591884 kernel: xor: automatically using best checksumming function avx
Jan 21 01:00:20.591895 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Jan 21 01:00:20.591905 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 21 01:00:20.591915 kernel: BTRFS: device fsid 1d50d7f2-b244-4434-b37e-796fa0c23345 devid 1 transid 39 /dev/mapper/usr (254:0) scanned by mount (152)
Jan 21 01:00:20.591926 kernel: BTRFS info (device dm-0): first mount of filesystem 1d50d7f2-b244-4434-b37e-796fa0c23345
Jan 21 01:00:20.591938 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 21 01:00:20.591949 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Jan 21 01:00:20.591959 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 21 01:00:20.591970 kernel: BTRFS info (device dm-0): enabling free space tree
Jan 21 01:00:20.591980 kernel: loop: module loaded
Jan 21 01:00:20.591990 kernel: loop0: detected capacity change from 0 to 100552
Jan 21 01:00:20.592000 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 21 01:00:20.592015 systemd[1]: Successfully made /usr/ read-only.
Jan 21 01:00:20.592044 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jan 21 01:00:20.592056 systemd[1]: Detected virtualization amazon.
Jan 21 01:00:20.592066 systemd[1]: Detected architecture x86-64.
Jan 21 01:00:20.592076 systemd[1]: Running in initrd.
Jan 21 01:00:20.592087 systemd[1]: No hostname configured, using default hostname.
Jan 21 01:00:20.592101 systemd[1]: Hostname set to .
Jan 21 01:00:20.592112 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID.
Jan 21 01:00:20.592122 systemd[1]: Queued start job for default target initrd.target.
Jan 21 01:00:20.592133 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Jan 21 01:00:20.592143 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 21 01:00:20.592154 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 21 01:00:20.592168 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 21 01:00:20.592179 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 21 01:00:20.592190 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 21 01:00:20.592202 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 21 01:00:20.592213 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 21 01:00:20.592223 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 21 01:00:20.592237 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Jan 21 01:00:20.592247 systemd[1]: Reached target paths.target - Path Units.
Jan 21 01:00:20.592258 systemd[1]: Reached target slices.target - Slice Units.
Jan 21 01:00:20.592269 systemd[1]: Reached target swap.target - Swaps.
Jan 21 01:00:20.592280 systemd[1]: Reached target timers.target - Timer Units.
Jan 21 01:00:20.592291 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 21 01:00:20.592301 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 21 01:00:20.592314 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket.
Jan 21 01:00:20.592325 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 21 01:00:20.592336 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jan 21 01:00:20.592346 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 21 01:00:20.592357 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 21 01:00:20.592368 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 21 01:00:20.592378 systemd[1]: Reached target sockets.target - Socket Units.
Jan 21 01:00:20.592392 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 21 01:00:20.592403 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 21 01:00:20.592413 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 21 01:00:20.592437 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 21 01:00:20.592448 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Jan 21 01:00:20.592460 systemd[1]: Starting systemd-fsck-usr.service...
Jan 21 01:00:20.592473 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 21 01:00:20.592484 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 21 01:00:20.592495 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 21 01:00:20.592506 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 21 01:00:20.592520 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 21 01:00:20.592531 systemd[1]: Finished systemd-fsck-usr.service.
Jan 21 01:00:20.592542 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 21 01:00:20.592575 systemd-journald[290]: Collecting audit messages is enabled.
Jan 21 01:00:20.592603 systemd-journald[290]: Journal started
Jan 21 01:00:20.592627 systemd-journald[290]: Runtime Journal (/run/log/journal/ec2f59aa1f5ff35f9c97c1e13e9e56e0) is 4.7M, max 38M, 33.2M free.
Jan 21 01:00:20.595054 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 21 01:00:20.594000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 21 01:00:20.601049 kernel: audit: type=1130 audit(1768957220.594:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 21 01:00:20.601238 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 21 01:00:20.678336 systemd-tmpfiles[302]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Jan 21 01:00:20.687855 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 21 01:00:20.681194 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 21 01:00:20.695155 kernel: Bridge firewalling registered
Jan 21 01:00:20.695189 kernel: audit: type=1130 audit(1768957220.692:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 21 01:00:20.692000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 21 01:00:20.690688 systemd-modules-load[291]: Inserted module 'br_netfilter'
Jan 21 01:00:20.694446 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 21 01:00:20.703784 kernel: audit: type=1130 audit(1768957220.698:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 21 01:00:20.698000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 21 01:00:20.699699 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 21 01:00:20.709388 kernel: audit: type=1130 audit(1768957220.703:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 21 01:00:20.703000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 21 01:00:20.705703 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 21 01:00:20.719213 kernel: audit: type=1130 audit(1768957220.708:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 21 01:00:20.708000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 21 01:00:20.714201 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 21 01:00:20.721199 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 21 01:00:20.724750 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 21 01:00:20.746000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 21 01:00:20.746465 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 21 01:00:20.752724 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 21 01:00:20.754045 kernel: audit: type=1130 audit(1768957220.746:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 21 01:00:20.753000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 21 01:00:20.759892 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 21 01:00:20.767220 kernel: audit: type=1130 audit(1768957220.753:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 21 01:00:20.767262 kernel: audit: type=1130 audit(1768957220.759:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 21 01:00:20.759000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 21 01:00:20.764218 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 21 01:00:20.767000 audit: BPF prog-id=6 op=LOAD
Jan 21 01:00:20.771037 kernel: audit: type=1334 audit(1768957220.767:10): prog-id=6 op=LOAD
Jan 21 01:00:20.771007 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 21 01:00:20.798388 dracut-cmdline[325]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=febd26d0ecadb4f9abb44f6b2a89e793f13258cbb011a4bfe78289e5448c772a
Jan 21 01:00:20.843108 systemd-resolved[326]: Positive Trust Anchors:
Jan 21 01:00:20.844152 systemd-resolved[326]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 21 01:00:20.844163 systemd-resolved[326]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16
Jan 21 01:00:20.844227 systemd-resolved[326]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 21 01:00:20.877807 systemd-resolved[326]: Defaulting to hostname 'linux'.
Jan 21 01:00:20.879167 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 21 01:00:20.878000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 21 01:00:20.879743 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 21 01:00:20.984072 kernel: Loading iSCSI transport class v2.0-870.
Jan 21 01:00:21.064050 kernel: iscsi: registered transport (tcp)
Jan 21 01:00:21.117067 kernel: iscsi: registered transport (qla4xxx)
Jan 21 01:00:21.117143 kernel: QLogic iSCSI HBA Driver
Jan 21 01:00:21.159144 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 21 01:00:21.185765 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 21 01:00:21.185000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 21 01:00:21.187909 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 21 01:00:21.237893 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 21 01:00:21.243913 kernel: kauditd_printk_skb: 2 callbacks suppressed
Jan 21 01:00:21.243944 kernel: audit: type=1130 audit(1768957221.237:13): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 21 01:00:21.237000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 21 01:00:21.241191 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 21 01:00:21.261191 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 21 01:00:21.287357 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 21 01:00:21.293547 kernel: audit: type=1130 audit(1768957221.286:14): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 21 01:00:21.293587 kernel: audit: type=1334 audit(1768957221.287:15): prog-id=7 op=LOAD
Jan 21 01:00:21.286000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 21 01:00:21.287000 audit: BPF prog-id=7 op=LOAD
Jan 21 01:00:21.295177 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 21 01:00:21.292000 audit: BPF prog-id=8 op=LOAD
Jan 21 01:00:21.299060 kernel: audit: type=1334 audit(1768957221.292:16): prog-id=8 op=LOAD
Jan 21 01:00:21.325200 systemd-udevd[563]: Using default interface naming scheme 'v257'.
Jan 21 01:00:21.336128 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 21 01:00:21.347361 kernel: audit: type=1130 audit(1768957221.335:17): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 21 01:00:21.335000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 21 01:00:21.339513 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 21 01:00:21.364774 dracut-pre-trigger[638]: rd.md=0: removing MD RAID activation
Jan 21 01:00:21.367143 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 21 01:00:21.373042 kernel: audit: type=1130 audit(1768957221.367:18): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 21 01:00:21.367000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 21 01:00:21.373000 audit: BPF prog-id=9 op=LOAD
Jan 21 01:00:21.375980 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 21 01:00:21.377041 kernel: audit: type=1334 audit(1768957221.373:19): prog-id=9 op=LOAD
Jan 21 01:00:21.397560 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 21 01:00:21.397000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 21 01:00:21.404064 kernel: audit: type=1130 audit(1768957221.397:20): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 21 01:00:21.404267 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 21 01:00:21.425526 systemd-networkd[683]: lo: Link UP Jan 21 01:00:21.426282 systemd-networkd[683]: lo: Gained carrier Jan 21 01:00:21.427578 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 21 01:00:21.428528 systemd[1]: Reached target network.target - Network. Jan 21 01:00:21.433046 kernel: audit: type=1130 audit(1768957221.427:21): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:21.427000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:21.468978 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 21 01:00:21.469000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:21.476647 kernel: audit: type=1130 audit(1768957221.469:22): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:21.479431 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 21 01:00:21.591721 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 21 01:00:21.591996 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 21 01:00:21.591000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:21.592926 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 21 01:00:21.597413 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 21 01:00:21.609996 kernel: ena 0000:00:05.0: ENA device version: 0.10 Jan 21 01:00:21.610319 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Jan 21 01:00:21.615075 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy. Jan 21 01:00:21.620037 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80400000, mac addr 06:51:68:81:99:11 Jan 21 01:00:21.621486 (udev-worker)[705]: Network interface NamePolicy= disabled on kernel command line. Jan 21 01:00:21.647658 systemd-networkd[683]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 21 01:00:21.647669 systemd-networkd[683]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 21 01:00:21.656394 systemd-networkd[683]: eth0: Link UP Jan 21 01:00:21.656552 systemd-networkd[683]: eth0: Gained carrier Jan 21 01:00:21.656571 systemd-networkd[683]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 21 01:00:21.659217 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 21 01:00:21.659000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 21 01:00:21.668626 systemd-networkd[683]: eth0: DHCPv4 address 172.31.28.215/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jan 21 01:00:21.669987 kernel: cryptd: max_cpu_qlen set to 1000 Jan 21 01:00:21.726044 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input3 Jan 21 01:00:21.770904 kernel: AES CTR mode by8 optimization enabled Jan 21 01:00:21.795050 kernel: nvme nvme0: using unchecked data buffer Jan 21 01:00:21.897837 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jan 21 01:00:21.908730 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. Jan 21 01:00:21.911162 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 21 01:00:21.930074 disk-uuid[845]: Primary Header is updated. Jan 21 01:00:21.930074 disk-uuid[845]: Secondary Entries is updated. Jan 21 01:00:21.930074 disk-uuid[845]: Secondary Header is updated. Jan 21 01:00:21.992039 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. Jan 21 01:00:22.062563 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. Jan 21 01:00:22.303289 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 21 01:00:22.302000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:22.305293 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 21 01:00:22.305871 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 21 01:00:22.307133 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 21 01:00:22.309212 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 21 01:00:22.334634 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 21 01:00:22.334000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:23.069688 disk-uuid[846]: Warning: The kernel is still using the old partition table. Jan 21 01:00:23.069688 disk-uuid[846]: The new table will be used at the next reboot or after you Jan 21 01:00:23.069688 disk-uuid[846]: run partprobe(8) or kpartx(8) Jan 21 01:00:23.069688 disk-uuid[846]: The operation has completed successfully. Jan 21 01:00:23.079050 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 21 01:00:23.078000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:23.078000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:23.079168 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 21 01:00:23.080626 systemd[1]: Starting ignition-setup.service - Ignition (setup)... 
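The disk-uuid step above rewrites both copies of the GPT (the primary header near the start of the disk and the backup at the end) and then warns that the kernel keeps using its cached partition table until the next reboot or a partprobe(8)/kpartx(8) run. As a rough illustration of what that primary header is, here is a minimal sketch that validates it on a GPT disk; it assumes a 512-byte logical sector size and a hypothetical device path, and needs root to read the block device. It is not Ignition's or sgdisk's actual code.

```python
import struct
import zlib

DEVICE = "/dev/nvme0n1"   # hypothetical device path; adjust for your system
SECTOR = 512              # assumed logical sector size

# The primary GPT header lives in LBA 1, right after the protective MBR.
with open(DEVICE, "rb") as disk:
    disk.seek(1 * SECTOR)
    header = disk.read(92)

signature, revision, header_size, stored_crc = struct.unpack_from("<8sIII", header, 0)
assert signature == b"EFI PART", "not a GPT disk (or wrong sector size)"

# The header CRC32 is computed with its own CRC field zeroed out.
zeroed = header[:16] + b"\x00\x00\x00\x00" + header[20:header_size]
print("header CRC ok:", zlib.crc32(zeroed) & 0xFFFFFFFF == stored_crc)
```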
Jan 21 01:00:23.124182 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1079) Jan 21 01:00:23.124236 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem f0e9d057-8632-47ff-9f6c-54c0e93bf1a9 Jan 21 01:00:23.127584 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jan 21 01:00:23.133064 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jan 21 01:00:23.133127 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Jan 21 01:00:23.136499 systemd-networkd[683]: eth0: Gained IPv6LL Jan 21 01:00:23.144684 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem f0e9d057-8632-47ff-9f6c-54c0e93bf1a9 Jan 21 01:00:23.142000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:23.143164 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 21 01:00:23.145414 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 21 01:00:24.331351 ignition[1098]: Ignition 2.24.0 Jan 21 01:00:24.331367 ignition[1098]: Stage: fetch-offline Jan 21 01:00:24.331465 ignition[1098]: no configs at "/usr/lib/ignition/base.d" Jan 21 01:00:24.333525 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 21 01:00:24.333000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:24.331478 ignition[1098]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 21 01:00:24.331870 ignition[1098]: Ignition finished successfully Jan 21 01:00:24.336435 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
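The fetch-offline stage above is a purely local search: it reports checking the baked-in base config directory and the AWS platform directory before handing off to the online fetch stage. A minimal sketch of that search order, using only the paths named in the log (Ignition itself is a Go binary; this is illustration, not its implementation):

```python
from pathlib import Path

# Directories the fetch-offline stage reports probing in the log above.
SEARCH_DIRS = [
    Path("/usr/lib/ignition/base.d"),
    Path("/usr/lib/ignition/base.platform.d/aws"),
]

for d in SEARCH_DIRS:
    if d.is_dir() and any(d.iterdir()):
        print(f"configs found at {d}")
    else:
        print(f'no configs at "{d}"')
```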
Jan 21 01:00:24.373114 ignition[1104]: Ignition 2.24.0 Jan 21 01:00:24.373128 ignition[1104]: Stage: fetch Jan 21 01:00:24.373324 ignition[1104]: no configs at "/usr/lib/ignition/base.d" Jan 21 01:00:24.373336 ignition[1104]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 21 01:00:24.373403 ignition[1104]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 21 01:00:24.382582 ignition[1104]: PUT result: OK Jan 21 01:00:24.384206 ignition[1104]: parsed url from cmdline: "" Jan 21 01:00:24.384216 ignition[1104]: no config URL provided Jan 21 01:00:24.384224 ignition[1104]: reading system config file "/usr/lib/ignition/user.ign" Jan 21 01:00:24.384239 ignition[1104]: no config at "/usr/lib/ignition/user.ign" Jan 21 01:00:24.384256 ignition[1104]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 21 01:00:24.384767 ignition[1104]: PUT result: OK Jan 21 01:00:24.384811 ignition[1104]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Jan 21 01:00:24.385337 ignition[1104]: GET result: OK Jan 21 01:00:24.385400 ignition[1104]: parsing config with SHA512: 1c51da04e67bc309f8069e328224c391eac3362d74d1799f11db0fe3d4e7b4c6072e382f60470c963f27d4177b5a3fc4e3e2c31ff7c05a82ac8e482d48adb87f Jan 21 01:00:24.392509 unknown[1104]: fetched base config from "system" Jan 21 01:00:24.392522 unknown[1104]: fetched base config from "system" Jan 21 01:00:24.392892 ignition[1104]: fetch: fetch complete Jan 21 01:00:24.392528 unknown[1104]: fetched user config from "aws" Jan 21 01:00:24.392897 ignition[1104]: fetch: fetch passed Jan 21 01:00:24.392939 ignition[1104]: Ignition finished successfully Jan 21 01:00:24.394854 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 21 01:00:24.394000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:24.396562 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 21 01:00:24.426245 ignition[1111]: Ignition 2.24.0 Jan 21 01:00:24.426260 ignition[1111]: Stage: kargs Jan 21 01:00:24.426559 ignition[1111]: no configs at "/usr/lib/ignition/base.d" Jan 21 01:00:24.426571 ignition[1111]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 21 01:00:24.426679 ignition[1111]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 21 01:00:24.427613 ignition[1111]: PUT result: OK Jan 21 01:00:24.431124 ignition[1111]: kargs: kargs passed Jan 21 01:00:24.431213 ignition[1111]: Ignition finished successfully Jan 21 01:00:24.433488 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 21 01:00:24.432000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:24.435178 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
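The fetch stage above is a textbook IMDSv2 exchange: a PUT to the token endpoint, then a GET for user-data (the 2019-10-01 API path from the log) with that token attached, followed by logging the SHA-512 of whatever came back. A minimal sketch of the same three steps, runnable only on an EC2 instance; the 300-second token TTL is an arbitrary choice here:

```python
import hashlib
import urllib.request

IMDS = "http://169.254.169.254"

# 1. PUT /latest/api/token — IMDSv2 requires the TTL header on this request.
tok_req = urllib.request.Request(
    f"{IMDS}/latest/api/token",
    method="PUT",
    headers={"X-aws-ec2-metadata-token-ttl-seconds": "300"},
)
with urllib.request.urlopen(tok_req, timeout=5) as resp:
    token = resp.read().decode()

# 2. GET user-data, presenting the token (mirrors the GET in the log).
ud_req = urllib.request.Request(
    f"{IMDS}/2019-10-01/user-data",
    headers={"X-aws-ec2-metadata-token": token},
)
with urllib.request.urlopen(ud_req, timeout=5) as resp:
    user_data = resp.read()

# 3. Log the digest of the fetched config, as Ignition does above.
print("config SHA512:", hashlib.sha512(user_data).hexdigest())
```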
Jan 21 01:00:24.457196 ignition[1117]: Ignition 2.24.0 Jan 21 01:00:24.457211 ignition[1117]: Stage: disks Jan 21 01:00:24.457462 ignition[1117]: no configs at "/usr/lib/ignition/base.d" Jan 21 01:00:24.457474 ignition[1117]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 21 01:00:24.457589 ignition[1117]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 21 01:00:24.459250 ignition[1117]: PUT result: OK Jan 21 01:00:24.462564 ignition[1117]: disks: disks passed Jan 21 01:00:24.462648 ignition[1117]: Ignition finished successfully Jan 21 01:00:24.464158 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 21 01:00:24.463000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:24.465121 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 21 01:00:24.465828 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 21 01:00:24.466225 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 21 01:00:24.466761 systemd[1]: Reached target sysinit.target - System Initialization. Jan 21 01:00:24.467329 systemd[1]: Reached target basic.target - Basic System. Jan 21 01:00:24.469082 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 21 01:00:24.567596 systemd-fsck[1125]: ROOT: clean, 15/1631200 files, 112378/1617920 blocks Jan 21 01:00:24.571487 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 21 01:00:24.572000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:24.574774 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 21 01:00:24.793052 kernel: EXT4-fs (nvme0n1p9): mounted filesystem cf9e7296-d0ad-4d9a-b030-d4e17a1c88bf r/w with ordered data mode. Quota mode: none. Jan 21 01:00:24.793489 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 21 01:00:24.794497 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 21 01:00:24.850160 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 21 01:00:24.853132 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 21 01:00:24.854455 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 21 01:00:24.855116 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 21 01:00:24.855147 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 21 01:00:24.862367 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 21 01:00:24.864249 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
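The fsck summary above reads as used/total inodes and used/total blocks. A quick back-of-the-envelope conversion of those numbers (the 4096-byte block size is an assumption, not something the log states):

```python
# Figures reported by systemd-fsck for the ROOT filesystem above.
inodes_used, inodes_total = 15, 1_631_200
blocks_used, blocks_total = 112_378, 1_617_920
block_size = 4096  # assumed ext4 block size; not stated in the log

print(f"inodes in use: {inodes_used / inodes_total:.4%}")
print(f"blocks in use: {blocks_used / blocks_total:.2%}, "
      f"about {blocks_used * block_size / 2**20:.0f} MiB "
      f"of a {blocks_total * block_size / 2**30:.2f} GiB filesystem")
```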
Jan 21 01:00:24.875067 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1144) Jan 21 01:00:24.878098 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem f0e9d057-8632-47ff-9f6c-54c0e93bf1a9 Jan 21 01:00:24.878151 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jan 21 01:00:24.885781 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jan 21 01:00:24.885872 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Jan 21 01:00:24.887432 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 21 01:00:26.847800 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 21 01:00:26.854091 kernel: kauditd_printk_skb: 12 callbacks suppressed Jan 21 01:00:26.854119 kernel: audit: type=1130 audit(1768957226.847:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:26.847000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:26.851132 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 21 01:00:26.855672 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 21 01:00:26.872974 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 21 01:00:26.875287 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem f0e9d057-8632-47ff-9f6c-54c0e93bf1a9 Jan 21 01:00:26.897251 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 21 01:00:26.897000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:26.900665 ignition[1241]: INFO : Ignition 2.24.0 Jan 21 01:00:26.902846 ignition[1241]: INFO : Stage: mount Jan 21 01:00:26.902846 ignition[1241]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 21 01:00:26.902846 ignition[1241]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 21 01:00:26.902846 ignition[1241]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 21 01:00:26.904363 kernel: audit: type=1130 audit(1768957226.897:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:26.904430 ignition[1241]: INFO : PUT result: OK Jan 21 01:00:26.906654 ignition[1241]: INFO : mount: mount passed Jan 21 01:00:26.906654 ignition[1241]: INFO : Ignition finished successfully Jan 21 01:00:26.908086 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 21 01:00:26.907000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:26.911121 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 21 01:00:26.913352 kernel: audit: type=1130 audit(1768957226.907:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:26.941315 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
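The OEM partition gets mounted several times during these stages, and the kernel lines above show which BTRFS features it comes up with each time. To confirm what a mountpoint actually ended up with, the kernel's own record in /proc/self/mountinfo is the authoritative place to look; a small sketch, with the /sysroot/oem path taken from this log:

```python
from pathlib import Path

TARGET = "/sysroot/oem"   # mountpoint used by the initrd in this log

for line in Path("/proc/self/mountinfo").read_text().splitlines():
    # mountinfo fields: id, parent, major:minor, root, mountpoint, options,
    # optional fields..., "-", fstype, source, superblock options
    fields = line.split()
    mountpoint = fields[4]
    sep = fields.index("-")
    fstype, source, sb_opts = fields[sep + 1], fields[sep + 2], fields[sep + 3]
    if mountpoint == TARGET:
        print(f"{source} on {mountpoint} type {fstype} ({sb_opts})")
```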
Jan 21 01:00:26.967044 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1253) Jan 21 01:00:26.971178 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem f0e9d057-8632-47ff-9f6c-54c0e93bf1a9 Jan 21 01:00:26.971243 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jan 21 01:00:26.977992 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jan 21 01:00:26.978075 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Jan 21 01:00:26.980090 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 21 01:00:27.007524 ignition[1270]: INFO : Ignition 2.24.0 Jan 21 01:00:27.007524 ignition[1270]: INFO : Stage: files Jan 21 01:00:27.008703 ignition[1270]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 21 01:00:27.008703 ignition[1270]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 21 01:00:27.008703 ignition[1270]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 21 01:00:27.008703 ignition[1270]: INFO : PUT result: OK Jan 21 01:00:27.011555 ignition[1270]: DEBUG : files: compiled without relabeling support, skipping Jan 21 01:00:27.013264 ignition[1270]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 21 01:00:27.013264 ignition[1270]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 21 01:00:27.017956 ignition[1270]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 21 01:00:27.018854 ignition[1270]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 21 01:00:27.018854 ignition[1270]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 21 01:00:27.018510 unknown[1270]: wrote ssh authorized keys file for user: core Jan 21 01:00:27.021253 ignition[1270]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 21 01:00:27.021253 ignition[1270]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Jan 21 01:00:27.085238 ignition[1270]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 21 01:00:27.219834 ignition[1270]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 21 01:00:27.219834 ignition[1270]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 21 01:00:27.221518 ignition[1270]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 21 01:00:27.221518 ignition[1270]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 21 01:00:27.221518 ignition[1270]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 21 01:00:27.221518 ignition[1270]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 21 01:00:27.221518 ignition[1270]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 21 01:00:27.221518 ignition[1270]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 21 
01:00:27.221518 ignition[1270]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 21 01:00:27.226124 ignition[1270]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 21 01:00:27.226878 ignition[1270]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 21 01:00:27.226878 ignition[1270]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 21 01:00:27.229093 ignition[1270]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 21 01:00:27.229093 ignition[1270]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 21 01:00:27.229093 ignition[1270]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Jan 21 01:00:27.751105 ignition[1270]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 21 01:00:28.736566 ignition[1270]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 21 01:00:28.736566 ignition[1270]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 21 01:00:28.738794 ignition[1270]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 21 01:00:28.742473 ignition[1270]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 21 01:00:28.742473 ignition[1270]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 21 01:00:28.742473 ignition[1270]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jan 21 01:00:28.744599 ignition[1270]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jan 21 01:00:28.744599 ignition[1270]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 21 01:00:28.744599 ignition[1270]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 21 01:00:28.744599 ignition[1270]: INFO : files: files passed Jan 21 01:00:28.744599 ignition[1270]: INFO : Ignition finished successfully Jan 21 01:00:28.753232 kernel: audit: type=1130 audit(1768957228.744:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:28.744000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:28.744704 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 21 01:00:28.746771 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... 
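Two of the files-stage operations above deserve a note: the symlink written under /etc/extensions/ is what makes systemd-sysext merge the downloaded kubernetes image into /usr and /opt on the real system, and the "setting preset to enabled" step records that prepare-helm.service should be switched on when presets are applied at first boot. A hedged sketch of the equivalent filesystem effects under /sysroot; the preset file name is an assumption for illustration, not necessarily the one Ignition writes:

```python
from pathlib import Path

SYSROOT = Path("/sysroot")

# systemd-sysext activates images linked or placed under /etc/extensions/,
# merging their /usr and /opt trees over the running system.
ext_link = SYSROOT / "etc/extensions/kubernetes.raw"
ext_link.parent.mkdir(parents=True, exist_ok=True)
if not ext_link.is_symlink():
    ext_link.symlink_to("/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw")

# A preset file marks the unit as "enable" so the full preset pass on first
# boot turns it on. The file name here is illustrative only.
preset = SYSROOT / "etc/systemd/system-preset/20-ignition.preset"
preset.parent.mkdir(parents=True, exist_ok=True)
preset.write_text("enable prepare-helm.service\n")
```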
Jan 21 01:00:28.757169 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 21 01:00:28.758762 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 21 01:00:28.758857 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 21 01:00:28.759000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:28.759000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:28.765776 kernel: audit: type=1130 audit(1768957228.759:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:28.765810 kernel: audit: type=1131 audit(1768957228.759:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:28.787627 initrd-setup-root-after-ignition[1302]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 21 01:00:28.789549 initrd-setup-root-after-ignition[1302]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 21 01:00:28.790530 initrd-setup-root-after-ignition[1306]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 21 01:00:28.792344 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 21 01:00:28.792000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:28.793697 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 21 01:00:28.799315 kernel: audit: type=1130 audit(1768957228.792:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:28.799837 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 21 01:00:28.860619 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 21 01:00:28.860767 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 21 01:00:28.872420 kernel: audit: type=1130 audit(1768957228.860:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:28.872460 kernel: audit: type=1131 audit(1768957228.860:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:28.860000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:28.860000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Jan 21 01:00:28.862129 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 21 01:00:28.872951 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 21 01:00:28.874151 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 21 01:00:28.875468 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 21 01:00:28.910162 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 21 01:00:28.910000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:28.914329 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 21 01:00:28.918091 kernel: audit: type=1130 audit(1768957228.910:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:28.942669 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Jan 21 01:00:28.944732 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 21 01:00:28.945425 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 21 01:00:28.946427 systemd[1]: Stopped target timers.target - Timer Units. Jan 21 01:00:28.947258 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 21 01:00:28.946000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:28.947344 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 21 01:00:28.948483 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 21 01:00:28.948970 systemd[1]: Stopped target basic.target - Basic System. Jan 21 01:00:28.949743 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 21 01:00:28.950491 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 21 01:00:28.951190 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 21 01:00:28.951852 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jan 21 01:00:28.952558 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 21 01:00:28.953758 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 21 01:00:28.954471 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 21 01:00:28.955209 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 21 01:00:28.956236 systemd[1]: Stopped target swap.target - Swaps. Jan 21 01:00:28.956000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:28.956964 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 21 01:00:28.957084 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 21 01:00:28.958277 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 21 01:00:28.959136 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). 
Jan 21 01:00:28.959792 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 21 01:00:28.960000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:28.960142 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 21 01:00:28.960539 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 21 01:00:28.961000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:28.960632 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 21 01:00:28.962000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:28.961850 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 21 01:00:28.961933 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 21 01:00:28.962972 systemd[1]: ignition-files.service: Deactivated successfully. Jan 21 01:00:28.963066 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 21 01:00:28.964805 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 21 01:00:28.966454 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 21 01:00:28.968000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:28.966540 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 21 01:00:28.971902 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 21 01:00:28.972358 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 21 01:00:28.972446 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 21 01:00:28.974000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:28.975000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:28.976000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:28.976058 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 21 01:00:28.976151 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 21 01:00:28.976813 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 21 01:00:28.976887 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 21 01:00:28.980951 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 21 01:00:28.981846 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. 
Jan 21 01:00:28.983000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:28.983000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:28.996400 ignition[1326]: INFO : Ignition 2.24.0 Jan 21 01:00:28.997430 ignition[1326]: INFO : Stage: umount Jan 21 01:00:28.998601 ignition[1326]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 21 01:00:28.998601 ignition[1326]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 21 01:00:28.998601 ignition[1326]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 21 01:00:29.001305 ignition[1326]: INFO : PUT result: OK Jan 21 01:00:29.005036 ignition[1326]: INFO : umount: umount passed Jan 21 01:00:29.005770 ignition[1326]: INFO : Ignition finished successfully Jan 21 01:00:29.008303 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 21 01:00:29.009129 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 21 01:00:29.009000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:29.010515 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 21 01:00:29.012000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:29.010580 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 21 01:00:29.013145 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 21 01:00:29.013000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:29.014000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:29.013210 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 21 01:00:29.014323 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 21 01:00:29.015000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:29.014393 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 21 01:00:29.015448 systemd[1]: Stopped target network.target - Network. Jan 21 01:00:29.016127 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 21 01:00:29.016199 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 21 01:00:29.016643 systemd[1]: Stopped target paths.target - Path Units. Jan 21 01:00:29.017259 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 21 01:00:29.022126 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 21 01:00:29.022518 systemd[1]: Stopped target slices.target - Slice Units. Jan 21 01:00:29.023362 systemd[1]: Stopped target sockets.target - Socket Units. 
Jan 21 01:00:29.023995 systemd[1]: iscsid.socket: Deactivated successfully. Jan 21 01:00:29.024053 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 21 01:00:29.024619 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 21 01:00:29.024652 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 21 01:00:29.025000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:29.025191 systemd[1]: systemd-journald-audit.socket: Deactivated successfully. Jan 21 01:00:29.025000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:29.025228 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket. Jan 21 01:00:29.025844 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 21 01:00:29.025904 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 21 01:00:29.026455 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 21 01:00:29.026495 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 21 01:00:29.027115 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 21 01:00:29.027670 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 21 01:00:29.030443 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 21 01:00:29.033689 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 21 01:00:29.033000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:29.033825 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 21 01:00:29.036360 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 21 01:00:29.036458 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 21 01:00:29.036000 audit: BPF prog-id=6 op=UNLOAD Jan 21 01:00:29.036000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:29.038000 audit: BPF prog-id=9 op=UNLOAD Jan 21 01:00:29.039383 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jan 21 01:00:29.040091 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 21 01:00:29.040154 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 21 01:00:29.041913 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 21 01:00:29.042418 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 21 01:00:29.042491 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 21 01:00:29.044000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:29.045416 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 21 01:00:29.046105 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. 
Jan 21 01:00:29.045000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:29.046700 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 21 01:00:29.047000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:29.046757 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 21 01:00:29.048177 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 21 01:00:29.061000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:29.061856 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 21 01:00:29.062013 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 21 01:00:29.067677 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 21 01:00:29.068752 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 21 01:00:29.070155 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 21 01:00:29.070208 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 21 01:00:29.072344 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 21 01:00:29.072797 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 21 01:00:29.072000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:29.073996 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 21 01:00:29.074077 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 21 01:00:29.073000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:29.075221 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 21 01:00:29.075288 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 21 01:00:29.074000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:29.080580 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 21 01:00:29.081000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:29.082096 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jan 21 01:00:29.082164 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jan 21 01:00:29.084204 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 21 01:00:29.084255 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Jan 21 01:00:29.084000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:29.085737 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 21 01:00:29.085788 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 21 01:00:29.085000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:29.087686 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 21 01:00:29.092174 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 21 01:00:29.092000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:29.099443 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 21 01:00:29.099000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:29.099000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:29.100129 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 21 01:00:29.152944 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 21 01:00:29.153069 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 21 01:00:29.152000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:29.154436 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 21 01:00:29.155195 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 21 01:00:29.154000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:29.155291 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 21 01:00:29.156544 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 21 01:00:29.170758 systemd[1]: Switching root. Jan 21 01:00:29.201364 systemd-journald[290]: Journal stopped Jan 21 01:00:32.550491 systemd-journald[290]: Received SIGTERM from PID 1 (systemd). 
Jan 21 01:00:32.550579 kernel: SELinux: policy capability network_peer_controls=1 Jan 21 01:00:32.550608 kernel: SELinux: policy capability open_perms=1 Jan 21 01:00:32.550628 kernel: SELinux: policy capability extended_socket_class=1 Jan 21 01:00:32.550648 kernel: SELinux: policy capability always_check_network=0 Jan 21 01:00:32.550671 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 21 01:00:32.550695 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 21 01:00:32.550719 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 21 01:00:32.550739 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 21 01:00:32.550761 kernel: SELinux: policy capability userspace_initial_context=0 Jan 21 01:00:32.550786 systemd[1]: Successfully loaded SELinux policy in 98.807ms. Jan 21 01:00:32.550814 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 6.669ms. Jan 21 01:00:32.550837 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 21 01:00:32.550859 systemd[1]: Detected virtualization amazon. Jan 21 01:00:32.550880 systemd[1]: Detected architecture x86-64. Jan 21 01:00:32.550901 systemd[1]: Detected first boot. Jan 21 01:00:32.550925 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Jan 21 01:00:32.550946 zram_generator::config[1369]: No configuration found. Jan 21 01:00:32.550968 kernel: Guest personality initialized and is inactive Jan 21 01:00:32.550987 kernel: VMCI host device registered (name=vmci, major=10, minor=258) Jan 21 01:00:32.551007 kernel: Initialized host personality Jan 21 01:00:32.551039 kernel: NET: Registered PF_VSOCK protocol family Jan 21 01:00:32.551067 systemd[1]: Populated /etc with preset unit settings. Jan 21 01:00:32.551088 kernel: kauditd_printk_skb: 43 callbacks suppressed Jan 21 01:00:32.551109 kernel: audit: type=1334 audit(1768957232.090:88): prog-id=12 op=LOAD Jan 21 01:00:32.551129 kernel: audit: type=1334 audit(1768957232.090:89): prog-id=3 op=UNLOAD Jan 21 01:00:32.551148 kernel: audit: type=1334 audit(1768957232.090:90): prog-id=13 op=LOAD Jan 21 01:00:32.551166 kernel: audit: type=1334 audit(1768957232.090:91): prog-id=14 op=LOAD Jan 21 01:00:32.551186 kernel: audit: type=1334 audit(1768957232.090:92): prog-id=4 op=UNLOAD Jan 21 01:00:32.551208 kernel: audit: type=1334 audit(1768957232.090:93): prog-id=5 op=UNLOAD Jan 21 01:00:32.551229 kernel: audit: type=1131 audit(1768957232.093:94): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:32.551249 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 21 01:00:32.551271 kernel: audit: type=1334 audit(1768957232.103:95): prog-id=12 op=UNLOAD Jan 21 01:00:32.551290 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 21 01:00:32.551310 kernel: audit: type=1130 audit(1768957232.106:96): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:32.551330 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. 
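Among the messages above, systemd notes it is initializing the machine ID from the SMBIOS/DMI UUID because this is the first boot. That UUID is exposed by the kernel's DMI driver and can be inspected directly; a small illustration (the sysfs file is readable by root only, and the dash-stripped form shown is only an approximation of systemd's conversion):

```python
from pathlib import Path

# SMBIOS product UUID as exposed by the kernel (readable by root only).
product_uuid = Path("/sys/class/dmi/id/product_uuid").read_text().strip()
print("DMI product UUID:", product_uuid)

# systemd's machine ID is 32 lowercase hex characters; when seeded from DMI
# it corresponds (roughly) to the UUID above with the dashes removed.
print("candidate machine ID:", product_uuid.replace("-", "").lower())
```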
Jan 21 01:00:32.551354 kernel: audit: type=1131 audit(1768957232.106:97): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:32.551380 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 21 01:00:32.551401 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 21 01:00:32.551423 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 21 01:00:32.551444 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 21 01:00:32.551468 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 21 01:00:32.551490 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 21 01:00:32.551512 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 21 01:00:32.551533 systemd[1]: Created slice user.slice - User and Session Slice. Jan 21 01:00:32.551554 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 21 01:00:32.551575 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 21 01:00:32.551597 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 21 01:00:32.551621 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 21 01:00:32.551643 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 21 01:00:32.551664 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 21 01:00:32.551685 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 21 01:00:32.551706 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 21 01:00:32.551726 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 21 01:00:32.551747 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 21 01:00:32.551771 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 21 01:00:32.551792 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 21 01:00:32.551813 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 21 01:00:32.551835 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 21 01:00:32.551856 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 21 01:00:32.551877 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes. Jan 21 01:00:32.551898 systemd[1]: Reached target slices.target - Slice Units. Jan 21 01:00:32.551919 systemd[1]: Reached target swap.target - Swaps. Jan 21 01:00:32.551943 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 21 01:00:32.551964 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 21 01:00:32.551984 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jan 21 01:00:32.552005 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jan 21 01:00:32.552038 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket. 
Jan 21 01:00:32.552060 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 21 01:00:32.552078 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket. Jan 21 01:00:32.552102 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket. Jan 21 01:00:32.552123 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 21 01:00:32.552142 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 21 01:00:32.552162 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 21 01:00:32.552182 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 21 01:00:32.552203 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 21 01:00:32.552225 systemd[1]: Mounting media.mount - External Media Directory... Jan 21 01:00:32.552244 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 21 01:00:32.552258 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 21 01:00:32.552272 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 21 01:00:32.552285 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 21 01:00:32.552298 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 21 01:00:32.552312 systemd[1]: Reached target machines.target - Containers. Jan 21 01:00:32.552327 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 21 01:00:32.552340 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 21 01:00:32.552353 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 21 01:00:32.552370 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 21 01:00:32.552389 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 21 01:00:32.552404 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 21 01:00:32.552419 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 21 01:00:32.552432 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 21 01:00:32.552446 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 21 01:00:32.552460 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 21 01:00:32.552473 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 21 01:00:32.552488 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 21 01:00:32.552502 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 21 01:00:32.552516 systemd[1]: Stopped systemd-fsck-usr.service. Jan 21 01:00:32.552530 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 21 01:00:32.552549 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 21 01:00:32.552574 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... 
Jan 21 01:00:32.552598 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 21 01:00:32.552648 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 21 01:00:32.552668 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jan 21 01:00:32.552682 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 21 01:00:32.552700 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 21 01:00:32.552714 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 21 01:00:32.552730 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 21 01:00:32.552744 systemd[1]: Mounted media.mount - External Media Directory. Jan 21 01:00:32.552758 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 21 01:00:32.552772 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 21 01:00:32.552785 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 21 01:00:32.552802 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 21 01:00:32.552816 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 21 01:00:32.552830 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 21 01:00:32.552847 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 21 01:00:32.552861 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 21 01:00:32.552877 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 21 01:00:32.552891 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 21 01:00:32.552904 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 21 01:00:32.552920 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 21 01:00:32.552934 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 21 01:00:32.552948 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 21 01:00:32.552962 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 21 01:00:32.552976 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 21 01:00:32.552990 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 21 01:00:32.553003 kernel: fuse: init (API version 7.41) Jan 21 01:00:32.553043 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 21 01:00:32.553073 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 21 01:00:32.553093 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 21 01:00:32.553115 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 21 01:00:32.553136 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 21 01:00:32.553157 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Jan 21 01:00:32.553177 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 21 01:00:32.553203 systemd[1]: Reached target local-fs.target - Local File Systems. 
Jan 21 01:00:32.553222 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jan 21 01:00:32.553237 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 21 01:00:32.553280 systemd-journald[1447]: Collecting audit messages is enabled. Jan 21 01:00:32.553307 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 21 01:00:32.553322 systemd-journald[1447]: Journal started Jan 21 01:00:32.553350 systemd-journald[1447]: Runtime Journal (/run/log/journal/ec2f59aa1f5ff35f9c97c1e13e9e56e0) is 4.7M, max 38M, 33.2M free. Jan 21 01:00:32.176000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Jan 21 01:00:32.356000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:32.361000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:32.369000 audit: BPF prog-id=14 op=UNLOAD Jan 21 01:00:32.369000 audit: BPF prog-id=13 op=UNLOAD Jan 21 01:00:32.370000 audit: BPF prog-id=15 op=LOAD Jan 21 01:00:32.371000 audit: BPF prog-id=16 op=LOAD Jan 21 01:00:32.371000 audit: BPF prog-id=17 op=LOAD Jan 21 01:00:32.451000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:32.454000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:32.454000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:32.458000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:32.458000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:32.464000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:32.464000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 21 01:00:32.471000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:32.471000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:32.493000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:32.508000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:32.508000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:32.519000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:32.523000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:32.547000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jan 21 01:00:32.547000 audit[1447]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=5 a1=7ffce9e28870 a2=4000 a3=0 items=0 ppid=1 pid=1447 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:00:32.547000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Jan 21 01:00:32.072824 systemd[1]: Queued start job for default target multi-user.target. Jan 21 01:00:32.092679 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Jan 21 01:00:32.093736 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 21 01:00:32.604047 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 21 01:00:32.610046 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 21 01:00:32.617041 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 21 01:00:32.628118 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 21 01:00:32.633051 systemd[1]: Started systemd-journald.service - Journal Service. Jan 21 01:00:32.635000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:32.640690 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. 
Jan 21 01:00:32.641000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:32.647535 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 21 01:00:32.648000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:32.662595 kernel: ACPI: bus type drm_connector registered Jan 21 01:00:32.664656 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 21 01:00:32.664915 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 21 01:00:32.664000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:32.664000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:32.668715 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 21 01:00:32.668000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:32.677847 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 21 01:00:32.683232 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 21 01:00:32.687404 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jan 21 01:00:32.713168 systemd-journald[1447]: Time spent on flushing to /var/log/journal/ec2f59aa1f5ff35f9c97c1e13e9e56e0 is 33.403ms for 1155 entries. Jan 21 01:00:32.713168 systemd-journald[1447]: System Journal (/var/log/journal/ec2f59aa1f5ff35f9c97c1e13e9e56e0) is 8M, max 588.1M, 580.1M free. Jan 21 01:00:32.761862 systemd-journald[1447]: Received client request to flush runtime journal. Jan 21 01:00:32.761937 kernel: loop1: detected capacity change from 0 to 73176 Jan 21 01:00:32.740000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:32.740302 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 21 01:00:32.763051 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 21 01:00:32.762000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:32.764280 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 21 01:00:32.763000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 21 01:00:32.769202 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 21 01:00:32.799043 kernel: loop2: detected capacity change from 0 to 224512 Jan 21 01:00:33.008303 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jan 21 01:00:33.007000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:33.019242 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 21 01:00:33.018000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:33.019000 audit: BPF prog-id=18 op=LOAD Jan 21 01:00:33.020000 audit: BPF prog-id=19 op=LOAD Jan 21 01:00:33.020000 audit: BPF prog-id=20 op=LOAD Jan 21 01:00:33.022159 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer... Jan 21 01:00:33.023000 audit: BPF prog-id=21 op=LOAD Jan 21 01:00:33.027223 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 21 01:00:33.030196 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 21 01:00:33.039000 audit: BPF prog-id=22 op=LOAD Jan 21 01:00:33.040000 audit: BPF prog-id=23 op=LOAD Jan 21 01:00:33.040000 audit: BPF prog-id=24 op=LOAD Jan 21 01:00:33.044000 audit: BPF prog-id=25 op=LOAD Jan 21 01:00:33.045000 audit: BPF prog-id=26 op=LOAD Jan 21 01:00:33.045000 audit: BPF prog-id=27 op=LOAD Jan 21 01:00:33.044202 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager... Jan 21 01:00:33.047258 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 21 01:00:33.100606 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 21 01:00:33.106159 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 21 01:00:33.129994 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 21 01:00:33.160977 systemd-tmpfiles[1523]: ACLs are not supported, ignoring. Jan 21 01:00:33.161428 systemd-tmpfiles[1523]: ACLs are not supported, ignoring. Jan 21 01:00:33.174734 systemd-nsresourced[1524]: Not setting up BPF subsystem, as functionality has been disabled at compile time. Jan 21 01:00:33.174941 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 21 01:00:33.175000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:33.179972 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 21 01:00:33.180000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:33.189647 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager. Jan 21 01:00:33.190000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 21 01:00:33.319104 kernel: loop3: detected capacity change from 0 to 111560 Jan 21 01:00:33.334369 systemd-oomd[1521]: No swap; memory pressure usage will be degraded Jan 21 01:00:33.335589 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer. Jan 21 01:00:33.335000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:33.407292 systemd-resolved[1522]: Positive Trust Anchors: Jan 21 01:00:33.407797 systemd-resolved[1522]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 21 01:00:33.407877 systemd-resolved[1522]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Jan 21 01:00:33.408144 systemd-resolved[1522]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 21 01:00:33.416000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:33.414631 systemd-resolved[1522]: Defaulting to hostname 'linux'. Jan 21 01:00:33.416448 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 21 01:00:33.417244 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 21 01:00:33.635206 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 21 01:00:33.635000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:33.635000 audit: BPF prog-id=8 op=UNLOAD Jan 21 01:00:33.635000 audit: BPF prog-id=7 op=UNLOAD Jan 21 01:00:33.636000 audit: BPF prog-id=28 op=LOAD Jan 21 01:00:33.636000 audit: BPF prog-id=29 op=LOAD Jan 21 01:00:33.638204 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 21 01:00:33.670795 systemd-udevd[1547]: Using default interface naming scheme 'v257'. Jan 21 01:00:33.731065 kernel: loop4: detected capacity change from 0 to 50784 Jan 21 01:00:33.806000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:33.806534 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 21 01:00:33.807000 audit: BPF prog-id=30 op=LOAD Jan 21 01:00:33.810668 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 21 01:00:33.861673 (udev-worker)[1560]: Network interface NamePolicy= disabled on kernel command line. Jan 21 01:00:33.879291 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. 
Jan 21 01:00:33.927925 systemd-networkd[1555]: lo: Link UP Jan 21 01:00:33.928392 systemd-networkd[1555]: lo: Gained carrier Jan 21 01:00:33.934000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:33.932442 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 21 01:00:33.933621 systemd-networkd[1555]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 21 01:00:33.933627 systemd-networkd[1555]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 21 01:00:33.935257 systemd[1]: Reached target network.target - Network. Jan 21 01:00:33.938168 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jan 21 01:00:33.941839 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 21 01:00:33.944457 systemd-networkd[1555]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 21 01:00:33.944544 systemd-networkd[1555]: eth0: Link UP Jan 21 01:00:33.944822 systemd-networkd[1555]: eth0: Gained carrier Jan 21 01:00:33.944843 systemd-networkd[1555]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 21 01:00:33.950782 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4 Jan 21 01:00:33.956573 systemd-networkd[1555]: eth0: DHCPv4 address 172.31.28.215/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jan 21 01:00:33.957042 kernel: mousedev: PS/2 mouse device common for all mice Jan 21 01:00:33.970041 kernel: ACPI: button: Power Button [PWRF] Jan 21 01:00:33.974054 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input5 Jan 21 01:00:33.976055 kernel: ACPI: button: Sleep Button [SLPF] Jan 21 01:00:34.026124 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr Jan 21 01:00:34.029097 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jan 21 01:00:34.028000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-persistent-storage comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:34.055228 kernel: loop5: detected capacity change from 0 to 73176 Jan 21 01:00:34.082045 kernel: loop6: detected capacity change from 0 to 224512 Jan 21 01:00:34.110886 kernel: loop7: detected capacity change from 0 to 111560 Jan 21 01:00:34.139050 kernel: loop1: detected capacity change from 0 to 50784 Jan 21 01:00:34.164221 (sd-merge)[1592]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw', 'oem-ami.raw'. Jan 21 01:00:34.171332 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 21 01:00:34.175103 (sd-merge)[1592]: Merged extensions into '/usr'. Jan 21 01:00:34.191188 systemd[1]: Reload requested from client PID 1486 ('systemd-sysext') (unit systemd-sysext.service)... Jan 21 01:00:34.191210 systemd[1]: Reloading... Jan 21 01:00:34.384052 zram_generator::config[1658]: No configuration found. 
Jan 21 01:00:34.687077 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jan 21 01:00:34.688518 systemd[1]: Reloading finished in 496 ms. Jan 21 01:00:34.717366 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 21 01:00:34.717000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:34.718685 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 21 01:00:34.718944 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 21 01:00:34.718000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:34.718000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:34.765493 systemd[1]: Starting ensure-sysext.service... Jan 21 01:00:34.769199 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 21 01:00:34.771250 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 21 01:00:34.774309 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 21 01:00:34.777000 audit: BPF prog-id=31 op=LOAD Jan 21 01:00:34.777000 audit: BPF prog-id=21 op=UNLOAD Jan 21 01:00:34.779000 audit: BPF prog-id=32 op=LOAD Jan 21 01:00:34.779000 audit: BPF prog-id=30 op=UNLOAD Jan 21 01:00:34.779000 audit: BPF prog-id=33 op=LOAD Jan 21 01:00:34.779000 audit: BPF prog-id=34 op=LOAD Jan 21 01:00:34.779000 audit: BPF prog-id=28 op=UNLOAD Jan 21 01:00:34.779000 audit: BPF prog-id=29 op=UNLOAD Jan 21 01:00:34.779000 audit: BPF prog-id=35 op=LOAD Jan 21 01:00:34.781000 audit: BPF prog-id=25 op=UNLOAD Jan 21 01:00:34.781000 audit: BPF prog-id=36 op=LOAD Jan 21 01:00:34.781000 audit: BPF prog-id=37 op=LOAD Jan 21 01:00:34.781000 audit: BPF prog-id=26 op=UNLOAD Jan 21 01:00:34.781000 audit: BPF prog-id=27 op=UNLOAD Jan 21 01:00:34.782000 audit: BPF prog-id=38 op=LOAD Jan 21 01:00:34.782000 audit: BPF prog-id=22 op=UNLOAD Jan 21 01:00:34.782000 audit: BPF prog-id=39 op=LOAD Jan 21 01:00:34.782000 audit: BPF prog-id=40 op=LOAD Jan 21 01:00:34.782000 audit: BPF prog-id=23 op=UNLOAD Jan 21 01:00:34.782000 audit: BPF prog-id=24 op=UNLOAD Jan 21 01:00:34.783000 audit: BPF prog-id=41 op=LOAD Jan 21 01:00:34.783000 audit: BPF prog-id=15 op=UNLOAD Jan 21 01:00:34.783000 audit: BPF prog-id=42 op=LOAD Jan 21 01:00:34.783000 audit: BPF prog-id=43 op=LOAD Jan 21 01:00:34.783000 audit: BPF prog-id=16 op=UNLOAD Jan 21 01:00:34.783000 audit: BPF prog-id=17 op=UNLOAD Jan 21 01:00:34.784000 audit: BPF prog-id=44 op=LOAD Jan 21 01:00:34.784000 audit: BPF prog-id=18 op=UNLOAD Jan 21 01:00:34.784000 audit: BPF prog-id=45 op=LOAD Jan 21 01:00:34.784000 audit: BPF prog-id=46 op=LOAD Jan 21 01:00:34.784000 audit: BPF prog-id=19 op=UNLOAD Jan 21 01:00:34.784000 audit: BPF prog-id=20 op=UNLOAD Jan 21 01:00:34.794114 systemd[1]: Reload requested from client PID 1763 ('systemctl') (unit ensure-sysext.service)... Jan 21 01:00:34.794249 systemd[1]: Reloading... 
Jan 21 01:00:34.808531 systemd-tmpfiles[1765]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jan 21 01:00:34.808566 systemd-tmpfiles[1765]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jan 21 01:00:34.808812 systemd-tmpfiles[1765]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 21 01:00:34.810693 systemd-tmpfiles[1765]: ACLs are not supported, ignoring. Jan 21 01:00:34.810755 systemd-tmpfiles[1765]: ACLs are not supported, ignoring. Jan 21 01:00:34.819372 systemd-tmpfiles[1765]: Detected autofs mount point /boot during canonicalization of boot. Jan 21 01:00:34.819398 systemd-tmpfiles[1765]: Skipping /boot Jan 21 01:00:34.834563 systemd-tmpfiles[1765]: Detected autofs mount point /boot during canonicalization of boot. Jan 21 01:00:34.835802 systemd-tmpfiles[1765]: Skipping /boot Jan 21 01:00:34.906678 zram_generator::config[1805]: No configuration found. Jan 21 01:00:35.155997 systemd[1]: Reloading finished in 361 ms. Jan 21 01:00:35.182000 audit: BPF prog-id=47 op=LOAD Jan 21 01:00:35.182000 audit: BPF prog-id=44 op=UNLOAD Jan 21 01:00:35.182000 audit: BPF prog-id=48 op=LOAD Jan 21 01:00:35.182000 audit: BPF prog-id=49 op=LOAD Jan 21 01:00:35.182000 audit: BPF prog-id=45 op=UNLOAD Jan 21 01:00:35.182000 audit: BPF prog-id=46 op=UNLOAD Jan 21 01:00:35.184000 audit: BPF prog-id=50 op=LOAD Jan 21 01:00:35.184000 audit: BPF prog-id=35 op=UNLOAD Jan 21 01:00:35.184000 audit: BPF prog-id=51 op=LOAD Jan 21 01:00:35.184000 audit: BPF prog-id=52 op=LOAD Jan 21 01:00:35.184000 audit: BPF prog-id=36 op=UNLOAD Jan 21 01:00:35.184000 audit: BPF prog-id=37 op=UNLOAD Jan 21 01:00:35.184000 audit: BPF prog-id=53 op=LOAD Jan 21 01:00:35.184000 audit: BPF prog-id=54 op=LOAD Jan 21 01:00:35.184000 audit: BPF prog-id=33 op=UNLOAD Jan 21 01:00:35.184000 audit: BPF prog-id=34 op=UNLOAD Jan 21 01:00:35.185000 audit: BPF prog-id=55 op=LOAD Jan 21 01:00:35.185000 audit: BPF prog-id=31 op=UNLOAD Jan 21 01:00:35.185000 audit: BPF prog-id=56 op=LOAD Jan 21 01:00:35.185000 audit: BPF prog-id=38 op=UNLOAD Jan 21 01:00:35.185000 audit: BPF prog-id=57 op=LOAD Jan 21 01:00:35.185000 audit: BPF prog-id=58 op=LOAD Jan 21 01:00:35.185000 audit: BPF prog-id=39 op=UNLOAD Jan 21 01:00:35.185000 audit: BPF prog-id=40 op=UNLOAD Jan 21 01:00:35.186000 audit: BPF prog-id=59 op=LOAD Jan 21 01:00:35.186000 audit: BPF prog-id=41 op=UNLOAD Jan 21 01:00:35.186000 audit: BPF prog-id=60 op=LOAD Jan 21 01:00:35.186000 audit: BPF prog-id=61 op=LOAD Jan 21 01:00:35.186000 audit: BPF prog-id=42 op=UNLOAD Jan 21 01:00:35.186000 audit: BPF prog-id=43 op=UNLOAD Jan 21 01:00:35.187000 audit: BPF prog-id=62 op=LOAD Jan 21 01:00:35.187000 audit: BPF prog-id=32 op=UNLOAD Jan 21 01:00:35.199522 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 21 01:00:35.198000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:35.199000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:35.200473 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. 
Jan 21 01:00:35.202716 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 21 01:00:35.202000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:35.213101 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 21 01:00:35.216247 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 21 01:00:35.217872 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 21 01:00:35.222885 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 21 01:00:35.225294 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 21 01:00:35.229781 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 21 01:00:35.231275 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 21 01:00:35.233084 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 21 01:00:35.236295 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 21 01:00:35.242336 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 21 01:00:35.243512 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 21 01:00:35.243714 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 21 01:00:35.243803 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 21 01:00:35.243893 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 21 01:00:35.247977 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 21 01:00:35.248693 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 21 01:00:35.248869 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 21 01:00:35.249041 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 21 01:00:35.249128 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 21 01:00:35.249211 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 21 01:00:35.250616 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 21 01:00:35.251069 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
Jan 21 01:00:35.250000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:35.250000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:35.252271 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 21 01:00:35.255000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:35.255000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:35.255623 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 21 01:00:35.259277 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 21 01:00:35.259589 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 21 01:00:35.259000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:35.259000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:35.264000 audit[1864]: SYSTEM_BOOT pid=1864 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Jan 21 01:00:35.270910 systemd[1]: Finished ensure-sysext.service. Jan 21 01:00:35.270000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:35.272281 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 21 01:00:35.273000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:35.277283 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 21 01:00:35.277502 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 21 01:00:35.278756 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 21 01:00:35.280213 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 21 01:00:35.285197 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 21 01:00:35.287973 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 21 01:00:35.288543 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Jan 21 01:00:35.288655 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 21 01:00:35.288692 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 21 01:00:35.288747 systemd[1]: Reached target time-set.target - System Time Set. Jan 21 01:00:35.290055 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 21 01:00:35.305096 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 21 01:00:35.305000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:35.306659 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 21 01:00:35.307941 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 21 01:00:35.307000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:35.308000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:35.308000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:35.309000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:35.309354 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 21 01:00:35.309586 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 21 01:00:35.310350 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 21 01:00:35.310530 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 21 01:00:35.309000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:35.309000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:35.313047 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 21 01:00:35.326431 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 21 01:00:35.326727 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Jan 21 01:00:35.326000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:35.326000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:35.327689 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 21 01:00:35.582000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jan 21 01:00:35.582000 audit[1901]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffff5967120 a2=420 a3=0 items=0 ppid=1860 pid=1901 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:00:35.582000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 21 01:00:35.583803 augenrules[1901]: No rules Jan 21 01:00:35.585174 systemd[1]: audit-rules.service: Deactivated successfully. Jan 21 01:00:35.585470 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 21 01:00:35.597950 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 21 01:00:35.599321 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 21 01:00:35.872226 systemd-networkd[1555]: eth0: Gained IPv6LL Jan 21 01:00:35.874594 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 21 01:00:35.875254 systemd[1]: Reached target network-online.target - Network is Online. Jan 21 01:00:38.784976 ldconfig[1862]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 21 01:00:38.795205 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 21 01:00:38.797166 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 21 01:00:38.815632 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 21 01:00:38.816331 systemd[1]: Reached target sysinit.target - System Initialization. Jan 21 01:00:38.816810 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 21 01:00:38.817220 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 21 01:00:38.817543 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Jan 21 01:00:38.818322 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 21 01:00:38.818721 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 21 01:00:38.819064 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update. Jan 21 01:00:38.819432 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update. Jan 21 01:00:38.819720 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. 
Jan 21 01:00:38.820008 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 21 01:00:38.820058 systemd[1]: Reached target paths.target - Path Units. Jan 21 01:00:38.820337 systemd[1]: Reached target timers.target - Timer Units. Jan 21 01:00:38.822616 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 21 01:00:38.824300 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 21 01:00:38.826884 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jan 21 01:00:38.827380 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jan 21 01:00:38.827696 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jan 21 01:00:38.829948 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 21 01:00:38.830671 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jan 21 01:00:38.831758 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 21 01:00:38.832980 systemd[1]: Reached target sockets.target - Socket Units. Jan 21 01:00:38.833323 systemd[1]: Reached target basic.target - Basic System. Jan 21 01:00:38.833817 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 21 01:00:38.833848 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 21 01:00:38.834955 systemd[1]: Starting containerd.service - containerd container runtime... Jan 21 01:00:38.837185 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 21 01:00:38.842237 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 21 01:00:38.845172 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 21 01:00:38.847252 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 21 01:00:38.850233 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 21 01:00:38.850620 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 21 01:00:38.853239 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Jan 21 01:00:38.858194 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 21 01:00:38.863417 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 21 01:00:38.867452 jq[1918]: false Jan 21 01:00:38.868728 systemd[1]: Started ntpd.service - Network Time Service. Jan 21 01:00:38.876730 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 21 01:00:38.879103 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 21 01:00:38.887474 systemd[1]: Starting setup-oem.service - Setup OEM... Jan 21 01:00:38.894516 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 21 01:00:38.904008 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 21 01:00:38.913705 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 21 01:00:38.915364 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). 
Jan 21 01:00:38.915852 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 21 01:00:38.922273 systemd[1]: Starting update-engine.service - Update Engine... Jan 21 01:00:38.929401 extend-filesystems[1919]: Found /dev/nvme0n1p6 Jan 21 01:00:38.930556 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 21 01:00:38.935046 google_oslogin_nss_cache[1920]: oslogin_cache_refresh[1920]: Refreshing passwd entry cache Jan 21 01:00:38.932570 oslogin_cache_refresh[1920]: Refreshing passwd entry cache Jan 21 01:00:38.939138 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 21 01:00:38.940918 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 21 01:00:38.949039 google_oslogin_nss_cache[1920]: oslogin_cache_refresh[1920]: Failure getting users, quitting Jan 21 01:00:38.949039 google_oslogin_nss_cache[1920]: oslogin_cache_refresh[1920]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 21 01:00:38.949039 google_oslogin_nss_cache[1920]: oslogin_cache_refresh[1920]: Refreshing group entry cache Jan 21 01:00:38.947204 oslogin_cache_refresh[1920]: Failure getting users, quitting Jan 21 01:00:38.947223 oslogin_cache_refresh[1920]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 21 01:00:38.947272 oslogin_cache_refresh[1920]: Refreshing group entry cache Jan 21 01:00:38.949954 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 21 01:00:38.955045 google_oslogin_nss_cache[1920]: oslogin_cache_refresh[1920]: Failure getting groups, quitting Jan 21 01:00:38.955045 google_oslogin_nss_cache[1920]: oslogin_cache_refresh[1920]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 21 01:00:38.951789 oslogin_cache_refresh[1920]: Failure getting groups, quitting Jan 21 01:00:38.951802 oslogin_cache_refresh[1920]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 21 01:00:38.958525 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Jan 21 01:00:38.959761 jq[1942]: true Jan 21 01:00:38.969057 extend-filesystems[1919]: Found /dev/nvme0n1p9 Jan 21 01:00:38.969057 extend-filesystems[1919]: Checking size of /dev/nvme0n1p9 Jan 21 01:00:38.969238 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Jan 21 01:00:38.972315 systemd[1]: motdgen.service: Deactivated successfully. Jan 21 01:00:38.974291 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 21 01:00:38.982691 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 21 01:00:38.983040 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 21 01:00:38.991344 extend-filesystems[1919]: Resized partition /dev/nvme0n1p9 Jan 21 01:00:39.069672 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
Jan 21 01:00:39.080085 update_engine[1939]: I20260121 01:00:39.079479 1939 main.cc:92] Flatcar Update Engine starting Jan 21 01:00:39.084417 extend-filesystems[1987]: resize2fs 1.47.3 (8-Jul-2025) Jan 21 01:00:39.086640 ntpd[1923]: 21 Jan 01:00:39 ntpd[1923]: ntpd 4.2.8p18@1.4062-o Tue Jan 20 21:35:32 UTC 2026 (1): Starting Jan 21 01:00:39.086640 ntpd[1923]: 21 Jan 01:00:39 ntpd[1923]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 21 01:00:39.086640 ntpd[1923]: 21 Jan 01:00:39 ntpd[1923]: ---------------------------------------------------- Jan 21 01:00:39.086640 ntpd[1923]: 21 Jan 01:00:39 ntpd[1923]: ntp-4 is maintained by Network Time Foundation, Jan 21 01:00:39.086640 ntpd[1923]: 21 Jan 01:00:39 ntpd[1923]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 21 01:00:39.086640 ntpd[1923]: 21 Jan 01:00:39 ntpd[1923]: corporation. Support and training for ntp-4 are Jan 21 01:00:39.086640 ntpd[1923]: 21 Jan 01:00:39 ntpd[1923]: available at https://www.nwtime.org/support Jan 21 01:00:39.086640 ntpd[1923]: 21 Jan 01:00:39 ntpd[1923]: ---------------------------------------------------- Jan 21 01:00:39.086640 ntpd[1923]: 21 Jan 01:00:39 ntpd[1923]: proto: precision = 0.064 usec (-24) Jan 21 01:00:39.083527 ntpd[1923]: ntpd 4.2.8p18@1.4062-o Tue Jan 20 21:35:32 UTC 2026 (1): Starting Jan 21 01:00:39.083604 ntpd[1923]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 21 01:00:39.083616 ntpd[1923]: ---------------------------------------------------- Jan 21 01:00:39.083626 ntpd[1923]: ntp-4 is maintained by Network Time Foundation, Jan 21 01:00:39.083636 ntpd[1923]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 21 01:00:39.083646 ntpd[1923]: corporation. Support and training for ntp-4 are Jan 21 01:00:39.083655 ntpd[1923]: available at https://www.nwtime.org/support Jan 21 01:00:39.083665 ntpd[1923]: ---------------------------------------------------- Jan 21 01:00:39.086593 ntpd[1923]: proto: precision = 0.064 usec (-24) Jan 21 01:00:39.100399 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 1617920 to 2604027 blocks Jan 21 01:00:39.100479 ntpd[1923]: 21 Jan 01:00:39 ntpd[1923]: basedate set to 2026-01-08 Jan 21 01:00:39.100479 ntpd[1923]: 21 Jan 01:00:39 ntpd[1923]: gps base set to 2026-01-11 (week 2401) Jan 21 01:00:39.100479 ntpd[1923]: 21 Jan 01:00:39 ntpd[1923]: Listen and drop on 0 v6wildcard [::]:123 Jan 21 01:00:39.100479 ntpd[1923]: 21 Jan 01:00:39 ntpd[1923]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 21 01:00:39.100479 ntpd[1923]: 21 Jan 01:00:39 ntpd[1923]: Listen normally on 2 lo 127.0.0.1:123 Jan 21 01:00:39.100479 ntpd[1923]: 21 Jan 01:00:39 ntpd[1923]: Listen normally on 3 eth0 172.31.28.215:123 Jan 21 01:00:39.100479 ntpd[1923]: 21 Jan 01:00:39 ntpd[1923]: Listen normally on 4 lo [::1]:123 Jan 21 01:00:39.100479 ntpd[1923]: 21 Jan 01:00:39 ntpd[1923]: Listen normally on 5 eth0 [fe80::451:68ff:fe81:9911%2]:123 Jan 21 01:00:39.100479 ntpd[1923]: 21 Jan 01:00:39 ntpd[1923]: Listening on routing socket on fd #22 for interface updates Jan 21 01:00:39.100479 ntpd[1923]: 21 Jan 01:00:39 ntpd[1923]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 21 01:00:39.100479 ntpd[1923]: 21 Jan 01:00:39 ntpd[1923]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 21 01:00:39.100895 jq[1961]: true Jan 21 01:00:39.093274 ntpd[1923]: basedate set to 2026-01-08 Jan 21 01:00:39.093302 ntpd[1923]: gps base set to 2026-01-11 (week 2401) Jan 21 01:00:39.093447 ntpd[1923]: Listen and drop on 0 v6wildcard [::]:123 Jan 21 01:00:39.093475 ntpd[1923]: Listen and 
drop on 1 v4wildcard 0.0.0.0:123 Jan 21 01:00:39.095753 ntpd[1923]: Listen normally on 2 lo 127.0.0.1:123 Jan 21 01:00:39.095794 ntpd[1923]: Listen normally on 3 eth0 172.31.28.215:123 Jan 21 01:00:39.095832 ntpd[1923]: Listen normally on 4 lo [::1]:123 Jan 21 01:00:39.095862 ntpd[1923]: Listen normally on 5 eth0 [fe80::451:68ff:fe81:9911%2]:123 Jan 21 01:00:39.095889 ntpd[1923]: Listening on routing socket on fd #22 for interface updates Jan 21 01:00:39.098423 ntpd[1923]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 21 01:00:39.098457 ntpd[1923]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 21 01:00:39.126689 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 2604027 Jan 21 01:00:39.133884 tar[1956]: linux-amd64/LICENSE Jan 21 01:00:39.142696 tar[1956]: linux-amd64/helm Jan 21 01:00:39.142775 extend-filesystems[1987]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Jan 21 01:00:39.142775 extend-filesystems[1987]: old_desc_blocks = 1, new_desc_blocks = 2 Jan 21 01:00:39.142775 extend-filesystems[1987]: The filesystem on /dev/nvme0n1p9 is now 2604027 (4k) blocks long. Jan 21 01:00:39.137159 systemd[1]: Finished setup-oem.service - Setup OEM. Jan 21 01:00:39.171232 dbus-daemon[1916]: [system] SELinux support is enabled Jan 21 01:00:39.181705 extend-filesystems[1919]: Resized filesystem in /dev/nvme0n1p9 Jan 21 01:00:39.156389 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Jan 21 01:00:39.158620 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 21 01:00:39.159693 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 21 01:00:39.171592 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 21 01:00:39.182256 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 21 01:00:39.182296 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 21 01:00:39.187933 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 21 01:00:39.187962 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 21 01:00:39.208620 systemd-logind[1938]: Watching system buttons on /dev/input/event2 (Power Button) Jan 21 01:00:39.208651 systemd-logind[1938]: Watching system buttons on /dev/input/event3 (Sleep Button) Jan 21 01:00:39.208679 systemd-logind[1938]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 21 01:00:39.212665 systemd-logind[1938]: New seat seat0. Jan 21 01:00:39.214726 dbus-daemon[1916]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.2' (uid=244 pid=1555 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jan 21 01:00:39.221739 update_engine[1939]: I20260121 01:00:39.221503 1939 update_check_scheduler.cc:74] Next update check in 11m6s Jan 21 01:00:39.228056 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Jan 21 01:00:39.228875 systemd[1]: Started systemd-logind.service - User Login Management. Jan 21 01:00:39.230780 systemd[1]: Started update-engine.service - Update Engine. 
Jan 21 01:00:39.242620 coreos-metadata[1915]: Jan 21 01:00:39.242 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jan 21 01:00:39.251820 coreos-metadata[1915]: Jan 21 01:00:39.250 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Jan 21 01:00:39.253250 coreos-metadata[1915]: Jan 21 01:00:39.252 INFO Fetch successful Jan 21 01:00:39.253250 coreos-metadata[1915]: Jan 21 01:00:39.252 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Jan 21 01:00:39.253250 coreos-metadata[1915]: Jan 21 01:00:39.252 INFO Fetch successful Jan 21 01:00:39.253250 coreos-metadata[1915]: Jan 21 01:00:39.252 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Jan 21 01:00:39.257363 coreos-metadata[1915]: Jan 21 01:00:39.257 INFO Fetch successful Jan 21 01:00:39.257363 coreos-metadata[1915]: Jan 21 01:00:39.257 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Jan 21 01:00:39.259510 coreos-metadata[1915]: Jan 21 01:00:39.259 INFO Fetch successful Jan 21 01:00:39.259510 coreos-metadata[1915]: Jan 21 01:00:39.259 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Jan 21 01:00:39.260056 coreos-metadata[1915]: Jan 21 01:00:39.260 INFO Fetch failed with 404: resource not found Jan 21 01:00:39.260140 coreos-metadata[1915]: Jan 21 01:00:39.260 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Jan 21 01:00:39.264167 coreos-metadata[1915]: Jan 21 01:00:39.264 INFO Fetch successful Jan 21 01:00:39.264167 coreos-metadata[1915]: Jan 21 01:00:39.264 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Jan 21 01:00:39.264329 coreos-metadata[1915]: Jan 21 01:00:39.264 INFO Fetch successful Jan 21 01:00:39.264329 coreos-metadata[1915]: Jan 21 01:00:39.264 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Jan 21 01:00:39.264450 coreos-metadata[1915]: Jan 21 01:00:39.264 INFO Fetch successful Jan 21 01:00:39.264512 coreos-metadata[1915]: Jan 21 01:00:39.264 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Jan 21 01:00:39.265667 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 21 01:00:39.267270 coreos-metadata[1915]: Jan 21 01:00:39.267 INFO Fetch successful Jan 21 01:00:39.267349 coreos-metadata[1915]: Jan 21 01:00:39.267 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Jan 21 01:00:39.277691 coreos-metadata[1915]: Jan 21 01:00:39.277 INFO Fetch successful Jan 21 01:00:39.376312 bash[2026]: Updated "/home/core/.ssh/authorized_keys" Jan 21 01:00:39.384142 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 21 01:00:39.390930 systemd[1]: Starting sshkeys.service... Jan 21 01:00:39.421486 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 21 01:00:39.423282 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 21 01:00:39.473945 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 21 01:00:39.481096 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... 
Jan 21 01:00:39.574089 amazon-ssm-agent[1997]: Initializing new seelog logger Jan 21 01:00:39.582940 amazon-ssm-agent[1997]: New Seelog Logger Creation Complete Jan 21 01:00:39.582940 amazon-ssm-agent[1997]: 2026/01/21 01:00:39 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 21 01:00:39.582940 amazon-ssm-agent[1997]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 21 01:00:39.584168 amazon-ssm-agent[1997]: 2026/01/21 01:00:39 processing appconfig overrides Jan 21 01:00:39.590318 amazon-ssm-agent[1997]: 2026/01/21 01:00:39 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 21 01:00:39.590318 amazon-ssm-agent[1997]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 21 01:00:39.590318 amazon-ssm-agent[1997]: 2026/01/21 01:00:39 processing appconfig overrides Jan 21 01:00:39.590318 amazon-ssm-agent[1997]: 2026/01/21 01:00:39 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 21 01:00:39.590318 amazon-ssm-agent[1997]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 21 01:00:39.590318 amazon-ssm-agent[1997]: 2026/01/21 01:00:39 processing appconfig overrides Jan 21 01:00:39.613282 amazon-ssm-agent[1997]: 2026-01-21 01:00:39.5886 INFO Proxy environment variables: Jan 21 01:00:39.622058 amazon-ssm-agent[1997]: 2026/01/21 01:00:39 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 21 01:00:39.622058 amazon-ssm-agent[1997]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 21 01:00:39.622480 amazon-ssm-agent[1997]: 2026/01/21 01:00:39 processing appconfig overrides Jan 21 01:00:39.710162 amazon-ssm-agent[1997]: 2026-01-21 01:00:39.5886 INFO https_proxy: Jan 21 01:00:39.790732 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jan 21 01:00:39.801517 dbus-daemon[1916]: [system] Successfully activated service 'org.freedesktop.hostname1' Jan 21 01:00:39.803724 dbus-daemon[1916]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.8' (uid=0 pid=2003 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jan 21 01:00:39.804199 locksmithd[2008]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 21 01:00:39.815475 amazon-ssm-agent[1997]: 2026-01-21 01:00:39.5886 INFO http_proxy: Jan 21 01:00:39.818242 systemd[1]: Starting polkit.service - Authorization Manager... Jan 21 01:00:39.821827 coreos-metadata[2051]: Jan 21 01:00:39.820 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jan 21 01:00:39.821827 coreos-metadata[2051]: Jan 21 01:00:39.820 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Jan 21 01:00:39.821827 coreos-metadata[2051]: Jan 21 01:00:39.821 INFO Fetch successful Jan 21 01:00:39.821827 coreos-metadata[2051]: Jan 21 01:00:39.821 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Jan 21 01:00:39.827263 coreos-metadata[2051]: Jan 21 01:00:39.826 INFO Fetch successful Jan 21 01:00:39.829380 unknown[2051]: wrote ssh authorized keys file for user: core Jan 21 01:00:39.908120 amazon-ssm-agent[1997]: 2026-01-21 01:00:39.5886 INFO no_proxy: Jan 21 01:00:39.934444 update-ssh-keys[2129]: Updated "/home/core/.ssh/authorized_keys" Jan 21 01:00:39.944923 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 21 01:00:39.963257 systemd[1]: Finished sshkeys.service. 
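The sshkeys units above fetch the instance's OpenSSH public key from the same metadata service and rewrite /home/core/.ssh/authorized_keys. A sketch of an idempotent version of that final step only; this is not the Flatcar update-ssh-keys implementation, and the path, permissions, and function name are illustrative assumptions:

    import os

    def ensure_authorized_key(key_line, path="/home/core/.ssh/authorized_keys"):
        # Create ~/.ssh if needed, append the key only if it is not already present,
        # and keep the file readable by the owner only.
        os.makedirs(os.path.dirname(path), mode=0o700, exist_ok=True)
        existing = []
        if os.path.exists(path):
            with open(path) as f:
                existing = [line.strip() for line in f]
        if key_line.strip() not in existing:
            with open(path, "a") as f:
                f.write(key_line.strip() + "\n")
        os.chmod(path, 0o600)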
Jan 21 01:00:40.007056 amazon-ssm-agent[1997]: 2026-01-21 01:00:39.5888 INFO Checking if agent identity type OnPrem can be assumed Jan 21 01:00:40.108039 amazon-ssm-agent[1997]: 2026-01-21 01:00:39.5890 INFO Checking if agent identity type EC2 can be assumed Jan 21 01:00:40.167588 polkitd[2106]: Started polkitd version 126 Jan 21 01:00:40.179819 polkitd[2106]: Loading rules from directory /etc/polkit-1/rules.d Jan 21 01:00:40.186600 polkitd[2106]: Loading rules from directory /run/polkit-1/rules.d Jan 21 01:00:40.189178 polkitd[2106]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Jan 21 01:00:40.193853 polkitd[2106]: Loading rules from directory /usr/local/share/polkit-1/rules.d Jan 21 01:00:40.194333 polkitd[2106]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Jan 21 01:00:40.198676 polkitd[2106]: Loading rules from directory /usr/share/polkit-1/rules.d Jan 21 01:00:40.204314 polkitd[2106]: Finished loading, compiling and executing 2 rules Jan 21 01:00:40.204769 systemd[1]: Started polkit.service - Authorization Manager. Jan 21 01:00:40.209149 amazon-ssm-agent[1997]: 2026-01-21 01:00:39.8267 INFO Agent will take identity from EC2 Jan 21 01:00:40.213412 dbus-daemon[1916]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jan 21 01:00:40.217038 polkitd[2106]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jan 21 01:00:40.240438 containerd[1962]: time="2026-01-21T01:00:40Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jan 21 01:00:40.240438 containerd[1962]: time="2026-01-21T01:00:40.239626327Z" level=info msg="starting containerd" revision=fcd43222d6b07379a4be9786bda52438f0dd16a1 version=v2.1.5 Jan 21 01:00:40.264985 systemd-hostnamed[2003]: Hostname set to (transient) Jan 21 01:00:40.265009 systemd-resolved[1522]: System hostname changed to 'ip-172-31-28-215'. 
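polkitd above scans four rules directories in a fixed order, only warning when one is absent (two are missing on this image), before compiling the two rules it did find. A small sketch that reproduces the same scan; the directory list is copied from the log lines, the reporting format is made up:

    from pathlib import Path

    RULES_DIRS = [
        "/etc/polkit-1/rules.d",
        "/run/polkit-1/rules.d",
        "/usr/local/share/polkit-1/rules.d",
        "/usr/share/polkit-1/rules.d",
    ]

    for d in RULES_DIRS:
        p = Path(d)
        if not p.is_dir():
            print(f"{d}: missing (polkitd logs an error and continues)")
        else:
            print(f"{d}: {len(sorted(p.glob('*.rules')))} rule file(s)")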
Jan 21 01:00:40.307755 amazon-ssm-agent[1997]: 2026-01-21 01:00:39.8296 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.3.0.0 Jan 21 01:00:40.319959 containerd[1962]: time="2026-01-21T01:00:40.315532914Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="12.178µs" Jan 21 01:00:40.319959 containerd[1962]: time="2026-01-21T01:00:40.315576675Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jan 21 01:00:40.319959 containerd[1962]: time="2026-01-21T01:00:40.315624976Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jan 21 01:00:40.319959 containerd[1962]: time="2026-01-21T01:00:40.315641560Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jan 21 01:00:40.319959 containerd[1962]: time="2026-01-21T01:00:40.315817036Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jan 21 01:00:40.319959 containerd[1962]: time="2026-01-21T01:00:40.315835957Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 21 01:00:40.319959 containerd[1962]: time="2026-01-21T01:00:40.315899667Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 21 01:00:40.319959 containerd[1962]: time="2026-01-21T01:00:40.315914500Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 21 01:00:40.319959 containerd[1962]: time="2026-01-21T01:00:40.316219227Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 21 01:00:40.319959 containerd[1962]: time="2026-01-21T01:00:40.316242561Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 21 01:00:40.319959 containerd[1962]: time="2026-01-21T01:00:40.316259451Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 21 01:00:40.319959 containerd[1962]: time="2026-01-21T01:00:40.316271544Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 21 01:00:40.325047 containerd[1962]: time="2026-01-21T01:00:40.316479169Z" level=info msg="skip loading plugin" error="EROFS unsupported, please `modprobe erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 21 01:00:40.325047 containerd[1962]: time="2026-01-21T01:00:40.316509959Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jan 21 01:00:40.325047 containerd[1962]: time="2026-01-21T01:00:40.316601545Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jan 21 01:00:40.325047 containerd[1962]: time="2026-01-21T01:00:40.316828104Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 21 01:00:40.325047 containerd[1962]: time="2026-01-21T01:00:40.316864535Z" level=info msg="skip loading plugin" error="lstat 
/var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 21 01:00:40.325047 containerd[1962]: time="2026-01-21T01:00:40.316879777Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jan 21 01:00:40.325047 containerd[1962]: time="2026-01-21T01:00:40.318621140Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jan 21 01:00:40.325047 containerd[1962]: time="2026-01-21T01:00:40.319127873Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jan 21 01:00:40.325047 containerd[1962]: time="2026-01-21T01:00:40.319236826Z" level=info msg="metadata content store policy set" policy=shared Jan 21 01:00:40.327545 containerd[1962]: time="2026-01-21T01:00:40.327305168Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jan 21 01:00:40.327545 containerd[1962]: time="2026-01-21T01:00:40.327391295Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 21 01:00:40.327674 containerd[1962]: time="2026-01-21T01:00:40.327545594Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 21 01:00:40.327674 containerd[1962]: time="2026-01-21T01:00:40.327566772Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jan 21 01:00:40.327674 containerd[1962]: time="2026-01-21T01:00:40.327586524Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jan 21 01:00:40.327674 containerd[1962]: time="2026-01-21T01:00:40.327603143Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jan 21 01:00:40.327674 containerd[1962]: time="2026-01-21T01:00:40.327621115Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jan 21 01:00:40.327674 containerd[1962]: time="2026-01-21T01:00:40.327634632Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jan 21 01:00:40.327674 containerd[1962]: time="2026-01-21T01:00:40.327652116Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jan 21 01:00:40.327674 containerd[1962]: time="2026-01-21T01:00:40.327670242Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jan 21 01:00:40.327916 containerd[1962]: time="2026-01-21T01:00:40.327685789Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jan 21 01:00:40.327916 containerd[1962]: time="2026-01-21T01:00:40.327702096Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jan 21 01:00:40.327916 containerd[1962]: time="2026-01-21T01:00:40.327717414Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jan 21 01:00:40.327916 containerd[1962]: time="2026-01-21T01:00:40.327735176Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jan 21 01:00:40.327916 containerd[1962]: time="2026-01-21T01:00:40.327888116Z" 
level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jan 21 01:00:40.327916 containerd[1962]: time="2026-01-21T01:00:40.327914407Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jan 21 01:00:40.328132 containerd[1962]: time="2026-01-21T01:00:40.327936093Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jan 21 01:00:40.328132 containerd[1962]: time="2026-01-21T01:00:40.327951538Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jan 21 01:00:40.328132 containerd[1962]: time="2026-01-21T01:00:40.327966286Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jan 21 01:00:40.328132 containerd[1962]: time="2026-01-21T01:00:40.327995526Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jan 21 01:00:40.330264 containerd[1962]: time="2026-01-21T01:00:40.328014026Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jan 21 01:00:40.330264 containerd[1962]: time="2026-01-21T01:00:40.329089833Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jan 21 01:00:40.330264 containerd[1962]: time="2026-01-21T01:00:40.329114735Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jan 21 01:00:40.330264 containerd[1962]: time="2026-01-21T01:00:40.329135011Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jan 21 01:00:40.330264 containerd[1962]: time="2026-01-21T01:00:40.329150831Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jan 21 01:00:40.330264 containerd[1962]: time="2026-01-21T01:00:40.329190222Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jan 21 01:00:40.330264 containerd[1962]: time="2026-01-21T01:00:40.329263024Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jan 21 01:00:40.330264 containerd[1962]: time="2026-01-21T01:00:40.329283627Z" level=info msg="Start snapshots syncer" Jan 21 01:00:40.330264 containerd[1962]: time="2026-01-21T01:00:40.329343240Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jan 21 01:00:40.330628 containerd[1962]: time="2026-01-21T01:00:40.329912652Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jan 21 01:00:40.330628 containerd[1962]: time="2026-01-21T01:00:40.329987762Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jan 21 01:00:40.332332 containerd[1962]: time="2026-01-21T01:00:40.331103877Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jan 21 01:00:40.332332 containerd[1962]: time="2026-01-21T01:00:40.331309069Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jan 21 01:00:40.332332 containerd[1962]: time="2026-01-21T01:00:40.331342964Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jan 21 01:00:40.332332 containerd[1962]: time="2026-01-21T01:00:40.331359765Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jan 21 01:00:40.332332 containerd[1962]: time="2026-01-21T01:00:40.331375441Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jan 21 01:00:40.332332 containerd[1962]: time="2026-01-21T01:00:40.331395537Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jan 21 01:00:40.332332 containerd[1962]: time="2026-01-21T01:00:40.331417485Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jan 21 01:00:40.332332 containerd[1962]: time="2026-01-21T01:00:40.331432925Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jan 21 01:00:40.332332 containerd[1962]: time="2026-01-21T01:00:40.331450105Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jan 21 
01:00:40.332332 containerd[1962]: time="2026-01-21T01:00:40.331467196Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jan 21 01:00:40.333295 containerd[1962]: time="2026-01-21T01:00:40.333065096Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 21 01:00:40.333295 containerd[1962]: time="2026-01-21T01:00:40.333168265Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 21 01:00:40.333295 containerd[1962]: time="2026-01-21T01:00:40.333186247Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 21 01:00:40.333295 containerd[1962]: time="2026-01-21T01:00:40.333203042Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 21 01:00:40.333295 containerd[1962]: time="2026-01-21T01:00:40.333217391Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jan 21 01:00:40.333295 containerd[1962]: time="2026-01-21T01:00:40.333236927Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jan 21 01:00:40.333295 containerd[1962]: time="2026-01-21T01:00:40.333255230Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jan 21 01:00:40.333295 containerd[1962]: time="2026-01-21T01:00:40.333272232Z" level=info msg="runtime interface created" Jan 21 01:00:40.333295 containerd[1962]: time="2026-01-21T01:00:40.333281596Z" level=info msg="created NRI interface" Jan 21 01:00:40.333295 containerd[1962]: time="2026-01-21T01:00:40.333293804Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jan 21 01:00:40.333661 containerd[1962]: time="2026-01-21T01:00:40.333313215Z" level=info msg="Connect containerd service" Jan 21 01:00:40.333661 containerd[1962]: time="2026-01-21T01:00:40.333345955Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 21 01:00:40.335140 containerd[1962]: time="2026-01-21T01:00:40.334418827Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 21 01:00:40.349282 amazon-ssm-agent[1997]: 2026/01/21 01:00:40 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 21 01:00:40.349282 amazon-ssm-agent[1997]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 21 01:00:40.350358 amazon-ssm-agent[1997]: 2026/01/21 01:00:40 processing appconfig overrides Jan 21 01:00:40.394077 amazon-ssm-agent[1997]: 2026-01-21 01:00:39.8296 INFO [amazon-ssm-agent] OS: linux, Arch: amd64 Jan 21 01:00:40.394319 amazon-ssm-agent[1997]: 2026-01-21 01:00:39.8296 INFO [amazon-ssm-agent] Starting Core Agent Jan 21 01:00:40.394319 amazon-ssm-agent[1997]: 2026-01-21 01:00:39.8296 INFO [amazon-ssm-agent] Registrar detected. 
Attempting registration Jan 21 01:00:40.394319 amazon-ssm-agent[1997]: 2026-01-21 01:00:39.8296 INFO [Registrar] Starting registrar module Jan 21 01:00:40.394319 amazon-ssm-agent[1997]: 2026-01-21 01:00:39.8454 INFO [EC2Identity] Checking disk for registration info Jan 21 01:00:40.394319 amazon-ssm-agent[1997]: 2026-01-21 01:00:39.8455 INFO [EC2Identity] No registration info found for ec2 instance, attempting registration Jan 21 01:00:40.394994 amazon-ssm-agent[1997]: 2026-01-21 01:00:39.8455 INFO [EC2Identity] Generating registration keypair Jan 21 01:00:40.394994 amazon-ssm-agent[1997]: 2026-01-21 01:00:40.2850 INFO [EC2Identity] Checking write access before registering Jan 21 01:00:40.394994 amazon-ssm-agent[1997]: 2026-01-21 01:00:40.2855 INFO [EC2Identity] Registering EC2 instance with Systems Manager Jan 21 01:00:40.394994 amazon-ssm-agent[1997]: 2026-01-21 01:00:40.3488 INFO [EC2Identity] EC2 registration was successful. Jan 21 01:00:40.394994 amazon-ssm-agent[1997]: 2026-01-21 01:00:40.3488 INFO [amazon-ssm-agent] Registration attempted. Resuming core agent startup. Jan 21 01:00:40.394994 amazon-ssm-agent[1997]: 2026-01-21 01:00:40.3489 INFO [CredentialRefresher] credentialRefresher has started Jan 21 01:00:40.394994 amazon-ssm-agent[1997]: 2026-01-21 01:00:40.3489 INFO [CredentialRefresher] Starting credentials refresher loop Jan 21 01:00:40.394994 amazon-ssm-agent[1997]: 2026-01-21 01:00:40.3934 INFO EC2RoleProvider Successfully connected with instance profile role credentials Jan 21 01:00:40.394994 amazon-ssm-agent[1997]: 2026-01-21 01:00:40.3938 INFO [CredentialRefresher] Credentials ready Jan 21 01:00:40.406549 amazon-ssm-agent[1997]: 2026-01-21 01:00:40.3949 INFO [CredentialRefresher] Next credential rotation will be in 29.999975525300002 minutes Jan 21 01:00:40.452045 sshd_keygen[1950]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 21 01:00:40.500309 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 21 01:00:40.506933 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 21 01:00:40.528982 systemd[1]: issuegen.service: Deactivated successfully. Jan 21 01:00:40.529941 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 21 01:00:40.536134 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 21 01:00:40.580213 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 21 01:00:40.584691 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 21 01:00:40.588739 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 21 01:00:40.590277 systemd[1]: Reached target getty.target - Login Prompts. Jan 21 01:00:40.631945 tar[1956]: linux-amd64/README.md Jan 21 01:00:40.649835 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 21 01:00:40.710039 containerd[1962]: time="2026-01-21T01:00:40.709745787Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 21 01:00:40.710039 containerd[1962]: time="2026-01-21T01:00:40.709895822Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Jan 21 01:00:40.710039 containerd[1962]: time="2026-01-21T01:00:40.709953082Z" level=info msg="Start subscribing containerd event" Jan 21 01:00:40.710039 containerd[1962]: time="2026-01-21T01:00:40.709995831Z" level=info msg="Start recovering state" Jan 21 01:00:40.710314 containerd[1962]: time="2026-01-21T01:00:40.710300428Z" level=info msg="Start event monitor" Jan 21 01:00:40.710758 containerd[1962]: time="2026-01-21T01:00:40.710359448Z" level=info msg="Start cni network conf syncer for default" Jan 21 01:00:40.710758 containerd[1962]: time="2026-01-21T01:00:40.710379184Z" level=info msg="Start streaming server" Jan 21 01:00:40.710758 containerd[1962]: time="2026-01-21T01:00:40.710387662Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jan 21 01:00:40.710758 containerd[1962]: time="2026-01-21T01:00:40.710395694Z" level=info msg="runtime interface starting up..." Jan 21 01:00:40.710758 containerd[1962]: time="2026-01-21T01:00:40.710405965Z" level=info msg="starting plugins..." Jan 21 01:00:40.710758 containerd[1962]: time="2026-01-21T01:00:40.710418172Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jan 21 01:00:40.711119 systemd[1]: Started containerd.service - containerd container runtime. Jan 21 01:00:40.712389 containerd[1962]: time="2026-01-21T01:00:40.712328533Z" level=info msg="containerd successfully booted in 0.479756s" Jan 21 01:00:41.407861 amazon-ssm-agent[1997]: 2026-01-21 01:00:41.4077 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Jan 21 01:00:41.510038 amazon-ssm-agent[1997]: 2026-01-21 01:00:41.4110 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2194) started Jan 21 01:00:41.610250 amazon-ssm-agent[1997]: 2026-01-21 01:00:41.4110 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Jan 21 01:00:44.129625 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 21 01:00:44.131715 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 21 01:00:44.133160 systemd[1]: Startup finished in 3.609s (kernel) + 10.326s (initrd) + 14.034s (userspace) = 27.970s. Jan 21 01:00:44.150678 (kubelet)[2211]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 21 01:00:45.192379 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 21 01:00:45.193881 systemd[1]: Started sshd@0-172.31.28.215:22-68.220.241.50:51342.service - OpenSSH per-connection server daemon (68.220.241.50:51342). Jan 21 01:00:45.781134 sshd[2221]: Accepted publickey for core from 68.220.241.50 port 51342 ssh2: RSA SHA256:ynuLn8tJCPqgpXkJmbCRq4xTnR0LSutdg0yVYFUgOn4 Jan 21 01:00:45.783287 sshd-session[2221]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 21 01:00:45.790421 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 21 01:00:45.792100 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 21 01:00:45.800206 systemd-logind[1938]: New session 1 of user core. Jan 21 01:00:45.812558 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 21 01:00:45.816244 systemd[1]: Starting user@500.service - User Manager for UID 500... 
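The "Startup finished" line above splits the 27.970s boot into kernel, initrd, and userspace phases. A one-line sanity check of that breakdown; each phase is reported rounded to the millisecond, so the sum can differ from the total in the last digit:

    phases = {"kernel": 3.609, "initrd": 10.326, "userspace": 14.034}  # seconds, from the log
    print(f"sum of phases = {sum(phases.values()):.3f}s (log reports 27.970s)")  # 27.969s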
Jan 21 01:00:45.834769 (systemd)[2227]: pam_unix(systemd-user:session): session opened for user core(uid=500) by core(uid=0) Jan 21 01:00:45.841195 systemd-logind[1938]: New session 2 of user core. Jan 21 01:00:46.026973 systemd[2227]: Queued start job for default target default.target. Jan 21 01:00:46.033781 systemd[2227]: Created slice app.slice - User Application Slice. Jan 21 01:00:46.033831 systemd[2227]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories. Jan 21 01:00:46.033853 systemd[2227]: Reached target paths.target - Paths. Jan 21 01:00:46.034381 systemd[2227]: Reached target timers.target - Timers. Jan 21 01:00:46.035887 systemd[2227]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 21 01:00:46.036899 systemd[2227]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories... Jan 21 01:00:46.059300 systemd[2227]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 21 01:00:46.059713 systemd[2227]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories. Jan 21 01:00:46.060636 systemd[2227]: Reached target sockets.target - Sockets. Jan 21 01:00:46.060832 systemd[2227]: Reached target basic.target - Basic System. Jan 21 01:00:46.061035 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 21 01:00:46.062167 systemd[2227]: Reached target default.target - Main User Target. Jan 21 01:00:46.062393 systemd[2227]: Startup finished in 213ms. Jan 21 01:00:46.066331 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 21 01:00:47.134368 systemd-resolved[1522]: Clock change detected. Flushing caches. Jan 21 01:00:47.326812 kubelet[2211]: E0121 01:00:47.326730 2211 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 21 01:00:47.329515 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 21 01:00:47.329663 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 21 01:00:47.330315 systemd[1]: kubelet.service: Consumed 1.038s CPU time, 266.8M memory peak. Jan 21 01:00:47.358117 systemd[1]: Started sshd@1-172.31.28.215:22-68.220.241.50:51344.service - OpenSSH per-connection server daemon (68.220.241.50:51344). Jan 21 01:00:47.779755 sshd[2243]: Accepted publickey for core from 68.220.241.50 port 51344 ssh2: RSA SHA256:ynuLn8tJCPqgpXkJmbCRq4xTnR0LSutdg0yVYFUgOn4 Jan 21 01:00:47.781144 sshd-session[2243]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 21 01:00:47.786535 systemd-logind[1938]: New session 3 of user core. Jan 21 01:00:47.792438 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 21 01:00:48.012338 sshd[2247]: Connection closed by 68.220.241.50 port 51344 Jan 21 01:00:48.012891 sshd-session[2243]: pam_unix(sshd:session): session closed for user core Jan 21 01:00:48.016671 systemd[1]: sshd@1-172.31.28.215:22-68.220.241.50:51344.service: Deactivated successfully. Jan 21 01:00:48.018859 systemd[1]: session-3.scope: Deactivated successfully. Jan 21 01:00:48.021080 systemd-logind[1938]: Session 3 logged out. Waiting for processes to exit. Jan 21 01:00:48.022613 systemd-logind[1938]: Removed session 3. 
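The kubelet exit above ("/var/lib/kubelet/config.yaml: no such file or directory") is the normal state of a node that has not yet been joined to a cluster; that file is typically written by kubeadm init/join or by other provisioning tooling. A trivial preflight sketch for the same check, with the path taken from the error message itself:

    from pathlib import Path

    cfg = Path("/var/lib/kubelet/config.yaml")
    if cfg.is_file():
        print(f"{cfg} present ({cfg.stat().st_size} bytes); kubelet can load its config")
    else:
        print(f"{cfg} missing; kubelet will keep exiting until provisioning writes it")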
Jan 21 01:00:48.104558 systemd[1]: Started sshd@2-172.31.28.215:22-68.220.241.50:51352.service - OpenSSH per-connection server daemon (68.220.241.50:51352). Jan 21 01:00:48.539314 sshd[2253]: Accepted publickey for core from 68.220.241.50 port 51352 ssh2: RSA SHA256:ynuLn8tJCPqgpXkJmbCRq4xTnR0LSutdg0yVYFUgOn4 Jan 21 01:00:48.540591 sshd-session[2253]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 21 01:00:48.545926 systemd-logind[1938]: New session 4 of user core. Jan 21 01:00:48.551463 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 21 01:00:48.772304 sshd[2257]: Connection closed by 68.220.241.50 port 51352 Jan 21 01:00:48.774246 sshd-session[2253]: pam_unix(sshd:session): session closed for user core Jan 21 01:00:48.777944 systemd[1]: sshd@2-172.31.28.215:22-68.220.241.50:51352.service: Deactivated successfully. Jan 21 01:00:48.780123 systemd[1]: session-4.scope: Deactivated successfully. Jan 21 01:00:48.782164 systemd-logind[1938]: Session 4 logged out. Waiting for processes to exit. Jan 21 01:00:48.783172 systemd-logind[1938]: Removed session 4. Jan 21 01:00:48.870093 systemd[1]: Started sshd@3-172.31.28.215:22-68.220.241.50:51368.service - OpenSSH per-connection server daemon (68.220.241.50:51368). Jan 21 01:00:49.298246 sshd[2263]: Accepted publickey for core from 68.220.241.50 port 51368 ssh2: RSA SHA256:ynuLn8tJCPqgpXkJmbCRq4xTnR0LSutdg0yVYFUgOn4 Jan 21 01:00:49.298962 sshd-session[2263]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 21 01:00:49.305744 systemd-logind[1938]: New session 5 of user core. Jan 21 01:00:49.311555 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 21 01:00:49.529782 sshd[2267]: Connection closed by 68.220.241.50 port 51368 Jan 21 01:00:49.531407 sshd-session[2263]: pam_unix(sshd:session): session closed for user core Jan 21 01:00:49.536084 systemd[1]: sshd@3-172.31.28.215:22-68.220.241.50:51368.service: Deactivated successfully. Jan 21 01:00:49.538156 systemd[1]: session-5.scope: Deactivated successfully. Jan 21 01:00:49.539278 systemd-logind[1938]: Session 5 logged out. Waiting for processes to exit. Jan 21 01:00:49.541108 systemd-logind[1938]: Removed session 5. Jan 21 01:00:49.635440 systemd[1]: Started sshd@4-172.31.28.215:22-68.220.241.50:51372.service - OpenSSH per-connection server daemon (68.220.241.50:51372). Jan 21 01:00:50.117999 sshd[2273]: Accepted publickey for core from 68.220.241.50 port 51372 ssh2: RSA SHA256:ynuLn8tJCPqgpXkJmbCRq4xTnR0LSutdg0yVYFUgOn4 Jan 21 01:00:50.118748 sshd-session[2273]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 21 01:00:50.123617 systemd-logind[1938]: New session 6 of user core. Jan 21 01:00:50.130454 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 21 01:00:50.331078 sudo[2278]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 21 01:00:50.331426 sudo[2278]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 21 01:00:50.340337 sudo[2278]: pam_unix(sudo:session): session closed for user root Jan 21 01:00:50.424239 sshd[2277]: Connection closed by 68.220.241.50 port 51372 Jan 21 01:00:50.426189 sshd-session[2273]: pam_unix(sshd:session): session closed for user core Jan 21 01:00:50.431775 systemd[1]: sshd@4-172.31.28.215:22-68.220.241.50:51372.service: Deactivated successfully. Jan 21 01:00:50.434629 systemd[1]: session-6.scope: Deactivated successfully. 
Jan 21 01:00:50.437176 systemd-logind[1938]: Session 6 logged out. Waiting for processes to exit. Jan 21 01:00:50.438829 systemd-logind[1938]: Removed session 6. Jan 21 01:00:50.513326 systemd[1]: Started sshd@5-172.31.28.215:22-68.220.241.50:51374.service - OpenSSH per-connection server daemon (68.220.241.50:51374). Jan 21 01:00:50.947300 sshd[2285]: Accepted publickey for core from 68.220.241.50 port 51374 ssh2: RSA SHA256:ynuLn8tJCPqgpXkJmbCRq4xTnR0LSutdg0yVYFUgOn4 Jan 21 01:00:50.948901 sshd-session[2285]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 21 01:00:50.955277 systemd-logind[1938]: New session 7 of user core. Jan 21 01:00:50.964543 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 21 01:00:51.109409 sudo[2291]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 21 01:00:51.109816 sudo[2291]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 21 01:00:51.114159 sudo[2291]: pam_unix(sudo:session): session closed for user root Jan 21 01:00:51.121296 sudo[2290]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 21 01:00:51.121713 sudo[2290]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 21 01:00:51.130119 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 21 01:00:51.174556 kernel: kauditd_printk_skb: 151 callbacks suppressed Jan 21 01:00:51.174663 kernel: audit: type=1305 audit(1768957251.172:245): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jan 21 01:00:51.172000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jan 21 01:00:51.174760 augenrules[2315]: No rules Jan 21 01:00:51.176744 systemd[1]: audit-rules.service: Deactivated successfully. Jan 21 01:00:51.176978 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 21 01:00:51.178035 sudo[2290]: pam_unix(sudo:session): session closed for user root Jan 21 01:00:51.172000 audit[2315]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffdfb992ab0 a2=420 a3=0 items=0 ppid=2296 pid=2315 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:00:51.183264 kernel: audit: type=1300 audit(1768957251.172:245): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffdfb992ab0 a2=420 a3=0 items=0 ppid=2296 pid=2315 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:00:51.183358 kernel: audit: type=1327 audit(1768957251.172:245): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 21 01:00:51.172000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 21 01:00:51.176000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 21 01:00:51.176000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:51.190100 kernel: audit: type=1130 audit(1768957251.176:246): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:51.190195 kernel: audit: type=1131 audit(1768957251.176:247): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:51.190235 kernel: audit: type=1106 audit(1768957251.176:248): pid=2290 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 21 01:00:51.176000 audit[2290]: USER_END pid=2290 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 21 01:00:51.196322 kernel: audit: type=1104 audit(1768957251.177:249): pid=2290 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 21 01:00:51.177000 audit[2290]: CRED_DISP pid=2290 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 21 01:00:51.257488 sshd[2289]: Connection closed by 68.220.241.50 port 51374 Jan 21 01:00:51.258507 sshd-session[2285]: pam_unix(sshd:session): session closed for user core Jan 21 01:00:51.258000 audit[2285]: USER_END pid=2285 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 01:00:51.266028 systemd[1]: sshd@5-172.31.28.215:22-68.220.241.50:51374.service: Deactivated successfully. Jan 21 01:00:51.266226 kernel: audit: type=1106 audit(1768957251.258:250): pid=2285 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 01:00:51.258000 audit[2285]: CRED_DISP pid=2285 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 01:00:51.268062 systemd[1]: session-7.scope: Deactivated successfully. Jan 21 01:00:51.269096 systemd-logind[1938]: Session 7 logged out. Waiting for processes to exit. Jan 21 01:00:51.271752 systemd-logind[1938]: Removed session 7. 
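The audit PROCTITLE records here and below store the command line as hex with NUL-separated arguments. A minimal decoder sketch, applied to the auditctl record above; the same trick applies to the iptables/ip6tables records that Docker triggers later in the log:

    def decode_proctitle(hex_str):
        # Audit encodes argv as hex bytes with NUL between arguments.
        return " ".join(part.decode() for part in bytes.fromhex(hex_str).split(b"\x00"))

    print(decode_proctitle(
        "2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573"
    ))
    # -> /sbin/auditctl -R /etc/audit/audit.rules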
Jan 21 01:00:51.272343 kernel: audit: type=1104 audit(1768957251.258:251): pid=2285 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 01:00:51.265000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-172.31.28.215:22-68.220.241.50:51374 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:51.277266 kernel: audit: type=1131 audit(1768957251.265:252): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-172.31.28.215:22-68.220.241.50:51374 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:51.344227 systemd[1]: Started sshd@6-172.31.28.215:22-68.220.241.50:51386.service - OpenSSH per-connection server daemon (68.220.241.50:51386). Jan 21 01:00:51.343000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-172.31.28.215:22-68.220.241.50:51386 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:51.770000 audit[2324]: USER_ACCT pid=2324 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 01:00:51.771741 sshd[2324]: Accepted publickey for core from 68.220.241.50 port 51386 ssh2: RSA SHA256:ynuLn8tJCPqgpXkJmbCRq4xTnR0LSutdg0yVYFUgOn4 Jan 21 01:00:51.771000 audit[2324]: CRED_ACQ pid=2324 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 01:00:51.771000 audit[2324]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc61f22db0 a2=3 a3=0 items=0 ppid=1 pid=2324 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:00:51.771000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 21 01:00:51.773327 sshd-session[2324]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 21 01:00:51.778269 systemd-logind[1938]: New session 8 of user core. Jan 21 01:00:51.785480 systemd[1]: Started session-8.scope - Session 8 of User core. 
Jan 21 01:00:51.787000 audit[2324]: USER_START pid=2324 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 01:00:51.788000 audit[2328]: CRED_ACQ pid=2328 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 01:00:51.929000 audit[2329]: USER_ACCT pid=2329 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 21 01:00:51.930957 sudo[2329]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 21 01:00:51.929000 audit[2329]: CRED_REFR pid=2329 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 21 01:00:51.930000 audit[2329]: USER_START pid=2329 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 21 01:00:51.931287 sudo[2329]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 21 01:00:53.194356 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 21 01:00:53.215677 (dockerd)[2347]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 21 01:00:54.306357 dockerd[2347]: time="2026-01-21T01:00:54.306064407Z" level=info msg="Starting up" Jan 21 01:00:54.308569 dockerd[2347]: time="2026-01-21T01:00:54.308524023Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jan 21 01:00:54.320613 dockerd[2347]: time="2026-01-21T01:00:54.320566507Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Jan 21 01:00:54.404394 dockerd[2347]: time="2026-01-21T01:00:54.404349530Z" level=info msg="Loading containers: start." 
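Right after "Loading containers: start.", dockerd creates its iptables and ip6tables chains; the NETFILTER_CFG audit records that follow capture each call. A sketch that checks for those chains on a running host; the chain names are the ones created in the records below, and the check simply shells out to iptables, which assumes root and the iptables binary on PATH:

    import subprocess

    CHAINS = {
        "nat": ["DOCKER"],
        "filter": ["DOCKER", "DOCKER-FORWARD", "DOCKER-BRIDGE", "DOCKER-CT",
                   "DOCKER-ISOLATION-STAGE-1", "DOCKER-ISOLATION-STAGE-2"],
    }

    for table, chains in CHAINS.items():
        for chain in chains:
            r = subprocess.run(
                ["iptables", "--wait", "-t", table, "-S", chain],
                capture_output=True, text=True,
            )
            print(f"{table}/{chain}: {'present' if r.returncode == 0 else 'missing'}")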
Jan 21 01:00:54.419240 kernel: Initializing XFRM netlink socket Jan 21 01:00:54.547000 audit[2395]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=2395 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 01:00:54.547000 audit[2395]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7fffb622e030 a2=0 a3=0 items=0 ppid=2347 pid=2395 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:00:54.547000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 21 01:00:54.549000 audit[2397]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=2397 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 01:00:54.549000 audit[2397]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7fff0e3e4ae0 a2=0 a3=0 items=0 ppid=2347 pid=2397 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:00:54.549000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 21 01:00:54.551000 audit[2399]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=2399 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 01:00:54.551000 audit[2399]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd6bc417f0 a2=0 a3=0 items=0 ppid=2347 pid=2399 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:00:54.551000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 21 01:00:54.554000 audit[2401]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=2401 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 01:00:54.554000 audit[2401]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe385886f0 a2=0 a3=0 items=0 ppid=2347 pid=2401 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:00:54.554000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 21 01:00:54.556000 audit[2403]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_chain pid=2403 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 01:00:54.556000 audit[2403]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fffedbab8c0 a2=0 a3=0 items=0 ppid=2347 pid=2403 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:00:54.556000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 21 01:00:54.558000 audit[2405]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_chain pid=2405 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 01:00:54.558000 audit[2405]: SYSCALL arch=c000003e syscall=46 
success=yes exit=112 a0=3 a1=7ffe16e31d20 a2=0 a3=0 items=0 ppid=2347 pid=2405 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:00:54.558000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 21 01:00:54.561000 audit[2407]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=2407 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 01:00:54.561000 audit[2407]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7fffb3c57870 a2=0 a3=0 items=0 ppid=2347 pid=2407 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:00:54.561000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 21 01:00:54.563000 audit[2409]: NETFILTER_CFG table=nat:9 family=2 entries=2 op=nft_register_chain pid=2409 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 01:00:54.563000 audit[2409]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7ffeae29d6b0 a2=0 a3=0 items=0 ppid=2347 pid=2409 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:00:54.563000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 21 01:00:54.592000 audit[2412]: NETFILTER_CFG table=nat:10 family=2 entries=2 op=nft_register_chain pid=2412 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 01:00:54.592000 audit[2412]: SYSCALL arch=c000003e syscall=46 success=yes exit=472 a0=3 a1=7fff129cdd70 a2=0 a3=0 items=0 ppid=2347 pid=2412 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:00:54.592000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Jan 21 01:00:54.596000 audit[2414]: NETFILTER_CFG table=filter:11 family=2 entries=2 op=nft_register_chain pid=2414 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 01:00:54.596000 audit[2414]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffcd5fe9750 a2=0 a3=0 items=0 ppid=2347 pid=2414 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:00:54.596000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 21 01:00:54.599000 audit[2416]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=2416 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 01:00:54.599000 audit[2416]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7ffe8521a0a0 a2=0 
a3=0 items=0 ppid=2347 pid=2416 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:00:54.599000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 21 01:00:54.601000 audit[2418]: NETFILTER_CFG table=filter:13 family=2 entries=1 op=nft_register_rule pid=2418 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 01:00:54.601000 audit[2418]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7ffd53ea77b0 a2=0 a3=0 items=0 ppid=2347 pid=2418 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:00:54.601000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 21 01:00:54.603000 audit[2420]: NETFILTER_CFG table=filter:14 family=2 entries=1 op=nft_register_rule pid=2420 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 01:00:54.603000 audit[2420]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7fff6fe8eae0 a2=0 a3=0 items=0 ppid=2347 pid=2420 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:00:54.603000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 21 01:00:54.685000 audit[2450]: NETFILTER_CFG table=nat:15 family=10 entries=2 op=nft_register_chain pid=2450 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 01:00:54.685000 audit[2450]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7fffb5534c40 a2=0 a3=0 items=0 ppid=2347 pid=2450 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:00:54.685000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 21 01:00:54.688000 audit[2452]: NETFILTER_CFG table=filter:16 family=10 entries=2 op=nft_register_chain pid=2452 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 01:00:54.688000 audit[2452]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffefddb0cb0 a2=0 a3=0 items=0 ppid=2347 pid=2452 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:00:54.688000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 21 01:00:54.690000 audit[2454]: NETFILTER_CFG table=filter:17 family=10 entries=1 op=nft_register_chain pid=2454 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 01:00:54.690000 audit[2454]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe755a9070 a2=0 a3=0 items=0 ppid=2347 pid=2454 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 21 01:00:54.690000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 21 01:00:54.692000 audit[2456]: NETFILTER_CFG table=filter:18 family=10 entries=1 op=nft_register_chain pid=2456 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 01:00:54.692000 audit[2456]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe8de9d6a0 a2=0 a3=0 items=0 ppid=2347 pid=2456 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:00:54.692000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 21 01:00:54.694000 audit[2458]: NETFILTER_CFG table=filter:19 family=10 entries=1 op=nft_register_chain pid=2458 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 01:00:54.694000 audit[2458]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffe04ebf060 a2=0 a3=0 items=0 ppid=2347 pid=2458 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:00:54.694000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 21 01:00:54.697000 audit[2460]: NETFILTER_CFG table=filter:20 family=10 entries=1 op=nft_register_chain pid=2460 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 01:00:54.697000 audit[2460]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffc7f510f30 a2=0 a3=0 items=0 ppid=2347 pid=2460 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:00:54.697000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 21 01:00:54.699000 audit[2462]: NETFILTER_CFG table=filter:21 family=10 entries=1 op=nft_register_chain pid=2462 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 01:00:54.699000 audit[2462]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffdd99a7750 a2=0 a3=0 items=0 ppid=2347 pid=2462 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:00:54.699000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 21 01:00:54.701000 audit[2464]: NETFILTER_CFG table=nat:22 family=10 entries=2 op=nft_register_chain pid=2464 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 01:00:54.701000 audit[2464]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7ffc189d7530 a2=0 a3=0 items=0 ppid=2347 pid=2464 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:00:54.701000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 21 01:00:54.704000 audit[2466]: NETFILTER_CFG table=nat:23 family=10 entries=2 op=nft_register_chain pid=2466 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 01:00:54.704000 audit[2466]: SYSCALL arch=c000003e syscall=46 success=yes exit=484 a0=3 a1=7ffe26fdbc60 a2=0 a3=0 items=0 ppid=2347 pid=2466 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:00:54.704000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003A3A312F313238 Jan 21 01:00:54.707000 audit[2468]: NETFILTER_CFG table=filter:24 family=10 entries=2 op=nft_register_chain pid=2468 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 01:00:54.707000 audit[2468]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffd7c42bf20 a2=0 a3=0 items=0 ppid=2347 pid=2468 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:00:54.707000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 21 01:00:54.710000 audit[2470]: NETFILTER_CFG table=filter:25 family=10 entries=1 op=nft_register_rule pid=2470 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 01:00:54.710000 audit[2470]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7ffd12bfdb20 a2=0 a3=0 items=0 ppid=2347 pid=2470 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:00:54.710000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 21 01:00:54.712000 audit[2472]: NETFILTER_CFG table=filter:26 family=10 entries=1 op=nft_register_rule pid=2472 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 01:00:54.712000 audit[2472]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7ffd074182b0 a2=0 a3=0 items=0 ppid=2347 pid=2472 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:00:54.712000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 21 01:00:54.715000 audit[2474]: NETFILTER_CFG table=filter:27 family=10 entries=1 op=nft_register_rule pid=2474 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 01:00:54.715000 audit[2474]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7ffe26138600 a2=0 a3=0 items=0 ppid=2347 pid=2474 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:00:54.715000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 21 01:00:54.721000 audit[2479]: NETFILTER_CFG table=filter:28 family=2 entries=1 op=nft_register_chain pid=2479 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 01:00:54.721000 audit[2479]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffcb4ce6410 a2=0 a3=0 items=0 ppid=2347 pid=2479 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:00:54.721000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 21 01:00:54.724000 audit[2481]: NETFILTER_CFG table=filter:29 family=2 entries=1 op=nft_register_rule pid=2481 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 01:00:54.724000 audit[2481]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7fff78d208a0 a2=0 a3=0 items=0 ppid=2347 pid=2481 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:00:54.724000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 21 01:00:54.726000 audit[2483]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=2483 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 01:00:54.726000 audit[2483]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffd428a9da0 a2=0 a3=0 items=0 ppid=2347 pid=2483 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:00:54.726000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 21 01:00:54.728000 audit[2485]: NETFILTER_CFG table=filter:31 family=10 entries=1 op=nft_register_chain pid=2485 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 01:00:54.728000 audit[2485]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffc5c268fb0 a2=0 a3=0 items=0 ppid=2347 pid=2485 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:00:54.728000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 21 01:00:54.731000 audit[2487]: NETFILTER_CFG table=filter:32 family=10 entries=1 op=nft_register_rule pid=2487 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 01:00:54.731000 audit[2487]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffe03ce2b30 a2=0 a3=0 items=0 ppid=2347 pid=2487 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:00:54.731000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 21 01:00:54.734000 audit[2489]: NETFILTER_CFG table=filter:33 family=10 entries=1 op=nft_register_rule pid=2489 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 01:00:54.734000 audit[2489]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7fff8822ccc0 a2=0 a3=0 items=0 ppid=2347 pid=2489 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:00:54.734000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 21 01:00:54.752844 (udev-worker)[2368]: Network interface NamePolicy= disabled on kernel command line. Jan 21 01:00:54.767000 audit[2495]: NETFILTER_CFG table=nat:34 family=2 entries=2 op=nft_register_chain pid=2495 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 01:00:54.767000 audit[2495]: SYSCALL arch=c000003e syscall=46 success=yes exit=520 a0=3 a1=7ffea0377f50 a2=0 a3=0 items=0 ppid=2347 pid=2495 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:00:54.767000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Jan 21 01:00:54.770000 audit[2497]: NETFILTER_CFG table=nat:35 family=2 entries=1 op=nft_register_rule pid=2497 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 01:00:54.770000 audit[2497]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7ffd8585b9d0 a2=0 a3=0 items=0 ppid=2347 pid=2497 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:00:54.770000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Jan 21 01:00:54.780000 audit[2505]: NETFILTER_CFG table=filter:36 family=2 entries=1 op=nft_register_rule pid=2505 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 01:00:54.780000 audit[2505]: SYSCALL arch=c000003e syscall=46 success=yes exit=300 a0=3 a1=7fff4f71e150 a2=0 a3=0 items=0 ppid=2347 pid=2505 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:00:54.780000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D464F5257415244002D6900646F636B657230002D6A00414343455054 Jan 21 01:00:54.822000 audit[2511]: NETFILTER_CFG table=filter:37 family=2 entries=1 op=nft_register_rule pid=2511 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 01:00:54.822000 audit[2511]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffd9ba23c20 a2=0 a3=0 items=0 ppid=2347 pid=2511 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:00:54.822000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45520000002D6900646F636B657230002D6F00646F636B657230002D6A0044524F50 Jan 21 01:00:54.826000 audit[2513]: NETFILTER_CFG table=filter:38 
family=2 entries=1 op=nft_register_rule pid=2513 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 01:00:54.826000 audit[2513]: SYSCALL arch=c000003e syscall=46 success=yes exit=512 a0=3 a1=7ffd972c6eb0 a2=0 a3=0 items=0 ppid=2347 pid=2513 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:00:54.826000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D4354002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Jan 21 01:00:54.829000 audit[2515]: NETFILTER_CFG table=filter:39 family=2 entries=1 op=nft_register_rule pid=2515 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 01:00:54.829000 audit[2515]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffef7bf1a30 a2=0 a3=0 items=0 ppid=2347 pid=2515 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:00:54.829000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D425249444745002D6F00646F636B657230002D6A00444F434B4552 Jan 21 01:00:54.832000 audit[2517]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_rule pid=2517 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 01:00:54.832000 audit[2517]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7ffc3f03b290 a2=0 a3=0 items=0 ppid=2347 pid=2517 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:00:54.832000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 21 01:00:54.834000 audit[2519]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_rule pid=2519 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 01:00:54.834000 audit[2519]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffdf02ca750 a2=0 a3=0 items=0 ppid=2347 pid=2519 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:00:54.834000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Jan 21 01:00:54.836593 systemd-networkd[1555]: docker0: Link UP Jan 21 01:00:54.847617 dockerd[2347]: time="2026-01-21T01:00:54.847544870Z" level=info msg="Loading containers: done." Jan 21 01:00:54.864334 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck4287028960-merged.mount: Deactivated successfully. 
Jan 21 01:00:54.898096 dockerd[2347]: time="2026-01-21T01:00:54.898046973Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 21 01:00:54.898348 dockerd[2347]: time="2026-01-21T01:00:54.898137750Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jan 21 01:00:54.898348 dockerd[2347]: time="2026-01-21T01:00:54.898245693Z" level=info msg="Initializing buildkit" Jan 21 01:00:54.936660 dockerd[2347]: time="2026-01-21T01:00:54.936620779Z" level=info msg="Completed buildkit initialization" Jan 21 01:00:54.943441 dockerd[2347]: time="2026-01-21T01:00:54.943396564Z" level=info msg="Daemon has completed initialization" Jan 21 01:00:54.944060 dockerd[2347]: time="2026-01-21T01:00:54.943459774Z" level=info msg="API listen on /run/docker.sock" Jan 21 01:00:54.943770 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 21 01:00:54.943000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:56.769076 containerd[1962]: time="2026-01-21T01:00:56.769033771Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\"" Jan 21 01:00:57.391939 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 21 01:00:57.395440 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 21 01:00:57.410711 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1792939230.mount: Deactivated successfully. Jan 21 01:00:57.706989 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 21 01:00:57.715653 kernel: kauditd_printk_skb: 132 callbacks suppressed Jan 21 01:00:57.715803 kernel: audit: type=1130 audit(1768957257.708:303): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:57.708000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:57.722673 (kubelet)[2580]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 21 01:00:57.815107 kubelet[2580]: E0121 01:00:57.815024 2580 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 21 01:00:57.820775 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 21 01:00:57.820977 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 21 01:00:57.820000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 21 01:00:57.821738 systemd[1]: kubelet.service: Consumed 221ms CPU time, 110.6M memory peak. 
Jan 21 01:00:57.826252 kernel: audit: type=1131 audit(1768957257.820:304): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 21 01:00:58.750146 containerd[1962]: time="2026-01-21T01:00:58.750095084Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 01:00:58.751166 containerd[1962]: time="2026-01-21T01:00:58.750995747Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.11: active requests=0, bytes read=27401903" Jan 21 01:00:58.752231 containerd[1962]: time="2026-01-21T01:00:58.752181664Z" level=info msg="ImageCreate event name:\"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 01:00:58.754517 containerd[1962]: time="2026-01-21T01:00:58.754486143Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 01:00:58.755687 containerd[1962]: time="2026-01-21T01:00:58.755440919Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.11\" with image id \"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\", size \"29067246\" in 1.986370394s" Jan 21 01:00:58.755687 containerd[1962]: time="2026-01-21T01:00:58.755472728Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\" returns image reference \"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\"" Jan 21 01:00:58.756293 containerd[1962]: time="2026-01-21T01:00:58.756272428Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\"" Jan 21 01:01:00.462329 containerd[1962]: time="2026-01-21T01:01:00.462269475Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 01:01:00.463666 containerd[1962]: time="2026-01-21T01:01:00.463452918Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.11: active requests=0, bytes read=24985199" Jan 21 01:01:00.464765 containerd[1962]: time="2026-01-21T01:01:00.464731702Z" level=info msg="ImageCreate event name:\"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 01:01:00.467297 containerd[1962]: time="2026-01-21T01:01:00.467268922Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 01:01:00.468191 containerd[1962]: time="2026-01-21T01:01:00.468160652Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.11\" with image id \"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\", size \"26650388\" in 1.711794484s" Jan 21 
01:01:00.468273 containerd[1962]: time="2026-01-21T01:01:00.468219409Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\" returns image reference \"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\"" Jan 21 01:01:00.468949 containerd[1962]: time="2026-01-21T01:01:00.468901276Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\"" Jan 21 01:01:03.459707 containerd[1962]: time="2026-01-21T01:01:03.459641536Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 01:01:03.463455 containerd[1962]: time="2026-01-21T01:01:03.463172314Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.11: active requests=0, bytes read=19396939" Jan 21 01:01:03.467966 containerd[1962]: time="2026-01-21T01:01:03.467863042Z" level=info msg="ImageCreate event name:\"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 01:01:03.476228 containerd[1962]: time="2026-01-21T01:01:03.476107193Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 01:01:03.478266 containerd[1962]: time="2026-01-21T01:01:03.477591708Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.11\" with image id \"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\", size \"21062128\" in 3.008652514s" Jan 21 01:01:03.478266 containerd[1962]: time="2026-01-21T01:01:03.477641331Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\" returns image reference \"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\"" Jan 21 01:01:03.478777 containerd[1962]: time="2026-01-21T01:01:03.478716463Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\"" Jan 21 01:01:04.613444 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1133333009.mount: Deactivated successfully. 
Jan 21 01:01:05.562389 containerd[1962]: time="2026-01-21T01:01:05.562328269Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 01:01:05.607270 containerd[1962]: time="2026-01-21T01:01:05.607191773Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.11: active requests=0, bytes read=0" Jan 21 01:01:05.641933 containerd[1962]: time="2026-01-21T01:01:05.641834690Z" level=info msg="ImageCreate event name:\"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 01:01:05.693105 containerd[1962]: time="2026-01-21T01:01:05.693033061Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 01:01:05.694189 containerd[1962]: time="2026-01-21T01:01:05.693829551Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.11\" with image id \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\", repo tag \"registry.k8s.io/kube-proxy:v1.32.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\", size \"31160918\" in 2.214953982s" Jan 21 01:01:05.694189 containerd[1962]: time="2026-01-21T01:01:05.693871620Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\" returns image reference \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\"" Jan 21 01:01:05.701553 containerd[1962]: time="2026-01-21T01:01:05.694662463Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jan 21 01:01:06.510097 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2016678223.mount: Deactivated successfully. 
Jan 21 01:01:07.456411 containerd[1962]: time="2026-01-21T01:01:07.456355206Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 01:01:07.457872 containerd[1962]: time="2026-01-21T01:01:07.457839109Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=17569900" Jan 21 01:01:07.459477 containerd[1962]: time="2026-01-21T01:01:07.459428804Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 01:01:07.462413 containerd[1962]: time="2026-01-21T01:01:07.462363067Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 01:01:07.463483 containerd[1962]: time="2026-01-21T01:01:07.463261219Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.768560305s" Jan 21 01:01:07.463483 containerd[1962]: time="2026-01-21T01:01:07.463290030Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jan 21 01:01:07.463760 containerd[1962]: time="2026-01-21T01:01:07.463746546Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 21 01:01:07.891337 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 21 01:01:07.893372 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 21 01:01:07.901197 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1756003990.mount: Deactivated successfully. 
Jan 21 01:01:07.908224 containerd[1962]: time="2026-01-21T01:01:07.908083609Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 21 01:01:07.910096 containerd[1962]: time="2026-01-21T01:01:07.910066413Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Jan 21 01:01:07.911410 containerd[1962]: time="2026-01-21T01:01:07.911365105Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 21 01:01:07.914510 containerd[1962]: time="2026-01-21T01:01:07.914467016Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 21 01:01:07.916744 containerd[1962]: time="2026-01-21T01:01:07.916700400Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 452.869784ms" Jan 21 01:01:07.916744 containerd[1962]: time="2026-01-21T01:01:07.916744828Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 21 01:01:07.919246 containerd[1962]: time="2026-01-21T01:01:07.918408077Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jan 21 01:01:08.163519 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 21 01:01:08.163000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:01:08.169348 kernel: audit: type=1130 audit(1768957268.163:305): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:01:08.175561 (kubelet)[2712]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 21 01:01:08.242472 kubelet[2712]: E0121 01:01:08.242425 2712 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 21 01:01:08.245019 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 21 01:01:08.245169 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 21 01:01:08.244000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 21 01:01:08.245575 systemd[1]: kubelet.service: Consumed 179ms CPU time, 110.3M memory peak. 
Jan 21 01:01:08.250254 kernel: audit: type=1131 audit(1768957268.244:306): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 21 01:01:08.440432 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1641053275.mount: Deactivated successfully. Jan 21 01:01:11.254613 containerd[1962]: time="2026-01-21T01:01:11.254546435Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 01:01:11.255595 containerd[1962]: time="2026-01-21T01:01:11.255510467Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=55728979" Jan 21 01:01:11.256660 containerd[1962]: time="2026-01-21T01:01:11.256625936Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 01:01:11.259537 containerd[1962]: time="2026-01-21T01:01:11.259483407Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 01:01:11.261281 containerd[1962]: time="2026-01-21T01:01:11.261064610Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 3.342621938s" Jan 21 01:01:11.261281 containerd[1962]: time="2026-01-21T01:01:11.261097528Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Jan 21 01:01:11.349778 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Jan 21 01:01:11.349000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hostnamed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:01:11.356668 kernel: audit: type=1131 audit(1768957271.349:307): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hostnamed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:01:11.369000 audit: BPF prog-id=66 op=UNLOAD Jan 21 01:01:11.372250 kernel: audit: type=1334 audit(1768957271.369:308): prog-id=66 op=UNLOAD Jan 21 01:01:14.053289 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 21 01:01:14.053564 systemd[1]: kubelet.service: Consumed 179ms CPU time, 110.3M memory peak. Jan 21 01:01:14.052000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:01:14.062688 kernel: audit: type=1130 audit(1768957274.052:309): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 21 01:01:14.062796 kernel: audit: type=1131 audit(1768957274.052:310): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:01:14.052000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:01:14.060549 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 21 01:01:14.097053 systemd[1]: Reload requested from client PID 2803 ('systemctl') (unit session-8.scope)... Jan 21 01:01:14.097078 systemd[1]: Reloading... Jan 21 01:01:14.232243 zram_generator::config[2856]: No configuration found. Jan 21 01:01:14.519363 systemd[1]: Reloading finished in 421 ms. Jan 21 01:01:14.552266 kernel: audit: type=1334 audit(1768957274.546:311): prog-id=70 op=LOAD Jan 21 01:01:14.552370 kernel: audit: type=1334 audit(1768957274.549:312): prog-id=59 op=UNLOAD Jan 21 01:01:14.546000 audit: BPF prog-id=70 op=LOAD Jan 21 01:01:14.549000 audit: BPF prog-id=59 op=UNLOAD Jan 21 01:01:14.549000 audit: BPF prog-id=71 op=LOAD Jan 21 01:01:14.556390 kernel: audit: type=1334 audit(1768957274.549:313): prog-id=71 op=LOAD Jan 21 01:01:14.556471 kernel: audit: type=1334 audit(1768957274.549:314): prog-id=72 op=LOAD Jan 21 01:01:14.549000 audit: BPF prog-id=72 op=LOAD Jan 21 01:01:14.558418 kernel: audit: type=1334 audit(1768957274.549:315): prog-id=60 op=UNLOAD Jan 21 01:01:14.549000 audit: BPF prog-id=60 op=UNLOAD Jan 21 01:01:14.560751 kernel: audit: type=1334 audit(1768957274.549:316): prog-id=61 op=UNLOAD Jan 21 01:01:14.549000 audit: BPF prog-id=61 op=UNLOAD Jan 21 01:01:14.565439 kernel: audit: type=1334 audit(1768957274.552:317): prog-id=73 op=LOAD Jan 21 01:01:14.565527 kernel: audit: type=1334 audit(1768957274.552:318): prog-id=47 op=UNLOAD Jan 21 01:01:14.552000 audit: BPF prog-id=73 op=LOAD Jan 21 01:01:14.552000 audit: BPF prog-id=47 op=UNLOAD Jan 21 01:01:14.552000 audit: BPF prog-id=74 op=LOAD Jan 21 01:01:14.552000 audit: BPF prog-id=75 op=LOAD Jan 21 01:01:14.552000 audit: BPF prog-id=48 op=UNLOAD Jan 21 01:01:14.552000 audit: BPF prog-id=49 op=UNLOAD Jan 21 01:01:14.553000 audit: BPF prog-id=76 op=LOAD Jan 21 01:01:14.553000 audit: BPF prog-id=56 op=UNLOAD Jan 21 01:01:14.553000 audit: BPF prog-id=77 op=LOAD Jan 21 01:01:14.553000 audit: BPF prog-id=78 op=LOAD Jan 21 01:01:14.553000 audit: BPF prog-id=57 op=UNLOAD Jan 21 01:01:14.553000 audit: BPF prog-id=58 op=UNLOAD Jan 21 01:01:14.554000 audit: BPF prog-id=79 op=LOAD Jan 21 01:01:14.554000 audit: BPF prog-id=62 op=UNLOAD Jan 21 01:01:14.555000 audit: BPF prog-id=80 op=LOAD Jan 21 01:01:14.555000 audit: BPF prog-id=69 op=UNLOAD Jan 21 01:01:14.556000 audit: BPF prog-id=81 op=LOAD Jan 21 01:01:14.556000 audit: BPF prog-id=63 op=UNLOAD Jan 21 01:01:14.556000 audit: BPF prog-id=82 op=LOAD Jan 21 01:01:14.556000 audit: BPF prog-id=83 op=LOAD Jan 21 01:01:14.556000 audit: BPF prog-id=64 op=UNLOAD Jan 21 01:01:14.556000 audit: BPF prog-id=65 op=UNLOAD Jan 21 01:01:14.557000 audit: BPF prog-id=84 op=LOAD Jan 21 01:01:14.557000 audit: BPF prog-id=55 op=UNLOAD Jan 21 01:01:14.558000 audit: BPF prog-id=85 op=LOAD Jan 21 01:01:14.558000 audit: BPF prog-id=86 op=LOAD Jan 21 01:01:14.558000 audit: BPF prog-id=53 op=UNLOAD Jan 21 01:01:14.558000 audit: BPF prog-id=54 op=UNLOAD Jan 21 01:01:14.559000 audit: BPF prog-id=87 op=LOAD 
Jan 21 01:01:14.559000 audit: BPF prog-id=50 op=UNLOAD Jan 21 01:01:14.559000 audit: BPF prog-id=88 op=LOAD Jan 21 01:01:14.559000 audit: BPF prog-id=89 op=LOAD Jan 21 01:01:14.559000 audit: BPF prog-id=51 op=UNLOAD Jan 21 01:01:14.559000 audit: BPF prog-id=52 op=UNLOAD Jan 21 01:01:14.572812 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 21 01:01:14.572896 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 21 01:01:14.573166 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 21 01:01:14.572000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 21 01:01:14.573289 systemd[1]: kubelet.service: Consumed 134ms CPU time, 98.5M memory peak. Jan 21 01:01:14.574996 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 21 01:01:14.840801 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 21 01:01:14.841000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:01:14.852566 (kubelet)[2911]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 21 01:01:14.915242 kubelet[2911]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 21 01:01:14.915242 kubelet[2911]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 21 01:01:14.915242 kubelet[2911]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 21 01:01:14.915242 kubelet[2911]: I0121 01:01:14.914908 2911 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 21 01:01:15.554740 kubelet[2911]: I0121 01:01:15.554673 2911 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 21 01:01:15.554740 kubelet[2911]: I0121 01:01:15.554716 2911 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 21 01:01:15.555057 kubelet[2911]: I0121 01:01:15.555037 2911 server.go:954] "Client rotation is on, will bootstrap in background" Jan 21 01:01:15.613234 kubelet[2911]: E0121 01:01:15.612631 2911 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.28.215:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.28.215:6443: connect: connection refused" logger="UnhandledError" Jan 21 01:01:15.613758 kubelet[2911]: I0121 01:01:15.613742 2911 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 21 01:01:15.626307 kubelet[2911]: I0121 01:01:15.626260 2911 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 21 01:01:15.632992 kubelet[2911]: I0121 01:01:15.632959 2911 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 21 01:01:15.638227 kubelet[2911]: I0121 01:01:15.638157 2911 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 21 01:01:15.638429 kubelet[2911]: I0121 01:01:15.638222 2911 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-28-215","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 21 01:01:15.640505 kubelet[2911]: I0121 01:01:15.640407 2911 topology_manager.go:138] "Creating topology manager with none policy" Jan 21 
01:01:15.640505 kubelet[2911]: I0121 01:01:15.640439 2911 container_manager_linux.go:304] "Creating device plugin manager" Jan 21 01:01:15.641931 kubelet[2911]: I0121 01:01:15.641904 2911 state_mem.go:36] "Initialized new in-memory state store" Jan 21 01:01:15.646832 kubelet[2911]: I0121 01:01:15.646741 2911 kubelet.go:446] "Attempting to sync node with API server" Jan 21 01:01:15.646832 kubelet[2911]: I0121 01:01:15.646785 2911 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 21 01:01:15.648945 kubelet[2911]: I0121 01:01:15.648907 2911 kubelet.go:352] "Adding apiserver pod source" Jan 21 01:01:15.648945 kubelet[2911]: I0121 01:01:15.648935 2911 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 21 01:01:15.658479 kubelet[2911]: I0121 01:01:15.658057 2911 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 21 01:01:15.661721 kubelet[2911]: I0121 01:01:15.661675 2911 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 21 01:01:15.661856 kubelet[2911]: W0121 01:01:15.661744 2911 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 21 01:01:15.662442 kubelet[2911]: I0121 01:01:15.662279 2911 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 21 01:01:15.662442 kubelet[2911]: I0121 01:01:15.662315 2911 server.go:1287] "Started kubelet" Jan 21 01:01:15.662514 kubelet[2911]: W0121 01:01:15.662449 2911 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.28.215:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-28-215&limit=500&resourceVersion=0": dial tcp 172.31.28.215:6443: connect: connection refused Jan 21 01:01:15.662514 kubelet[2911]: E0121 01:01:15.662497 2911 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.28.215:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-28-215&limit=500&resourceVersion=0\": dial tcp 172.31.28.215:6443: connect: connection refused" logger="UnhandledError" Jan 21 01:01:15.669016 kubelet[2911]: W0121 01:01:15.668189 2911 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.28.215:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.28.215:6443: connect: connection refused Jan 21 01:01:15.669016 kubelet[2911]: E0121 01:01:15.668268 2911 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.28.215:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.28.215:6443: connect: connection refused" logger="UnhandledError" Jan 21 01:01:15.669016 kubelet[2911]: I0121 01:01:15.668364 2911 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 21 01:01:15.670841 kubelet[2911]: I0121 01:01:15.670759 2911 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 21 01:01:15.671363 kubelet[2911]: I0121 01:01:15.671153 2911 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 21 01:01:15.676360 kubelet[2911]: I0121 01:01:15.676327 2911 
fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 21 01:01:15.685234 kubelet[2911]: E0121 01:01:15.679201 2911 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.28.215:6443/api/v1/namespaces/default/events\": dial tcp 172.31.28.215:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-28-215.188c994516b1a115 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-28-215,UID:ip-172-31-28-215,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-28-215,},FirstTimestamp:2026-01-21 01:01:15.662295317 +0000 UTC m=+0.805615819,LastTimestamp:2026-01-21 01:01:15.662295317 +0000 UTC m=+0.805615819,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-28-215,}" Jan 21 01:01:15.684000 audit[2922]: NETFILTER_CFG table=mangle:42 family=2 entries=2 op=nft_register_chain pid=2922 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 01:01:15.684000 audit[2922]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffd00b372f0 a2=0 a3=0 items=0 ppid=2911 pid=2922 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:01:15.684000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 21 01:01:15.688000 audit[2923]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_chain pid=2923 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 01:01:15.688000 audit[2923]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdf8f41080 a2=0 a3=0 items=0 ppid=2911 pid=2923 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:01:15.688000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 21 01:01:15.700866 kubelet[2911]: I0121 01:01:15.693765 2911 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 21 01:01:15.700866 kubelet[2911]: E0121 01:01:15.694088 2911 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-28-215\" not found" Jan 21 01:01:15.700866 kubelet[2911]: I0121 01:01:15.697458 2911 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 21 01:01:15.700866 kubelet[2911]: I0121 01:01:15.697520 2911 reconciler.go:26] "Reconciler: start to sync state" Jan 21 01:01:15.694000 audit[2925]: NETFILTER_CFG table=filter:44 family=2 entries=2 op=nft_register_chain pid=2925 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 01:01:15.694000 audit[2925]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffffe96e3d0 a2=0 a3=0 items=0 ppid=2911 pid=2925 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:01:15.694000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C 
Jan 21 01:01:15.697000 audit[2927]: NETFILTER_CFG table=filter:45 family=2 entries=2 op=nft_register_chain pid=2927 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 01:01:15.697000 audit[2927]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7fff3b7786c0 a2=0 a3=0 items=0 ppid=2911 pid=2927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:01:15.697000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 21 01:01:15.703396 kubelet[2911]: I0121 01:01:15.702958 2911 server.go:479] "Adding debug handlers to kubelet server" Jan 21 01:01:15.705758 kubelet[2911]: I0121 01:01:15.705734 2911 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 21 01:01:15.716242 kubelet[2911]: E0121 01:01:15.715629 2911 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.28.215:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-28-215?timeout=10s\": dial tcp 172.31.28.215:6443: connect: connection refused" interval="200ms" Jan 21 01:01:15.716242 kubelet[2911]: W0121 01:01:15.715810 2911 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.28.215:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.28.215:6443: connect: connection refused Jan 21 01:01:15.716242 kubelet[2911]: E0121 01:01:15.715886 2911 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.28.215:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.28.215:6443: connect: connection refused" logger="UnhandledError" Jan 21 01:01:15.724273 kubelet[2911]: I0121 01:01:15.724234 2911 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Jan 21 01:01:15.722000 audit[2930]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2930 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 01:01:15.722000 audit[2930]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7fff00aeee30 a2=0 a3=0 items=0 ppid=2911 pid=2930 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:01:15.722000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Jan 21 01:01:15.725000 audit[2931]: NETFILTER_CFG table=mangle:47 family=2 entries=1 op=nft_register_chain pid=2931 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 01:01:15.725000 audit[2931]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd29978a40 a2=0 a3=0 items=0 ppid=2911 pid=2931 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:01:15.725000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jan 21 01:01:15.726000 audit[2932]: NETFILTER_CFG table=mangle:48 family=10 entries=2 op=nft_register_chain pid=2932 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 01:01:15.726000 audit[2932]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffe4dd01c90 a2=0 a3=0 items=0 ppid=2911 pid=2932 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:01:15.726000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 21 01:01:15.729001 kubelet[2911]: I0121 01:01:15.728581 2911 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 21 01:01:15.729001 kubelet[2911]: I0121 01:01:15.728612 2911 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 21 01:01:15.729001 kubelet[2911]: I0121 01:01:15.728639 2911 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jan 21 01:01:15.729001 kubelet[2911]: I0121 01:01:15.728650 2911 kubelet.go:2382] "Starting kubelet main sync loop" Jan 21 01:01:15.729001 kubelet[2911]: E0121 01:01:15.728707 2911 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 21 01:01:15.728000 audit[2935]: NETFILTER_CFG table=mangle:49 family=10 entries=1 op=nft_register_chain pid=2935 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 01:01:15.728000 audit[2935]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe875f5400 a2=0 a3=0 items=0 ppid=2911 pid=2935 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:01:15.728000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jan 21 01:01:15.729000 audit[2934]: NETFILTER_CFG table=nat:50 family=2 entries=1 op=nft_register_chain pid=2934 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 01:01:15.729000 audit[2934]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdb0e230b0 a2=0 a3=0 items=0 ppid=2911 pid=2934 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:01:15.729000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 21 01:01:15.730000 audit[2936]: NETFILTER_CFG table=nat:51 family=10 entries=1 op=nft_register_chain pid=2936 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 01:01:15.730000 audit[2936]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc06783440 a2=0 a3=0 items=0 ppid=2911 pid=2936 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:01:15.730000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 21 01:01:15.731000 audit[2937]: NETFILTER_CFG table=filter:52 family=2 entries=1 op=nft_register_chain pid=2937 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 01:01:15.731000 audit[2937]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe70bd4ed0 a2=0 a3=0 items=0 ppid=2911 pid=2937 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:01:15.731000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 21 01:01:15.733165 kubelet[2911]: I0121 01:01:15.733135 2911 factory.go:221] Registration of the containerd container factory successfully Jan 21 01:01:15.733165 kubelet[2911]: I0121 01:01:15.733160 2911 factory.go:221] Registration of the systemd container factory successfully Jan 21 01:01:15.733414 kubelet[2911]: I0121 01:01:15.733327 2911 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or 
directory Jan 21 01:01:15.732000 audit[2938]: NETFILTER_CFG table=filter:53 family=10 entries=1 op=nft_register_chain pid=2938 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 01:01:15.732000 audit[2938]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fffddf64160 a2=0 a3=0 items=0 ppid=2911 pid=2938 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:01:15.732000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 21 01:01:15.737110 kubelet[2911]: W0121 01:01:15.737052 2911 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.28.215:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.28.215:6443: connect: connection refused Jan 21 01:01:15.737292 kubelet[2911]: E0121 01:01:15.737129 2911 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.28.215:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.28.215:6443: connect: connection refused" logger="UnhandledError" Jan 21 01:01:15.764543 kubelet[2911]: I0121 01:01:15.764507 2911 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 21 01:01:15.764543 kubelet[2911]: I0121 01:01:15.764532 2911 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 21 01:01:15.764543 kubelet[2911]: I0121 01:01:15.764554 2911 state_mem.go:36] "Initialized new in-memory state store" Jan 21 01:01:15.767369 kubelet[2911]: I0121 01:01:15.767301 2911 policy_none.go:49] "None policy: Start" Jan 21 01:01:15.767369 kubelet[2911]: I0121 01:01:15.767328 2911 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 21 01:01:15.767369 kubelet[2911]: I0121 01:01:15.767340 2911 state_mem.go:35] "Initializing new in-memory state store" Jan 21 01:01:15.781777 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 21 01:01:15.794177 kubelet[2911]: E0121 01:01:15.794146 2911 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-28-215\" not found" Jan 21 01:01:15.795069 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 21 01:01:15.799376 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 21 01:01:15.810356 kubelet[2911]: I0121 01:01:15.810203 2911 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 21 01:01:15.810456 kubelet[2911]: I0121 01:01:15.810423 2911 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 21 01:01:15.810456 kubelet[2911]: I0121 01:01:15.810434 2911 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 21 01:01:15.811104 kubelet[2911]: I0121 01:01:15.811004 2911 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 21 01:01:15.814358 kubelet[2911]: E0121 01:01:15.814336 2911 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 21 01:01:15.814427 kubelet[2911]: E0121 01:01:15.814389 2911 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-28-215\" not found" Jan 21 01:01:15.839487 systemd[1]: Created slice kubepods-burstable-podfd8bef70e3a96f325c0739478590c942.slice - libcontainer container kubepods-burstable-podfd8bef70e3a96f325c0739478590c942.slice. Jan 21 01:01:15.848278 kubelet[2911]: E0121 01:01:15.848190 2911 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-215\" not found" node="ip-172-31-28-215" Jan 21 01:01:15.851603 systemd[1]: Created slice kubepods-burstable-pod70460e35dfd0a02b96af697e16499982.slice - libcontainer container kubepods-burstable-pod70460e35dfd0a02b96af697e16499982.slice. Jan 21 01:01:15.862708 kubelet[2911]: E0121 01:01:15.862538 2911 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-215\" not found" node="ip-172-31-28-215" Jan 21 01:01:15.864657 systemd[1]: Created slice kubepods-burstable-podfc5ca329622273519eeb84c00162a7e6.slice - libcontainer container kubepods-burstable-podfc5ca329622273519eeb84c00162a7e6.slice. Jan 21 01:01:15.866521 kubelet[2911]: E0121 01:01:15.866497 2911 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-215\" not found" node="ip-172-31-28-215" Jan 21 01:01:15.898534 kubelet[2911]: I0121 01:01:15.898468 2911 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fd8bef70e3a96f325c0739478590c942-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-28-215\" (UID: \"fd8bef70e3a96f325c0739478590c942\") " pod="kube-system/kube-controller-manager-ip-172-31-28-215" Jan 21 01:01:15.898534 kubelet[2911]: I0121 01:01:15.898517 2911 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fd8bef70e3a96f325c0739478590c942-k8s-certs\") pod \"kube-controller-manager-ip-172-31-28-215\" (UID: \"fd8bef70e3a96f325c0739478590c942\") " pod="kube-system/kube-controller-manager-ip-172-31-28-215" Jan 21 01:01:15.898842 kubelet[2911]: I0121 01:01:15.898546 2911 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fd8bef70e3a96f325c0739478590c942-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-28-215\" (UID: \"fd8bef70e3a96f325c0739478590c942\") " pod="kube-system/kube-controller-manager-ip-172-31-28-215" Jan 21 01:01:15.898842 kubelet[2911]: I0121 01:01:15.898569 2911 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fc5ca329622273519eeb84c00162a7e6-ca-certs\") pod \"kube-apiserver-ip-172-31-28-215\" (UID: \"fc5ca329622273519eeb84c00162a7e6\") " pod="kube-system/kube-apiserver-ip-172-31-28-215" Jan 21 01:01:15.898842 kubelet[2911]: I0121 01:01:15.898592 2911 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fc5ca329622273519eeb84c00162a7e6-k8s-certs\") pod \"kube-apiserver-ip-172-31-28-215\" (UID: \"fc5ca329622273519eeb84c00162a7e6\") " 
pod="kube-system/kube-apiserver-ip-172-31-28-215" Jan 21 01:01:15.898842 kubelet[2911]: I0121 01:01:15.898614 2911 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fd8bef70e3a96f325c0739478590c942-ca-certs\") pod \"kube-controller-manager-ip-172-31-28-215\" (UID: \"fd8bef70e3a96f325c0739478590c942\") " pod="kube-system/kube-controller-manager-ip-172-31-28-215" Jan 21 01:01:15.898842 kubelet[2911]: I0121 01:01:15.898636 2911 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fd8bef70e3a96f325c0739478590c942-kubeconfig\") pod \"kube-controller-manager-ip-172-31-28-215\" (UID: \"fd8bef70e3a96f325c0739478590c942\") " pod="kube-system/kube-controller-manager-ip-172-31-28-215" Jan 21 01:01:15.898980 kubelet[2911]: I0121 01:01:15.898660 2911 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/70460e35dfd0a02b96af697e16499982-kubeconfig\") pod \"kube-scheduler-ip-172-31-28-215\" (UID: \"70460e35dfd0a02b96af697e16499982\") " pod="kube-system/kube-scheduler-ip-172-31-28-215" Jan 21 01:01:15.898980 kubelet[2911]: I0121 01:01:15.898685 2911 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fc5ca329622273519eeb84c00162a7e6-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-28-215\" (UID: \"fc5ca329622273519eeb84c00162a7e6\") " pod="kube-system/kube-apiserver-ip-172-31-28-215" Jan 21 01:01:15.912817 kubelet[2911]: I0121 01:01:15.912788 2911 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-28-215" Jan 21 01:01:15.913138 kubelet[2911]: E0121 01:01:15.913114 2911 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.28.215:6443/api/v1/nodes\": dial tcp 172.31.28.215:6443: connect: connection refused" node="ip-172-31-28-215" Jan 21 01:01:15.916590 kubelet[2911]: E0121 01:01:15.916553 2911 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.28.215:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-28-215?timeout=10s\": dial tcp 172.31.28.215:6443: connect: connection refused" interval="400ms" Jan 21 01:01:16.021781 kubelet[2911]: E0121 01:01:16.021676 2911 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.28.215:6443/api/v1/namespaces/default/events\": dial tcp 172.31.28.215:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-28-215.188c994516b1a115 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-28-215,UID:ip-172-31-28-215,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-28-215,},FirstTimestamp:2026-01-21 01:01:15.662295317 +0000 UTC m=+0.805615819,LastTimestamp:2026-01-21 01:01:15.662295317 +0000 UTC m=+0.805615819,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-28-215,}" Jan 21 01:01:16.115704 kubelet[2911]: I0121 01:01:16.115352 2911 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-28-215" Jan 21 01:01:16.115804 
kubelet[2911]: E0121 01:01:16.115732 2911 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.28.215:6443/api/v1/nodes\": dial tcp 172.31.28.215:6443: connect: connection refused" node="ip-172-31-28-215" Jan 21 01:01:16.150464 containerd[1962]: time="2026-01-21T01:01:16.150419408Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-28-215,Uid:fd8bef70e3a96f325c0739478590c942,Namespace:kube-system,Attempt:0,}" Jan 21 01:01:16.163863 containerd[1962]: time="2026-01-21T01:01:16.163818166Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-28-215,Uid:70460e35dfd0a02b96af697e16499982,Namespace:kube-system,Attempt:0,}" Jan 21 01:01:16.167957 containerd[1962]: time="2026-01-21T01:01:16.167834894Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-28-215,Uid:fc5ca329622273519eeb84c00162a7e6,Namespace:kube-system,Attempt:0,}" Jan 21 01:01:16.276755 containerd[1962]: time="2026-01-21T01:01:16.276692296Z" level=info msg="connecting to shim 4b9b8d425f1b0456c5d2f163d77c50aaa5492fc13af358271d23f039583f0e0c" address="unix:///run/containerd/s/39cd39532a5345f5f687e58c3f9bde71bce131639639ebc20ef4d38b4e7d6e4d" namespace=k8s.io protocol=ttrpc version=3 Jan 21 01:01:16.277541 containerd[1962]: time="2026-01-21T01:01:16.277449125Z" level=info msg="connecting to shim 2ec4ab54cd73b460bd374332d83a253f396d80dd9953dac37589ed25ddd0261a" address="unix:///run/containerd/s/4acf3389b691611a0e1a854b0caaa831f9be5fdd473e663bdbc874bb9b79eb9d" namespace=k8s.io protocol=ttrpc version=3 Jan 21 01:01:16.287737 containerd[1962]: time="2026-01-21T01:01:16.287667744Z" level=info msg="connecting to shim b9b6c94412847c06a2e6c9639da9efa42f1580051d23324414e9f19b414de701" address="unix:///run/containerd/s/e9c98797218a5c402829dceb88208731e7a0c520f881c3d6caaca3f1ee4524b9" namespace=k8s.io protocol=ttrpc version=3 Jan 21 01:01:16.317966 kubelet[2911]: E0121 01:01:16.317914 2911 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.28.215:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-28-215?timeout=10s\": dial tcp 172.31.28.215:6443: connect: connection refused" interval="800ms" Jan 21 01:01:16.391462 systemd[1]: Started cri-containerd-b9b6c94412847c06a2e6c9639da9efa42f1580051d23324414e9f19b414de701.scope - libcontainer container b9b6c94412847c06a2e6c9639da9efa42f1580051d23324414e9f19b414de701. Jan 21 01:01:16.397997 systemd[1]: Started cri-containerd-2ec4ab54cd73b460bd374332d83a253f396d80dd9953dac37589ed25ddd0261a.scope - libcontainer container 2ec4ab54cd73b460bd374332d83a253f396d80dd9953dac37589ed25ddd0261a. Jan 21 01:01:16.400131 systemd[1]: Started cri-containerd-4b9b8d425f1b0456c5d2f163d77c50aaa5492fc13af358271d23f039583f0e0c.scope - libcontainer container 4b9b8d425f1b0456c5d2f163d77c50aaa5492fc13af358271d23f039583f0e0c. 
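The repeated "dial tcp 172.31.28.215:6443: connect: connection refused" failures above persist only until the kube-apiserver static pod being sandboxed here starts serving. A throwaway Python probe for that endpoint (a sketch; the host and port are taken from the log, nothing else is assumed):

    import socket

    # Returns True once something is listening on the endpoint the kubelet keeps dialing.
    def apiserver_reachable(host: str = "172.31.28.215", port: int = 6443, timeout: float = 2.0) -> bool:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    print(apiserver_reachable())  # False while the connection-refused errors above are being logged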
Jan 21 01:01:16.428000 audit: BPF prog-id=90 op=LOAD Jan 21 01:01:16.429000 audit: BPF prog-id=91 op=LOAD Jan 21 01:01:16.429000 audit[2998]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00019e238 a2=98 a3=0 items=0 ppid=2967 pid=2998 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:01:16.429000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3462396238643432356631623034353663356432663136336437376335 Jan 21 01:01:16.429000 audit: BPF prog-id=91 op=UNLOAD Jan 21 01:01:16.429000 audit[2998]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2967 pid=2998 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:01:16.429000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3462396238643432356631623034353663356432663136336437376335 Jan 21 01:01:16.431000 audit: BPF prog-id=92 op=LOAD Jan 21 01:01:16.432000 audit: BPF prog-id=93 op=LOAD Jan 21 01:01:16.432000 audit[2995]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2966 pid=2995 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:01:16.432000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3265633461623534636437336234363062643337343333326438336132 Jan 21 01:01:16.432000 audit: BPF prog-id=93 op=UNLOAD Jan 21 01:01:16.432000 audit[2995]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2966 pid=2995 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:01:16.432000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3265633461623534636437336234363062643337343333326438336132 Jan 21 01:01:16.434000 audit: BPF prog-id=94 op=LOAD Jan 21 01:01:16.434000 audit[2995]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2966 pid=2995 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:01:16.434000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3265633461623534636437336234363062643337343333326438336132 Jan 21 01:01:16.435000 audit: BPF prog-id=95 op=LOAD 
Jan 21 01:01:16.435000 audit[2995]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=2966 pid=2995 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:01:16.435000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3265633461623534636437336234363062643337343333326438336132 Jan 21 01:01:16.435000 audit: BPF prog-id=95 op=UNLOAD Jan 21 01:01:16.435000 audit[2995]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2966 pid=2995 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:01:16.435000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3265633461623534636437336234363062643337343333326438336132 Jan 21 01:01:16.435000 audit: BPF prog-id=94 op=UNLOAD Jan 21 01:01:16.435000 audit[2995]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2966 pid=2995 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:01:16.435000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3265633461623534636437336234363062643337343333326438336132 Jan 21 01:01:16.435000 audit: BPF prog-id=96 op=LOAD Jan 21 01:01:16.435000 audit[2998]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00019e488 a2=98 a3=0 items=0 ppid=2967 pid=2998 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:01:16.435000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3462396238643432356631623034353663356432663136336437376335 Jan 21 01:01:16.435000 audit: BPF prog-id=97 op=LOAD Jan 21 01:01:16.435000 audit[2998]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00019e218 a2=98 a3=0 items=0 ppid=2967 pid=2998 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:01:16.435000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3462396238643432356631623034353663356432663136336437376335 Jan 21 01:01:16.435000 audit: BPF prog-id=97 op=UNLOAD Jan 21 01:01:16.435000 audit[2998]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2967 pid=2998 auid=4294967295 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:01:16.435000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3462396238643432356631623034353663356432663136336437376335 Jan 21 01:01:16.435000 audit: BPF prog-id=98 op=LOAD Jan 21 01:01:16.435000 audit: BPF prog-id=96 op=UNLOAD Jan 21 01:01:16.435000 audit[2998]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2967 pid=2998 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:01:16.435000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3462396238643432356631623034353663356432663136336437376335 Jan 21 01:01:16.435000 audit[2995]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=2966 pid=2995 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:01:16.435000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3265633461623534636437336234363062643337343333326438336132 Jan 21 01:01:16.436000 audit: BPF prog-id=99 op=LOAD Jan 21 01:01:16.436000 audit[2998]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00019e6e8 a2=98 a3=0 items=0 ppid=2967 pid=2998 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:01:16.436000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3462396238643432356631623034353663356432663136336437376335 Jan 21 01:01:16.445000 audit: BPF prog-id=100 op=LOAD Jan 21 01:01:16.445000 audit: BPF prog-id=101 op=LOAD Jan 21 01:01:16.445000 audit[3003]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000128238 a2=98 a3=0 items=0 ppid=2983 pid=3003 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:01:16.445000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6239623663393434313238343763303661326536633936333964613965 Jan 21 01:01:16.445000 audit: BPF prog-id=101 op=UNLOAD Jan 21 01:01:16.445000 audit[3003]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2983 pid=3003 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" 
exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:01:16.445000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6239623663393434313238343763303661326536633936333964613965 Jan 21 01:01:16.446000 audit: BPF prog-id=102 op=LOAD Jan 21 01:01:16.446000 audit[3003]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000128488 a2=98 a3=0 items=0 ppid=2983 pid=3003 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:01:16.446000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6239623663393434313238343763303661326536633936333964613965 Jan 21 01:01:16.446000 audit: BPF prog-id=103 op=LOAD Jan 21 01:01:16.446000 audit[3003]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000128218 a2=98 a3=0 items=0 ppid=2983 pid=3003 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:01:16.446000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6239623663393434313238343763303661326536633936333964613965 Jan 21 01:01:16.446000 audit: BPF prog-id=103 op=UNLOAD Jan 21 01:01:16.446000 audit[3003]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2983 pid=3003 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:01:16.446000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6239623663393434313238343763303661326536633936333964613965 Jan 21 01:01:16.446000 audit: BPF prog-id=102 op=UNLOAD Jan 21 01:01:16.446000 audit[3003]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2983 pid=3003 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:01:16.446000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6239623663393434313238343763303661326536633936333964613965 Jan 21 01:01:16.446000 audit: BPF prog-id=104 op=LOAD Jan 21 01:01:16.446000 audit[3003]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001286e8 a2=98 a3=0 items=0 ppid=2983 pid=3003 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:01:16.446000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6239623663393434313238343763303661326536633936333964613965 Jan 21 01:01:16.522836 kubelet[2911]: I0121 01:01:16.522805 2911 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-28-215" Jan 21 01:01:16.523153 kubelet[2911]: E0121 01:01:16.523123 2911 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.28.215:6443/api/v1/nodes\": dial tcp 172.31.28.215:6443: connect: connection refused" node="ip-172-31-28-215" Jan 21 01:01:16.523696 kubelet[2911]: W0121 01:01:16.523638 2911 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.28.215:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-28-215&limit=500&resourceVersion=0": dial tcp 172.31.28.215:6443: connect: connection refused Jan 21 01:01:16.523791 kubelet[2911]: E0121 01:01:16.523709 2911 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.28.215:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-28-215&limit=500&resourceVersion=0\": dial tcp 172.31.28.215:6443: connect: connection refused" logger="UnhandledError" Jan 21 01:01:16.527465 containerd[1962]: time="2026-01-21T01:01:16.527398330Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-28-215,Uid:fc5ca329622273519eeb84c00162a7e6,Namespace:kube-system,Attempt:0,} returns sandbox id \"2ec4ab54cd73b460bd374332d83a253f396d80dd9953dac37589ed25ddd0261a\"" Jan 21 01:01:16.528033 containerd[1962]: time="2026-01-21T01:01:16.527996157Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-28-215,Uid:fd8bef70e3a96f325c0739478590c942,Namespace:kube-system,Attempt:0,} returns sandbox id \"4b9b8d425f1b0456c5d2f163d77c50aaa5492fc13af358271d23f039583f0e0c\"" Jan 21 01:01:16.533054 containerd[1962]: time="2026-01-21T01:01:16.533008187Z" level=info msg="CreateContainer within sandbox \"2ec4ab54cd73b460bd374332d83a253f396d80dd9953dac37589ed25ddd0261a\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 21 01:01:16.537789 containerd[1962]: time="2026-01-21T01:01:16.537723196Z" level=info msg="CreateContainer within sandbox \"4b9b8d425f1b0456c5d2f163d77c50aaa5492fc13af358271d23f039583f0e0c\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 21 01:01:16.558069 containerd[1962]: time="2026-01-21T01:01:16.558016277Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-28-215,Uid:70460e35dfd0a02b96af697e16499982,Namespace:kube-system,Attempt:0,} returns sandbox id \"b9b6c94412847c06a2e6c9639da9efa42f1580051d23324414e9f19b414de701\"" Jan 21 01:01:16.561735 containerd[1962]: time="2026-01-21T01:01:16.561671676Z" level=info msg="CreateContainer within sandbox \"b9b6c94412847c06a2e6c9639da9efa42f1580051d23324414e9f19b414de701\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 21 01:01:16.596246 containerd[1962]: time="2026-01-21T01:01:16.595952610Z" level=info msg="Container 522e7555334ca6e98b89d3e3005986365587286a99d0f29f88211ebf06030d8a: CDI devices from CRI Config.CDIDevices: []" Jan 21 01:01:16.596246 containerd[1962]: time="2026-01-21T01:01:16.596098934Z" level=info msg="Container 
c96d7165529460d582e1253cf276e4dd287ff988c317719b40d0a3ef44b9e42b: CDI devices from CRI Config.CDIDevices: []" Jan 21 01:01:16.596998 containerd[1962]: time="2026-01-21T01:01:16.596975228Z" level=info msg="Container 774a4cf037534b99988d7faa8a7762e92f3d88f36d5d5c012b7f8b498838f9dd: CDI devices from CRI Config.CDIDevices: []" Jan 21 01:01:16.610252 containerd[1962]: time="2026-01-21T01:01:16.610192927Z" level=info msg="CreateContainer within sandbox \"2ec4ab54cd73b460bd374332d83a253f396d80dd9953dac37589ed25ddd0261a\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"c96d7165529460d582e1253cf276e4dd287ff988c317719b40d0a3ef44b9e42b\"" Jan 21 01:01:16.610975 containerd[1962]: time="2026-01-21T01:01:16.610898462Z" level=info msg="CreateContainer within sandbox \"b9b6c94412847c06a2e6c9639da9efa42f1580051d23324414e9f19b414de701\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"522e7555334ca6e98b89d3e3005986365587286a99d0f29f88211ebf06030d8a\"" Jan 21 01:01:16.613127 containerd[1962]: time="2026-01-21T01:01:16.611498663Z" level=info msg="StartContainer for \"c96d7165529460d582e1253cf276e4dd287ff988c317719b40d0a3ef44b9e42b\"" Jan 21 01:01:16.613127 containerd[1962]: time="2026-01-21T01:01:16.612083747Z" level=info msg="CreateContainer within sandbox \"4b9b8d425f1b0456c5d2f163d77c50aaa5492fc13af358271d23f039583f0e0c\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"774a4cf037534b99988d7faa8a7762e92f3d88f36d5d5c012b7f8b498838f9dd\"" Jan 21 01:01:16.613127 containerd[1962]: time="2026-01-21T01:01:16.612198794Z" level=info msg="StartContainer for \"522e7555334ca6e98b89d3e3005986365587286a99d0f29f88211ebf06030d8a\"" Jan 21 01:01:16.613127 containerd[1962]: time="2026-01-21T01:01:16.612603464Z" level=info msg="connecting to shim c96d7165529460d582e1253cf276e4dd287ff988c317719b40d0a3ef44b9e42b" address="unix:///run/containerd/s/4acf3389b691611a0e1a854b0caaa831f9be5fdd473e663bdbc874bb9b79eb9d" protocol=ttrpc version=3 Jan 21 01:01:16.613405 containerd[1962]: time="2026-01-21T01:01:16.613377850Z" level=info msg="connecting to shim 522e7555334ca6e98b89d3e3005986365587286a99d0f29f88211ebf06030d8a" address="unix:///run/containerd/s/e9c98797218a5c402829dceb88208731e7a0c520f881c3d6caaca3f1ee4524b9" protocol=ttrpc version=3 Jan 21 01:01:16.615434 containerd[1962]: time="2026-01-21T01:01:16.615407983Z" level=info msg="StartContainer for \"774a4cf037534b99988d7faa8a7762e92f3d88f36d5d5c012b7f8b498838f9dd\"" Jan 21 01:01:16.616416 containerd[1962]: time="2026-01-21T01:01:16.616392182Z" level=info msg="connecting to shim 774a4cf037534b99988d7faa8a7762e92f3d88f36d5d5c012b7f8b498838f9dd" address="unix:///run/containerd/s/39cd39532a5345f5f687e58c3f9bde71bce131639639ebc20ef4d38b4e7d6e4d" protocol=ttrpc version=3 Jan 21 01:01:16.634404 systemd[1]: Started cri-containerd-c96d7165529460d582e1253cf276e4dd287ff988c317719b40d0a3ef44b9e42b.scope - libcontainer container c96d7165529460d582e1253cf276e4dd287ff988c317719b40d0a3ef44b9e42b. Jan 21 01:01:16.644458 systemd[1]: Started cri-containerd-522e7555334ca6e98b89d3e3005986365587286a99d0f29f88211ebf06030d8a.scope - libcontainer container 522e7555334ca6e98b89d3e3005986365587286a99d0f29f88211ebf06030d8a. Jan 21 01:01:16.655455 systemd[1]: Started cri-containerd-774a4cf037534b99988d7faa8a7762e92f3d88f36d5d5c012b7f8b498838f9dd.scope - libcontainer container 774a4cf037534b99988d7faa8a7762e92f3d88f36d5d5c012b7f8b498838f9dd. 
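containerd reports one 64-hex-character ID per pod sandbox and per container in the entries above. A small sketch that pulls those IDs back out of a saved copy of this journal excerpt (the file name journal.txt is hypothetical):

    import re

    # journal.txt is a hypothetical saved copy of the excerpt above.
    text = open("journal.txt", encoding="utf-8").read()
    sandbox_ids = re.findall(r'returns sandbox id \\?"([0-9a-f]{64})', text)
    container_ids = re.findall(r'returns container id \\?"([0-9a-f]{64})', text)
    print("sandboxes:", sandbox_ids)
    print("containers:", container_ids)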
Jan 21 01:01:16.657000 audit: BPF prog-id=105 op=LOAD Jan 21 01:01:16.658000 audit: BPF prog-id=106 op=LOAD Jan 21 01:01:16.658000 audit[3081]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=2966 pid=3081 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:01:16.658000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339366437313635353239343630643538326531323533636632373665 Jan 21 01:01:16.658000 audit: BPF prog-id=106 op=UNLOAD Jan 21 01:01:16.658000 audit[3081]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2966 pid=3081 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:01:16.658000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339366437313635353239343630643538326531323533636632373665 Jan 21 01:01:16.659000 audit: BPF prog-id=107 op=LOAD Jan 21 01:01:16.659000 audit[3081]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=2966 pid=3081 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:01:16.659000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339366437313635353239343630643538326531323533636632373665 Jan 21 01:01:16.660000 audit: BPF prog-id=108 op=LOAD Jan 21 01:01:16.660000 audit[3081]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=2966 pid=3081 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:01:16.660000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339366437313635353239343630643538326531323533636632373665 Jan 21 01:01:16.661000 audit: BPF prog-id=108 op=UNLOAD Jan 21 01:01:16.661000 audit[3081]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2966 pid=3081 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:01:16.661000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339366437313635353239343630643538326531323533636632373665 Jan 21 01:01:16.661000 audit: BPF prog-id=107 op=UNLOAD Jan 21 01:01:16.661000 audit[3081]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2966 pid=3081 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:01:16.661000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339366437313635353239343630643538326531323533636632373665 Jan 21 01:01:16.661000 audit: BPF prog-id=109 op=LOAD Jan 21 01:01:16.661000 audit[3081]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=2966 pid=3081 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:01:16.661000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339366437313635353239343630643538326531323533636632373665 Jan 21 01:01:16.665000 audit: BPF prog-id=110 op=LOAD Jan 21 01:01:16.666000 audit: BPF prog-id=111 op=LOAD Jan 21 01:01:16.666000 audit[3082]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2983 pid=3082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:01:16.666000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532326537353535333334636136653938623839643365333030353938 Jan 21 01:01:16.666000 audit: BPF prog-id=111 op=UNLOAD Jan 21 01:01:16.666000 audit[3082]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2983 pid=3082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:01:16.666000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532326537353535333334636136653938623839643365333030353938 Jan 21 01:01:16.667000 audit: BPF prog-id=112 op=LOAD Jan 21 01:01:16.667000 audit[3082]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2983 pid=3082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:01:16.667000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532326537353535333334636136653938623839643365333030353938 Jan 21 01:01:16.667000 audit: BPF prog-id=113 op=LOAD Jan 21 01:01:16.667000 audit[3082]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=2983 pid=3082 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:01:16.667000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532326537353535333334636136653938623839643365333030353938 Jan 21 01:01:16.667000 audit: BPF prog-id=113 op=UNLOAD Jan 21 01:01:16.667000 audit[3082]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2983 pid=3082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:01:16.667000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532326537353535333334636136653938623839643365333030353938 Jan 21 01:01:16.667000 audit: BPF prog-id=112 op=UNLOAD Jan 21 01:01:16.667000 audit[3082]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2983 pid=3082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:01:16.667000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532326537353535333334636136653938623839643365333030353938 Jan 21 01:01:16.667000 audit: BPF prog-id=114 op=LOAD Jan 21 01:01:16.667000 audit[3082]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=2983 pid=3082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:01:16.667000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532326537353535333334636136653938623839643365333030353938 Jan 21 01:01:16.680000 audit: BPF prog-id=115 op=LOAD Jan 21 01:01:16.681000 audit: BPF prog-id=116 op=LOAD Jan 21 01:01:16.681000 audit[3083]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00018c238 a2=98 a3=0 items=0 ppid=2967 pid=3083 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:01:16.681000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3737346134636630333735333462393939383864376661613861373736 Jan 21 01:01:16.681000 audit: BPF prog-id=116 op=UNLOAD Jan 21 01:01:16.681000 audit[3083]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2967 pid=3083 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" 
exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:01:16.681000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3737346134636630333735333462393939383864376661613861373736 Jan 21 01:01:16.682000 audit: BPF prog-id=117 op=LOAD Jan 21 01:01:16.682000 audit[3083]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00018c488 a2=98 a3=0 items=0 ppid=2967 pid=3083 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:01:16.682000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3737346134636630333735333462393939383864376661613861373736 Jan 21 01:01:16.682000 audit: BPF prog-id=118 op=LOAD Jan 21 01:01:16.682000 audit[3083]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00018c218 a2=98 a3=0 items=0 ppid=2967 pid=3083 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:01:16.682000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3737346134636630333735333462393939383864376661613861373736 Jan 21 01:01:16.682000 audit: BPF prog-id=118 op=UNLOAD Jan 21 01:01:16.682000 audit[3083]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2967 pid=3083 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:01:16.682000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3737346134636630333735333462393939383864376661613861373736 Jan 21 01:01:16.682000 audit: BPF prog-id=117 op=UNLOAD Jan 21 01:01:16.682000 audit[3083]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2967 pid=3083 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:01:16.682000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3737346134636630333735333462393939383864376661613861373736 Jan 21 01:01:16.682000 audit: BPF prog-id=119 op=LOAD Jan 21 01:01:16.682000 audit[3083]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00018c6e8 a2=98 a3=0 items=0 ppid=2967 pid=3083 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:01:16.682000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3737346134636630333735333462393939383864376661613861373736 Jan 21 01:01:16.768603 containerd[1962]: time="2026-01-21T01:01:16.768462191Z" level=info msg="StartContainer for \"c96d7165529460d582e1253cf276e4dd287ff988c317719b40d0a3ef44b9e42b\" returns successfully" Jan 21 01:01:16.771199 containerd[1962]: time="2026-01-21T01:01:16.771160536Z" level=info msg="StartContainer for \"774a4cf037534b99988d7faa8a7762e92f3d88f36d5d5c012b7f8b498838f9dd\" returns successfully" Jan 21 01:01:16.781777 kubelet[2911]: E0121 01:01:16.781748 2911 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-215\" not found" node="ip-172-31-28-215" Jan 21 01:01:16.785434 containerd[1962]: time="2026-01-21T01:01:16.784906416Z" level=info msg="StartContainer for \"522e7555334ca6e98b89d3e3005986365587286a99d0f29f88211ebf06030d8a\" returns successfully" Jan 21 01:01:16.788307 kubelet[2911]: E0121 01:01:16.788043 2911 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-215\" not found" node="ip-172-31-28-215" Jan 21 01:01:16.897971 kubelet[2911]: W0121 01:01:16.897002 2911 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.28.215:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.28.215:6443: connect: connection refused Jan 21 01:01:16.898228 kubelet[2911]: E0121 01:01:16.898171 2911 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.28.215:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.28.215:6443: connect: connection refused" logger="UnhandledError" Jan 21 01:01:16.908246 kubelet[2911]: W0121 01:01:16.908070 2911 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.28.215:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.28.215:6443: connect: connection refused Jan 21 01:01:16.908513 kubelet[2911]: E0121 01:01:16.908472 2911 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.28.215:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.28.215:6443: connect: connection refused" logger="UnhandledError" Jan 21 01:01:17.119230 kubelet[2911]: E0121 01:01:17.119160 2911 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.28.215:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-28-215?timeout=10s\": dial tcp 172.31.28.215:6443: connect: connection refused" interval="1.6s" Jan 21 01:01:17.204489 kubelet[2911]: W0121 01:01:17.204300 2911 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.28.215:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.28.215:6443: connect: connection refused Jan 21 01:01:17.204489 kubelet[2911]: E0121 01:01:17.204389 2911 reflector.go:166] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.28.215:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.28.215:6443: connect: connection refused" logger="UnhandledError" Jan 21 01:01:17.328263 kubelet[2911]: I0121 01:01:17.326022 2911 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-28-215" Jan 21 01:01:17.328794 kubelet[2911]: E0121 01:01:17.328763 2911 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.28.215:6443/api/v1/nodes\": dial tcp 172.31.28.215:6443: connect: connection refused" node="ip-172-31-28-215" Jan 21 01:01:17.749240 kubelet[2911]: E0121 01:01:17.748722 2911 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.28.215:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.28.215:6443: connect: connection refused" logger="UnhandledError" Jan 21 01:01:17.794497 kubelet[2911]: E0121 01:01:17.794463 2911 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-215\" not found" node="ip-172-31-28-215" Jan 21 01:01:17.794952 kubelet[2911]: E0121 01:01:17.794926 2911 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-215\" not found" node="ip-172-31-28-215" Jan 21 01:01:18.454232 kubelet[2911]: W0121 01:01:18.453519 2911 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.28.215:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-28-215&limit=500&resourceVersion=0": dial tcp 172.31.28.215:6443: connect: connection refused Jan 21 01:01:18.454232 kubelet[2911]: E0121 01:01:18.453577 2911 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.28.215:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-28-215&limit=500&resourceVersion=0\": dial tcp 172.31.28.215:6443: connect: connection refused" logger="UnhandledError" Jan 21 01:01:18.558804 kubelet[2911]: W0121 01:01:18.558760 2911 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.28.215:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.28.215:6443: connect: connection refused Jan 21 01:01:18.558959 kubelet[2911]: E0121 01:01:18.558816 2911 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.28.215:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.28.215:6443: connect: connection refused" logger="UnhandledError" Jan 21 01:01:18.720311 kubelet[2911]: E0121 01:01:18.720179 2911 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.28.215:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-28-215?timeout=10s\": dial tcp 172.31.28.215:6443: connect: connection refused" interval="3.2s" Jan 21 01:01:18.796231 kubelet[2911]: E0121 01:01:18.795912 2911 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" 
err="node \"ip-172-31-28-215\" not found" node="ip-172-31-28-215" Jan 21 01:01:18.798431 kubelet[2911]: E0121 01:01:18.798400 2911 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-215\" not found" node="ip-172-31-28-215" Jan 21 01:01:18.830390 kubelet[2911]: W0121 01:01:18.830322 2911 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.28.215:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.28.215:6443: connect: connection refused Jan 21 01:01:18.830551 kubelet[2911]: E0121 01:01:18.830403 2911 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.28.215:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.28.215:6443: connect: connection refused" logger="UnhandledError" Jan 21 01:01:18.931235 kubelet[2911]: I0121 01:01:18.930912 2911 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-28-215" Jan 21 01:01:18.931388 kubelet[2911]: E0121 01:01:18.931298 2911 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.28.215:6443/api/v1/nodes\": dial tcp 172.31.28.215:6443: connect: connection refused" node="ip-172-31-28-215" Jan 21 01:01:19.313220 kubelet[2911]: E0121 01:01:19.313176 2911 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-215\" not found" node="ip-172-31-28-215" Jan 21 01:01:19.795814 kubelet[2911]: E0121 01:01:19.795770 2911 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-215\" not found" node="ip-172-31-28-215" Jan 21 01:01:22.133751 kubelet[2911]: I0121 01:01:22.133724 2911 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-28-215" Jan 21 01:01:22.185822 kubelet[2911]: E0121 01:01:22.185791 2911 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-28-215\" not found" node="ip-172-31-28-215" Jan 21 01:01:22.288620 kubelet[2911]: I0121 01:01:22.288574 2911 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-28-215" Jan 21 01:01:22.288756 kubelet[2911]: E0121 01:01:22.288631 2911 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ip-172-31-28-215\": node \"ip-172-31-28-215\" not found" Jan 21 01:01:22.313542 kubelet[2911]: E0121 01:01:22.313509 2911 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-28-215\" not found" Jan 21 01:01:22.414782 kubelet[2911]: E0121 01:01:22.414663 2911 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-28-215\" not found" Jan 21 01:01:22.515464 kubelet[2911]: E0121 01:01:22.515420 2911 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-28-215\" not found" Jan 21 01:01:22.594961 kubelet[2911]: I0121 01:01:22.594909 2911 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-28-215" Jan 21 01:01:22.601572 kubelet[2911]: E0121 01:01:22.601534 2911 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-28-215\" is forbidden: no PriorityClass 
with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-28-215" Jan 21 01:01:22.601572 kubelet[2911]: I0121 01:01:22.601567 2911 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-28-215" Jan 21 01:01:22.603193 kubelet[2911]: E0121 01:01:22.603152 2911 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-28-215\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-28-215" Jan 21 01:01:22.603193 kubelet[2911]: I0121 01:01:22.603180 2911 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-28-215" Jan 21 01:01:22.604946 kubelet[2911]: E0121 01:01:22.604909 2911 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-28-215\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-28-215" Jan 21 01:01:22.668922 kubelet[2911]: I0121 01:01:22.668708 2911 apiserver.go:52] "Watching apiserver" Jan 21 01:01:22.697794 kubelet[2911]: I0121 01:01:22.697756 2911 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 21 01:01:23.685576 kubelet[2911]: I0121 01:01:23.685538 2911 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-28-215" Jan 21 01:01:24.330046 systemd[1]: Reload requested from client PID 3177 ('systemctl') (unit session-8.scope)... Jan 21 01:01:24.330063 systemd[1]: Reloading... Jan 21 01:01:24.416243 zram_generator::config[3220]: No configuration found. Jan 21 01:01:24.713441 systemd[1]: Reloading finished in 382 ms. Jan 21 01:01:24.746063 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 21 01:01:24.758895 systemd[1]: kubelet.service: Deactivated successfully. Jan 21 01:01:24.758000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:01:24.759186 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 21 01:01:24.760274 kernel: kauditd_printk_skb: 202 callbacks suppressed Jan 21 01:01:24.760356 kernel: audit: type=1131 audit(1768957284.758:413): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:01:24.763917 systemd[1]: kubelet.service: Consumed 1.212s CPU time, 128.8M memory peak. Jan 21 01:01:24.771156 kernel: audit: type=1334 audit(1768957284.765:414): prog-id=120 op=LOAD Jan 21 01:01:24.771307 kernel: audit: type=1334 audit(1768957284.765:415): prog-id=76 op=UNLOAD Jan 21 01:01:24.765000 audit: BPF prog-id=120 op=LOAD Jan 21 01:01:24.765000 audit: BPF prog-id=76 op=UNLOAD Jan 21 01:01:24.766090 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
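Every failure above has the same root cause: until roughly 01:01:22 nothing is listening on https://172.31.28.215:6443, so node registration, the node lease, the kubelet's certificate signing request, and the informer list/watch calls all fail with "connection refused" and back off. The mirror-pod errors that follow the successful registration are a different, usually transient, bootstrap symptom: the built-in system-node-critical PriorityClass had not been created yet. A minimal sketch of the transport-level check the kubelet keeps failing, reusing the host and port from the log (illustrative only; it does not speak TLS or authenticate):

    import socket

    APISERVER = ("172.31.28.215", 6443)  # address and port taken from the log entries above

    def apiserver_reachable(addr, timeout=3.0):
        """Return True if a plain TCP connection to the apiserver endpoint succeeds.

        This reproduces only the transport-level failure seen above
        ("dial tcp ... connect: connection refused").
        """
        try:
            with socket.create_connection(addr, timeout=timeout):
                return True
        except OSError as exc:  # ConnectionRefusedError, timeouts, unreachable network
            print(f"unreachable: {exc}")
            return False

    if __name__ == "__main__":
        print("apiserver reachable:", apiserver_reachable(APISERVER))

Once the port starts accepting connections (here at about 01:01:22) registration succeeds, and the remaining mirror-pod failures clear once the PriorityClass exists.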
Jan 21 01:01:24.765000 audit: BPF prog-id=121 op=LOAD Jan 21 01:01:24.765000 audit: BPF prog-id=122 op=LOAD Jan 21 01:01:24.773760 kernel: audit: type=1334 audit(1768957284.765:416): prog-id=121 op=LOAD Jan 21 01:01:24.773826 kernel: audit: type=1334 audit(1768957284.765:417): prog-id=122 op=LOAD Jan 21 01:01:24.773848 kernel: audit: type=1334 audit(1768957284.765:418): prog-id=77 op=UNLOAD Jan 21 01:01:24.765000 audit: BPF prog-id=77 op=UNLOAD Jan 21 01:01:24.774846 kernel: audit: type=1334 audit(1768957284.765:419): prog-id=78 op=UNLOAD Jan 21 01:01:24.765000 audit: BPF prog-id=78 op=UNLOAD Jan 21 01:01:24.775977 kernel: audit: type=1334 audit(1768957284.765:420): prog-id=123 op=LOAD Jan 21 01:01:24.765000 audit: BPF prog-id=123 op=LOAD Jan 21 01:01:24.777006 kernel: audit: type=1334 audit(1768957284.765:421): prog-id=124 op=LOAD Jan 21 01:01:24.765000 audit: BPF prog-id=124 op=LOAD Jan 21 01:01:24.778065 kernel: audit: type=1334 audit(1768957284.765:422): prog-id=85 op=UNLOAD Jan 21 01:01:24.765000 audit: BPF prog-id=85 op=UNLOAD Jan 21 01:01:24.765000 audit: BPF prog-id=86 op=UNLOAD Jan 21 01:01:24.769000 audit: BPF prog-id=125 op=LOAD Jan 21 01:01:24.769000 audit: BPF prog-id=84 op=UNLOAD Jan 21 01:01:24.780000 audit: BPF prog-id=126 op=LOAD Jan 21 01:01:24.780000 audit: BPF prog-id=80 op=UNLOAD Jan 21 01:01:24.781000 audit: BPF prog-id=127 op=LOAD Jan 21 01:01:24.781000 audit: BPF prog-id=73 op=UNLOAD Jan 21 01:01:24.781000 audit: BPF prog-id=128 op=LOAD Jan 21 01:01:24.781000 audit: BPF prog-id=129 op=LOAD Jan 21 01:01:24.781000 audit: BPF prog-id=74 op=UNLOAD Jan 21 01:01:24.781000 audit: BPF prog-id=75 op=UNLOAD Jan 21 01:01:24.782000 audit: BPF prog-id=130 op=LOAD Jan 21 01:01:24.782000 audit: BPF prog-id=70 op=UNLOAD Jan 21 01:01:24.782000 audit: BPF prog-id=131 op=LOAD Jan 21 01:01:24.782000 audit: BPF prog-id=132 op=LOAD Jan 21 01:01:24.782000 audit: BPF prog-id=71 op=UNLOAD Jan 21 01:01:24.782000 audit: BPF prog-id=72 op=UNLOAD Jan 21 01:01:24.783000 audit: BPF prog-id=133 op=LOAD Jan 21 01:01:24.783000 audit: BPF prog-id=79 op=UNLOAD Jan 21 01:01:24.786000 audit: BPF prog-id=134 op=LOAD Jan 21 01:01:24.786000 audit: BPF prog-id=81 op=UNLOAD Jan 21 01:01:24.786000 audit: BPF prog-id=135 op=LOAD Jan 21 01:01:24.786000 audit: BPF prog-id=136 op=LOAD Jan 21 01:01:24.786000 audit: BPF prog-id=82 op=UNLOAD Jan 21 01:01:24.786000 audit: BPF prog-id=83 op=UNLOAD Jan 21 01:01:24.787000 audit: BPF prog-id=137 op=LOAD Jan 21 01:01:24.787000 audit: BPF prog-id=87 op=UNLOAD Jan 21 01:01:24.787000 audit: BPF prog-id=138 op=LOAD Jan 21 01:01:24.787000 audit: BPF prog-id=139 op=LOAD Jan 21 01:01:24.787000 audit: BPF prog-id=88 op=UNLOAD Jan 21 01:01:24.787000 audit: BPF prog-id=89 op=UNLOAD Jan 21 01:01:25.214372 update_engine[1939]: I20260121 01:01:25.213726 1939 update_attempter.cc:509] Updating boot flags... Jan 21 01:01:25.902003 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 21 01:01:25.901000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:01:25.915641 (kubelet)[3468]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 21 01:01:26.009261 kubelet[3468]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 21 01:01:26.009261 kubelet[3468]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 21 01:01:26.009261 kubelet[3468]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 21 01:01:26.009261 kubelet[3468]: I0121 01:01:26.008394 3468 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 21 01:01:26.021367 kubelet[3468]: I0121 01:01:26.021326 3468 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 21 01:01:26.021367 kubelet[3468]: I0121 01:01:26.021362 3468 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 21 01:01:26.021990 kubelet[3468]: I0121 01:01:26.021967 3468 server.go:954] "Client rotation is on, will bootstrap in background" Jan 21 01:01:26.025310 kubelet[3468]: I0121 01:01:26.025274 3468 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 21 01:01:26.038235 kubelet[3468]: I0121 01:01:26.037715 3468 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 21 01:01:26.044203 kubelet[3468]: I0121 01:01:26.044184 3468 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 21 01:01:26.047387 kubelet[3468]: I0121 01:01:26.047364 3468 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 21 01:01:26.050496 kubelet[3468]: I0121 01:01:26.050457 3468 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 21 01:01:26.050836 kubelet[3468]: I0121 01:01:26.050619 3468 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-28-215","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 21 01:01:26.051012 kubelet[3468]: I0121 01:01:26.051001 3468 topology_manager.go:138] "Creating topology manager with none policy" Jan 21 01:01:26.051084 kubelet[3468]: I0121 01:01:26.051076 3468 container_manager_linux.go:304] "Creating device plugin manager" Jan 21 01:01:26.054759 kubelet[3468]: I0121 01:01:26.054697 3468 state_mem.go:36] "Initialized new in-memory state store" Jan 21 01:01:26.055044 kubelet[3468]: I0121 01:01:26.054998 3468 kubelet.go:446] "Attempting to sync node with API server" Jan 21 01:01:26.058363 kubelet[3468]: I0121 01:01:26.058292 3468 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 21 01:01:26.058363 kubelet[3468]: I0121 01:01:26.058351 3468 kubelet.go:352] "Adding apiserver pod source" Jan 21 01:01:26.058748 kubelet[3468]: I0121 01:01:26.058724 3468 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 21 01:01:26.068234 kubelet[3468]: I0121 01:01:26.068106 3468 apiserver.go:52] "Watching apiserver" Jan 21 01:01:26.072620 kubelet[3468]: I0121 01:01:26.072559 3468 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 21 01:01:26.075005 kubelet[3468]: I0121 01:01:26.074981 3468 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 21 01:01:26.084055 kubelet[3468]: I0121 01:01:26.083691 3468 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 21 01:01:26.087671 kubelet[3468]: I0121 01:01:26.087365 3468 server.go:1287] "Started kubelet" Jan 21 01:01:26.093163 kubelet[3468]: I0121 01:01:26.092570 3468 ratelimit.go:55] "Setting rate limiting for 
endpoint" service="podresources" qps=100 burstTokens=10 Jan 21 01:01:26.093163 kubelet[3468]: I0121 01:01:26.092999 3468 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 21 01:01:26.093163 kubelet[3468]: I0121 01:01:26.093063 3468 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 21 01:01:26.096042 kubelet[3468]: I0121 01:01:26.096020 3468 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 21 01:01:26.106650 kubelet[3468]: I0121 01:01:26.106549 3468 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 21 01:01:26.109489 kubelet[3468]: I0121 01:01:26.109466 3468 server.go:479] "Adding debug handlers to kubelet server" Jan 21 01:01:26.116631 kubelet[3468]: I0121 01:01:26.111456 3468 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 21 01:01:26.122071 kubelet[3468]: I0121 01:01:26.111584 3468 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 21 01:01:26.122846 kubelet[3468]: I0121 01:01:26.122242 3468 reconciler.go:26] "Reconciler: start to sync state" Jan 21 01:01:26.124244 kubelet[3468]: I0121 01:01:26.124131 3468 factory.go:221] Registration of the systemd container factory successfully Jan 21 01:01:26.124805 kubelet[3468]: I0121 01:01:26.124774 3468 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 21 01:01:26.126671 kubelet[3468]: I0121 01:01:26.126631 3468 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 21 01:01:26.127537 kubelet[3468]: I0121 01:01:26.127128 3468 factory.go:221] Registration of the containerd container factory successfully Jan 21 01:01:26.129293 kubelet[3468]: I0121 01:01:26.128330 3468 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 21 01:01:26.129842 kubelet[3468]: I0121 01:01:26.129824 3468 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 21 01:01:26.129940 kubelet[3468]: I0121 01:01:26.129925 3468 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
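The NodeConfig dump above shows the hard eviction thresholds this kubelet will enforce (they match the upstream defaults): memory.available under 100Mi, nodefs.available under 10%, nodefs.inodesFree under 5%, imagefs.available under 15%, and imagefs.inodesFree under 5%. A rough sketch of evaluating the memory and nodefs signals on the node itself; this is simplified (the real eviction manager takes its stats from cAdvisor/cgroups) and the nodefs path is an assumption for illustration:

    import os

    # Hard eviction thresholds from the NodeConfig logged above.
    MEMORY_AVAILABLE_MIN = 100 * 1024 * 1024   # memory.available < 100Mi
    NODEFS_AVAILABLE_MIN = 0.10                # nodefs.available < 10%
    NODEFS_INODES_FREE_MIN = 0.05              # nodefs.inodesFree < 5%

    def mem_available_bytes(path="/proc/meminfo"):
        # MemAvailable roughly corresponds to the memory.available signal.
        with open(path) as fh:
            for line in fh:
                if line.startswith("MemAvailable:"):
                    return int(line.split()[1]) * 1024  # value is reported in kB
        raise RuntimeError("MemAvailable not found in /proc/meminfo")

    def nodefs_fractions(path="/var/lib/kubelet"):
        # Assumed nodefs mount point for illustration.
        st = os.statvfs(path)
        avail_frac = st.f_bavail / st.f_blocks if st.f_blocks else 0.0
        inode_frac = st.f_favail / st.f_files if st.f_files else 0.0
        return avail_frac, inode_frac

    if __name__ == "__main__":
        avail_frac, inode_frac = nodefs_fractions()
        signals = {
            "memory.available": mem_available_bytes() < MEMORY_AVAILABLE_MIN,
            "nodefs.available": avail_frac < NODEFS_AVAILABLE_MIN,
            "nodefs.inodesFree": inode_frac < NODEFS_INODES_FREE_MIN,
        }
        for name, breached in signals.items():
            print(f"{name}: {'below hard eviction threshold' if breached else 'ok'}")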
Jan 21 01:01:26.130002 kubelet[3468]: I0121 01:01:26.129944 3468 kubelet.go:2382] "Starting kubelet main sync loop" Jan 21 01:01:26.131833 kubelet[3468]: E0121 01:01:26.131793 3468 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 21 01:01:26.200796 kubelet[3468]: I0121 01:01:26.200684 3468 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 21 01:01:26.200796 kubelet[3468]: I0121 01:01:26.200706 3468 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 21 01:01:26.200796 kubelet[3468]: I0121 01:01:26.200727 3468 state_mem.go:36] "Initialized new in-memory state store" Jan 21 01:01:26.201002 kubelet[3468]: I0121 01:01:26.200936 3468 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 21 01:01:26.201002 kubelet[3468]: I0121 01:01:26.200952 3468 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 21 01:01:26.201002 kubelet[3468]: I0121 01:01:26.200975 3468 policy_none.go:49] "None policy: Start" Jan 21 01:01:26.201002 kubelet[3468]: I0121 01:01:26.200988 3468 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 21 01:01:26.201002 kubelet[3468]: I0121 01:01:26.201001 3468 state_mem.go:35] "Initializing new in-memory state store" Jan 21 01:01:26.201195 kubelet[3468]: I0121 01:01:26.201143 3468 state_mem.go:75] "Updated machine memory state" Jan 21 01:01:26.207824 kubelet[3468]: I0121 01:01:26.207793 3468 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 21 01:01:26.208047 kubelet[3468]: I0121 01:01:26.208006 3468 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 21 01:01:26.208047 kubelet[3468]: I0121 01:01:26.208021 3468 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 21 01:01:26.215458 kubelet[3468]: I0121 01:01:26.214684 3468 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 21 01:01:26.222127 kubelet[3468]: E0121 01:01:26.220573 3468 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 21 01:01:26.232962 kubelet[3468]: I0121 01:01:26.232932 3468 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-28-215" Jan 21 01:01:26.236567 kubelet[3468]: I0121 01:01:26.236541 3468 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-28-215" Jan 21 01:01:26.236780 kubelet[3468]: I0121 01:01:26.236544 3468 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-28-215" Jan 21 01:01:26.264483 kubelet[3468]: E0121 01:01:26.264432 3468 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-28-215\" already exists" pod="kube-system/kube-scheduler-ip-172-31-28-215" Jan 21 01:01:26.300833 kubelet[3468]: I0121 01:01:26.300764 3468 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-28-215" podStartSLOduration=3.300743684 podStartE2EDuration="3.300743684s" podCreationTimestamp="2026-01-21 01:01:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 01:01:26.287125472 +0000 UTC m=+0.342529696" watchObservedRunningTime="2026-01-21 01:01:26.300743684 +0000 UTC m=+0.356147904" Jan 21 01:01:26.301039 kubelet[3468]: I0121 01:01:26.300907 3468 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-28-215" podStartSLOduration=0.30089762 podStartE2EDuration="300.89762ms" podCreationTimestamp="2026-01-21 01:01:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 01:01:26.30050504 +0000 UTC m=+0.355909266" watchObservedRunningTime="2026-01-21 01:01:26.30089762 +0000 UTC m=+0.356301847" Jan 21 01:01:26.322709 kubelet[3468]: I0121 01:01:26.322654 3468 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 21 01:01:26.326632 kubelet[3468]: I0121 01:01:26.325780 3468 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-28-215" Jan 21 01:01:26.342801 kubelet[3468]: I0121 01:01:26.342770 3468 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-28-215" Jan 21 01:01:26.343017 kubelet[3468]: I0121 01:01:26.342857 3468 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-28-215" Jan 21 01:01:26.422603 kubelet[3468]: I0121 01:01:26.422541 3468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fc5ca329622273519eeb84c00162a7e6-ca-certs\") pod \"kube-apiserver-ip-172-31-28-215\" (UID: \"fc5ca329622273519eeb84c00162a7e6\") " pod="kube-system/kube-apiserver-ip-172-31-28-215" Jan 21 01:01:26.422603 kubelet[3468]: I0121 01:01:26.422597 3468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fd8bef70e3a96f325c0739478590c942-k8s-certs\") pod \"kube-controller-manager-ip-172-31-28-215\" (UID: \"fd8bef70e3a96f325c0739478590c942\") " pod="kube-system/kube-controller-manager-ip-172-31-28-215" Jan 21 01:01:26.422815 kubelet[3468]: I0121 01:01:26.422625 3468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/fd8bef70e3a96f325c0739478590c942-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-28-215\" (UID: \"fd8bef70e3a96f325c0739478590c942\") " pod="kube-system/kube-controller-manager-ip-172-31-28-215" Jan 21 01:01:26.422815 kubelet[3468]: I0121 01:01:26.422651 3468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/70460e35dfd0a02b96af697e16499982-kubeconfig\") pod \"kube-scheduler-ip-172-31-28-215\" (UID: \"70460e35dfd0a02b96af697e16499982\") " pod="kube-system/kube-scheduler-ip-172-31-28-215" Jan 21 01:01:26.422815 kubelet[3468]: I0121 01:01:26.422676 3468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fc5ca329622273519eeb84c00162a7e6-k8s-certs\") pod \"kube-apiserver-ip-172-31-28-215\" (UID: \"fc5ca329622273519eeb84c00162a7e6\") " pod="kube-system/kube-apiserver-ip-172-31-28-215" Jan 21 01:01:26.422815 kubelet[3468]: I0121 01:01:26.422697 3468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fc5ca329622273519eeb84c00162a7e6-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-28-215\" (UID: \"fc5ca329622273519eeb84c00162a7e6\") " pod="kube-system/kube-apiserver-ip-172-31-28-215" Jan 21 01:01:26.422815 kubelet[3468]: I0121 01:01:26.422717 3468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fd8bef70e3a96f325c0739478590c942-ca-certs\") pod \"kube-controller-manager-ip-172-31-28-215\" (UID: \"fd8bef70e3a96f325c0739478590c942\") " pod="kube-system/kube-controller-manager-ip-172-31-28-215" Jan 21 01:01:26.423012 kubelet[3468]: I0121 01:01:26.422739 3468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fd8bef70e3a96f325c0739478590c942-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-28-215\" (UID: \"fd8bef70e3a96f325c0739478590c942\") " pod="kube-system/kube-controller-manager-ip-172-31-28-215" Jan 21 01:01:26.423012 kubelet[3468]: I0121 01:01:26.422761 3468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fd8bef70e3a96f325c0739478590c942-kubeconfig\") pod \"kube-controller-manager-ip-172-31-28-215\" (UID: \"fd8bef70e3a96f325c0739478590c942\") " pod="kube-system/kube-controller-manager-ip-172-31-28-215" Jan 21 01:01:26.800940 kubelet[3468]: I0121 01:01:26.800872 3468 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-28-215" podStartSLOduration=0.800854494 podStartE2EDuration="800.854494ms" podCreationTimestamp="2026-01-21 01:01:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 01:01:26.319148724 +0000 UTC m=+0.374552950" watchObservedRunningTime="2026-01-21 01:01:26.800854494 +0000 UTC m=+0.856258718" Jan 21 01:01:30.794785 kubelet[3468]: I0121 01:01:30.794738 3468 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 21 01:01:30.820404 containerd[1962]: time="2026-01-21T01:01:30.820340113Z" level=info msg="No cni config 
template is specified, wait for other system components to drop the config." Jan 21 01:01:30.820792 kubelet[3468]: I0121 01:01:30.820711 3468 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 21 01:01:31.118627 systemd[1]: Created slice kubepods-besteffort-pod1d0f1a89_f143_4aec_9348_6c730059d943.slice - libcontainer container kubepods-besteffort-pod1d0f1a89_f143_4aec_9348_6c730059d943.slice. Jan 21 01:01:31.153746 kubelet[3468]: I0121 01:01:31.153692 3468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/1d0f1a89-f143-4aec-9348-6c730059d943-kube-proxy\") pod \"kube-proxy-5gs87\" (UID: \"1d0f1a89-f143-4aec-9348-6c730059d943\") " pod="kube-system/kube-proxy-5gs87" Jan 21 01:01:31.153746 kubelet[3468]: I0121 01:01:31.153736 3468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1d0f1a89-f143-4aec-9348-6c730059d943-xtables-lock\") pod \"kube-proxy-5gs87\" (UID: \"1d0f1a89-f143-4aec-9348-6c730059d943\") " pod="kube-system/kube-proxy-5gs87" Jan 21 01:01:31.153746 kubelet[3468]: I0121 01:01:31.153756 3468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7s54x\" (UniqueName: \"kubernetes.io/projected/1d0f1a89-f143-4aec-9348-6c730059d943-kube-api-access-7s54x\") pod \"kube-proxy-5gs87\" (UID: \"1d0f1a89-f143-4aec-9348-6c730059d943\") " pod="kube-system/kube-proxy-5gs87" Jan 21 01:01:31.153954 kubelet[3468]: I0121 01:01:31.153774 3468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1d0f1a89-f143-4aec-9348-6c730059d943-lib-modules\") pod \"kube-proxy-5gs87\" (UID: \"1d0f1a89-f143-4aec-9348-6c730059d943\") " pod="kube-system/kube-proxy-5gs87" Jan 21 01:01:31.430916 containerd[1962]: time="2026-01-21T01:01:31.430813537Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5gs87,Uid:1d0f1a89-f143-4aec-9348-6c730059d943,Namespace:kube-system,Attempt:0,}" Jan 21 01:01:31.458096 containerd[1962]: time="2026-01-21T01:01:31.458047711Z" level=info msg="connecting to shim 2b1e8b591b6e6fc5ef48ba03d2984530b55e65f01f7b7ba41ef83cc15c229a53" address="unix:///run/containerd/s/7a6767f9b753564e120743ae5bc02cc92db429206a0534f311775c132c9b3441" namespace=k8s.io protocol=ttrpc version=3 Jan 21 01:01:31.504804 systemd[1]: Started cri-containerd-2b1e8b591b6e6fc5ef48ba03d2984530b55e65f01f7b7ba41ef83cc15c229a53.scope - libcontainer container 2b1e8b591b6e6fc5ef48ba03d2984530b55e65f01f7b7ba41ef83cc15c229a53. 
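At this point the controller manager has assigned the node the pod CIDR 192.168.0.0/24, but no CNI config has been dropped into place yet, so containerd keeps waiting for one; kube-proxy can still be sandboxed and started because its pod typically runs in the host network namespace. A small sketch that reads the node's spec.podCIDR and checks an address against it (the kubernetes Python client and the sample pod IP 192.168.0.10 are assumptions for illustration):

    import ipaddress

    from kubernetes import client, config  # assumed available: pip install kubernetes

    NODE_NAME = "ip-172-31-28-215"  # node name from the log

    def pod_cidr_for_node(name):
        """Read spec.podCIDR, the same value the kubelet logged as newPodCIDR."""
        config.load_kube_config()  # or config.load_incluster_config() inside a pod
        node = client.CoreV1Api().read_node(name)
        return ipaddress.ip_network(node.spec.pod_cidr)

    if __name__ == "__main__":
        cidr = pod_cidr_for_node(NODE_NAME)
        sample = ipaddress.ip_address("192.168.0.10")  # hypothetical pod IP
        print(f"{NODE_NAME} podCIDR = {cidr}")
        print(f"{sample} inside node CIDR: {sample in cidr}")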
Jan 21 01:01:31.516000 audit: BPF prog-id=140 op=LOAD Jan 21 01:01:31.519110 kernel: kauditd_printk_skb: 32 callbacks suppressed Jan 21 01:01:31.519222 kernel: audit: type=1334 audit(1768957291.516:455): prog-id=140 op=LOAD Jan 21 01:01:31.517000 audit: BPF prog-id=141 op=LOAD Jan 21 01:01:31.520419 kernel: audit: type=1334 audit(1768957291.517:456): prog-id=141 op=LOAD Jan 21 01:01:31.523328 kernel: audit: type=1300 audit(1768957291.517:456): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=3525 pid=3536 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:01:31.517000 audit[3536]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=3525 pid=3536 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:01:31.517000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3262316538623539316236653666633565663438626130336432393834 Jan 21 01:01:31.528413 kernel: audit: type=1327 audit(1768957291.517:456): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3262316538623539316236653666633565663438626130336432393834 Jan 21 01:01:31.517000 audit: BPF prog-id=141 op=UNLOAD Jan 21 01:01:31.541416 kernel: audit: type=1334 audit(1768957291.517:457): prog-id=141 op=UNLOAD Jan 21 01:01:31.517000 audit[3536]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3525 pid=3536 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:01:31.547247 kernel: audit: type=1300 audit(1768957291.517:457): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3525 pid=3536 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:01:31.517000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3262316538623539316236653666633565663438626130336432393834 Jan 21 01:01:31.556367 kernel: audit: type=1327 audit(1768957291.517:457): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3262316538623539316236653666633565663438626130336432393834 Jan 21 01:01:31.517000 audit: BPF prog-id=142 op=LOAD Jan 21 01:01:31.565413 kernel: audit: type=1334 audit(1768957291.517:458): prog-id=142 op=LOAD Jan 21 01:01:31.565534 kernel: audit: type=1300 audit(1768957291.517:458): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3525 pid=3536 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:01:31.517000 audit[3536]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3525 pid=3536 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:01:31.571310 kernel: audit: type=1327 audit(1768957291.517:458): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3262316538623539316236653666633565663438626130336432393834 Jan 21 01:01:31.517000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3262316538623539316236653666633565663438626130336432393834 Jan 21 01:01:31.517000 audit: BPF prog-id=143 op=LOAD Jan 21 01:01:31.517000 audit[3536]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=3525 pid=3536 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:01:31.517000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3262316538623539316236653666633565663438626130336432393834 Jan 21 01:01:31.517000 audit: BPF prog-id=143 op=UNLOAD Jan 21 01:01:31.517000 audit[3536]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3525 pid=3536 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:01:31.517000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3262316538623539316236653666633565663438626130336432393834 Jan 21 01:01:31.517000 audit: BPF prog-id=142 op=UNLOAD Jan 21 01:01:31.517000 audit[3536]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3525 pid=3536 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:01:31.517000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3262316538623539316236653666633565663438626130336432393834 Jan 21 01:01:31.517000 audit: BPF prog-id=144 op=LOAD Jan 21 01:01:31.572117 containerd[1962]: time="2026-01-21T01:01:31.572076870Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5gs87,Uid:1d0f1a89-f143-4aec-9348-6c730059d943,Namespace:kube-system,Attempt:0,} returns sandbox id \"2b1e8b591b6e6fc5ef48ba03d2984530b55e65f01f7b7ba41ef83cc15c229a53\"" Jan 21 01:01:31.517000 audit[3536]: SYSCALL arch=c000003e syscall=321 
success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=3525 pid=3536 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:01:31.517000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3262316538623539316236653666633565663438626130336432393834 Jan 21 01:01:31.576511 containerd[1962]: time="2026-01-21T01:01:31.576473126Z" level=info msg="CreateContainer within sandbox \"2b1e8b591b6e6fc5ef48ba03d2984530b55e65f01f7b7ba41ef83cc15c229a53\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 21 01:01:31.616635 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3113867596.mount: Deactivated successfully. Jan 21 01:01:31.625607 containerd[1962]: time="2026-01-21T01:01:31.625544170Z" level=info msg="Container 093e0a7e0c2be7d7eb181aa6ea262d1b8b64e974924b9d279b40338caffd9ada: CDI devices from CRI Config.CDIDevices: []" Jan 21 01:01:31.634321 containerd[1962]: time="2026-01-21T01:01:31.634242578Z" level=info msg="CreateContainer within sandbox \"2b1e8b591b6e6fc5ef48ba03d2984530b55e65f01f7b7ba41ef83cc15c229a53\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"093e0a7e0c2be7d7eb181aa6ea262d1b8b64e974924b9d279b40338caffd9ada\"" Jan 21 01:01:31.634951 containerd[1962]: time="2026-01-21T01:01:31.634928256Z" level=info msg="StartContainer for \"093e0a7e0c2be7d7eb181aa6ea262d1b8b64e974924b9d279b40338caffd9ada\"" Jan 21 01:01:31.637449 containerd[1962]: time="2026-01-21T01:01:31.637401767Z" level=info msg="connecting to shim 093e0a7e0c2be7d7eb181aa6ea262d1b8b64e974924b9d279b40338caffd9ada" address="unix:///run/containerd/s/7a6767f9b753564e120743ae5bc02cc92db429206a0534f311775c132c9b3441" protocol=ttrpc version=3 Jan 21 01:01:31.690649 systemd[1]: Started cri-containerd-093e0a7e0c2be7d7eb181aa6ea262d1b8b64e974924b9d279b40338caffd9ada.scope - libcontainer container 093e0a7e0c2be7d7eb181aa6ea262d1b8b64e974924b9d279b40338caffd9ada. 
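The entries above trace the CRI lifecycle for the kube-proxy pod: RunPodSandbox returns the sandbox ID 2b1e8b59..., CreateContainer places the kube-proxy container 093e0a7e... inside that sandbox, and StartContainer launches it through a runc shim reached over the ttrpc unix socket shown in the "connecting to shim" messages. A sketch of inspecting the same objects afterwards with crictl, assuming crictl is installed on the node and containerd's CRI socket is at its default path:

    import subprocess

    CRICTL = ["crictl", "--runtime-endpoint", "unix:///run/containerd/containerd.sock"]

    # Sandbox and container IDs copied from the log entries above.
    SANDBOX = "2b1e8b591b6e6fc5ef48ba03d2984530b55e65f01f7b7ba41ef83cc15c229a53"
    CONTAINER = "093e0a7e0c2be7d7eb181aa6ea262d1b8b64e974924b9d279b40338caffd9ada"

    def run(args):
        print("$", " ".join(args))
        print(subprocess.run(args, capture_output=True, text=True).stdout)

    if __name__ == "__main__":
        run(CRICTL + ["pods", "--name", "kube-proxy-5gs87"])  # the sandbox created above
        run(CRICTL + ["ps", "--pod", SANDBOX])                # containers inside that sandbox
        run(CRICTL + ["inspect", CONTAINER])                  # the kube-proxy container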
Jan 21 01:01:31.756000 audit: BPF prog-id=145 op=LOAD Jan 21 01:01:31.756000 audit[3563]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3525 pid=3563 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:01:31.756000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039336530613765306332626537643765623138316161366561323632 Jan 21 01:01:31.756000 audit: BPF prog-id=146 op=LOAD Jan 21 01:01:31.756000 audit[3563]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3525 pid=3563 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:01:31.756000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039336530613765306332626537643765623138316161366561323632 Jan 21 01:01:31.756000 audit: BPF prog-id=146 op=UNLOAD Jan 21 01:01:31.756000 audit[3563]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3525 pid=3563 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:01:31.756000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039336530613765306332626537643765623138316161366561323632 Jan 21 01:01:31.756000 audit: BPF prog-id=145 op=UNLOAD Jan 21 01:01:31.756000 audit[3563]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3525 pid=3563 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:01:31.756000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039336530613765306332626537643765623138316161366561323632 Jan 21 01:01:31.756000 audit: BPF prog-id=147 op=LOAD Jan 21 01:01:31.756000 audit[3563]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3525 pid=3563 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:01:31.756000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039336530613765306332626537643765623138316161366561323632 Jan 21 01:01:31.778695 containerd[1962]: time="2026-01-21T01:01:31.778650716Z" level=info msg="StartContainer for 
\"093e0a7e0c2be7d7eb181aa6ea262d1b8b64e974924b9d279b40338caffd9ada\" returns successfully" Jan 21 01:01:31.905148 systemd[1]: Created slice kubepods-besteffort-pod611a37f6_2bab_4980_b128_e9e057421fbd.slice - libcontainer container kubepods-besteffort-pod611a37f6_2bab_4980_b128_e9e057421fbd.slice. Jan 21 01:01:31.958763 kubelet[3468]: I0121 01:01:31.958638 3468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhxvn\" (UniqueName: \"kubernetes.io/projected/611a37f6-2bab-4980-b128-e9e057421fbd-kube-api-access-jhxvn\") pod \"tigera-operator-7dcd859c48-dsbx6\" (UID: \"611a37f6-2bab-4980-b128-e9e057421fbd\") " pod="tigera-operator/tigera-operator-7dcd859c48-dsbx6" Jan 21 01:01:31.958763 kubelet[3468]: I0121 01:01:31.958683 3468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/611a37f6-2bab-4980-b128-e9e057421fbd-var-lib-calico\") pod \"tigera-operator-7dcd859c48-dsbx6\" (UID: \"611a37f6-2bab-4980-b128-e9e057421fbd\") " pod="tigera-operator/tigera-operator-7dcd859c48-dsbx6" Jan 21 01:01:32.211281 containerd[1962]: time="2026-01-21T01:01:32.210760960Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-dsbx6,Uid:611a37f6-2bab-4980-b128-e9e057421fbd,Namespace:tigera-operator,Attempt:0,}" Jan 21 01:01:32.223817 kubelet[3468]: I0121 01:01:32.223638 3468 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-5gs87" podStartSLOduration=1.223621617 podStartE2EDuration="1.223621617s" podCreationTimestamp="2026-01-21 01:01:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 01:01:32.211551924 +0000 UTC m=+6.266956149" watchObservedRunningTime="2026-01-21 01:01:32.223621617 +0000 UTC m=+6.279025839" Jan 21 01:01:32.237096 containerd[1962]: time="2026-01-21T01:01:32.237039078Z" level=info msg="connecting to shim f58941a7c84eb20ccb3ed5359ada202e94cab4f54e8107c00617efda4919f643" address="unix:///run/containerd/s/7fd096551bd9d6c67ab91c4bdfa2e855f5647926d10d32d3a02168baa963ae9c" namespace=k8s.io protocol=ttrpc version=3 Jan 21 01:01:32.266473 systemd[1]: Started cri-containerd-f58941a7c84eb20ccb3ed5359ada202e94cab4f54e8107c00617efda4919f643.scope - libcontainer container f58941a7c84eb20ccb3ed5359ada202e94cab4f54e8107c00617efda4919f643. Jan 21 01:01:32.273835 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount450719092.mount: Deactivated successfully. 
Jan 21 01:01:32.290000 audit: BPF prog-id=148 op=LOAD Jan 21 01:01:32.291000 audit: BPF prog-id=149 op=LOAD Jan 21 01:01:32.291000 audit[3613]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a8238 a2=98 a3=0 items=0 ppid=3602 pid=3613 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:01:32.291000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6635383934316137633834656232306363623365643533353961646132 Jan 21 01:01:32.291000 audit: BPF prog-id=149 op=UNLOAD Jan 21 01:01:32.291000 audit[3613]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3602 pid=3613 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:01:32.291000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6635383934316137633834656232306363623365643533353961646132 Jan 21 01:01:32.291000 audit: BPF prog-id=150 op=LOAD Jan 21 01:01:32.291000 audit[3613]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=3602 pid=3613 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:01:32.291000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6635383934316137633834656232306363623365643533353961646132 Jan 21 01:01:32.291000 audit: BPF prog-id=151 op=LOAD Jan 21 01:01:32.291000 audit[3613]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=3602 pid=3613 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:01:32.291000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6635383934316137633834656232306363623365643533353961646132 Jan 21 01:01:32.291000 audit: BPF prog-id=151 op=UNLOAD Jan 21 01:01:32.291000 audit[3613]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3602 pid=3613 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:01:32.291000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6635383934316137633834656232306363623365643533353961646132 Jan 21 01:01:32.291000 audit: BPF prog-id=150 op=UNLOAD Jan 21 01:01:32.291000 audit[3613]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3602 pid=3613 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:01:32.291000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6635383934316137633834656232306363623365643533353961646132 Jan 21 01:01:32.291000 audit: BPF prog-id=152 op=LOAD Jan 21 01:01:32.291000 audit[3613]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 ppid=3602 pid=3613 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:01:32.291000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6635383934316137633834656232306363623365643533353961646132 Jan 21 01:01:32.331274 containerd[1962]: time="2026-01-21T01:01:32.331233959Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-dsbx6,Uid:611a37f6-2bab-4980-b128-e9e057421fbd,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"f58941a7c84eb20ccb3ed5359ada202e94cab4f54e8107c00617efda4919f643\"" Jan 21 01:01:32.333154 containerd[1962]: time="2026-01-21T01:01:32.333117302Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Jan 21 01:01:33.962307 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2809628877.mount: Deactivated successfully. 
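The pod_startup_latency_tracker entry for kube-proxy-5gs87 a little further up reports podStartSLOduration=1.223621617s; the value is simply the gap between podCreationTimestamp and watchObservedRunningTime, and because firstStartedPulling is the zero time no image pull is included. Reproducing the figure from the logged timestamps (nanoseconds truncated to microseconds):

    from datetime import datetime, timezone

    # Timestamps from the pod_startup_latency_tracker entry for kube-proxy-5gs87.
    created = datetime(2026, 1, 21, 1, 1, 31, tzinfo=timezone.utc)           # podCreationTimestamp
    observed = datetime(2026, 1, 21, 1, 1, 32, 223621, tzinfo=timezone.utc)  # watchObservedRunningTime

    print(f"podStartSLOduration ~= {(observed - created).total_seconds():.6f}s")  # ~= 1.223621s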
Jan 21 01:01:36.183532 containerd[1962]: time="2026-01-21T01:01:36.183477734Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 01:01:36.200582 containerd[1962]: time="2026-01-21T01:01:36.200298929Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=23558205" Jan 21 01:01:36.212138 containerd[1962]: time="2026-01-21T01:01:36.212091477Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 01:01:36.252302 containerd[1962]: time="2026-01-21T01:01:36.251456627Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 01:01:36.252302 containerd[1962]: time="2026-01-21T01:01:36.252179116Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 3.919028106s" Jan 21 01:01:36.252302 containerd[1962]: time="2026-01-21T01:01:36.252221473Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Jan 21 01:01:36.318526 containerd[1962]: time="2026-01-21T01:01:36.318480602Z" level=info msg="CreateContainer within sandbox \"f58941a7c84eb20ccb3ed5359ada202e94cab4f54e8107c00617efda4919f643\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 21 01:01:36.367000 audit[3680]: NETFILTER_CFG table=mangle:54 family=2 entries=1 op=nft_register_chain pid=3680 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 01:01:36.367000 audit[3680]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff6816ccc0 a2=0 a3=7fff6816ccac items=0 ppid=3578 pid=3680 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:01:36.367000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 21 01:01:36.368000 audit[3681]: NETFILTER_CFG table=nat:55 family=2 entries=1 op=nft_register_chain pid=3681 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 01:01:36.368000 audit[3681]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe01944690 a2=0 a3=7ffe0194467c items=0 ppid=3578 pid=3681 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:01:36.368000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 21 01:01:36.370000 audit[3682]: NETFILTER_CFG table=filter:56 family=2 entries=1 op=nft_register_chain pid=3682 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 01:01:36.370000 audit[3682]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff20137780 a2=0 a3=7fff2013776c items=0 ppid=3578 pid=3682 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:01:36.370000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 21 01:01:36.372000 audit[3684]: NETFILTER_CFG table=mangle:57 family=10 entries=1 op=nft_register_chain pid=3684 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 01:01:36.372000 audit[3684]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc2b375d40 a2=0 a3=7ffc2b375d2c items=0 ppid=3578 pid=3684 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:01:36.372000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 21 01:01:36.373000 audit[3685]: NETFILTER_CFG table=nat:58 family=10 entries=1 op=nft_register_chain pid=3685 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 01:01:36.373000 audit[3685]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff8e38eeb0 a2=0 a3=7fff8e38ee9c items=0 ppid=3578 pid=3685 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:01:36.373000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 21 01:01:36.374000 audit[3686]: NETFILTER_CFG table=filter:59 family=10 entries=1 op=nft_register_chain pid=3686 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 01:01:36.374000 audit[3686]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc802e5fe0 a2=0 a3=7ffc802e5fcc items=0 ppid=3578 pid=3686 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:01:36.374000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 21 01:01:36.444199 containerd[1962]: time="2026-01-21T01:01:36.443723314Z" level=info msg="Container 201103dc9e7f8039a1bc1d0da0e54085e85462bd085d806f0a507f55a45eb9ea: CDI devices from CRI Config.CDIDevices: []" Jan 21 01:01:36.501000 audit[3687]: NETFILTER_CFG table=filter:60 family=2 entries=1 op=nft_register_chain pid=3687 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 01:01:36.501000 audit[3687]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7fff87e33980 a2=0 a3=7fff87e3396c items=0 ppid=3578 pid=3687 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:01:36.501000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 21 01:01:36.515762 containerd[1962]: time="2026-01-21T01:01:36.515716018Z" level=info msg="CreateContainer within sandbox \"f58941a7c84eb20ccb3ed5359ada202e94cab4f54e8107c00617efda4919f643\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} 
returns container id \"201103dc9e7f8039a1bc1d0da0e54085e85462bd085d806f0a507f55a45eb9ea\"" Jan 21 01:01:36.517253 containerd[1962]: time="2026-01-21T01:01:36.516467914Z" level=info msg="StartContainer for \"201103dc9e7f8039a1bc1d0da0e54085e85462bd085d806f0a507f55a45eb9ea\"" Jan 21 01:01:36.517724 containerd[1962]: time="2026-01-21T01:01:36.517693706Z" level=info msg="connecting to shim 201103dc9e7f8039a1bc1d0da0e54085e85462bd085d806f0a507f55a45eb9ea" address="unix:///run/containerd/s/7fd096551bd9d6c67ab91c4bdfa2e855f5647926d10d32d3a02168baa963ae9c" protocol=ttrpc version=3 Jan 21 01:01:36.547958 kernel: kauditd_printk_skb: 70 callbacks suppressed Jan 21 01:01:36.548092 kernel: audit: type=1325 audit(1768957296.541:483): table=filter:61 family=2 entries=1 op=nft_register_rule pid=3701 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 01:01:36.541000 audit[3701]: NETFILTER_CFG table=filter:61 family=2 entries=1 op=nft_register_rule pid=3701 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 01:01:36.541000 audit[3701]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffe2d4f7c30 a2=0 a3=7ffe2d4f7c1c items=0 ppid=3578 pid=3701 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:01:36.558305 kernel: audit: type=1300 audit(1768957296.541:483): arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffe2d4f7c30 a2=0 a3=7ffe2d4f7c1c items=0 ppid=3578 pid=3701 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:01:36.541000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Jan 21 01:01:36.564850 systemd[1]: Started cri-containerd-201103dc9e7f8039a1bc1d0da0e54085e85462bd085d806f0a507f55a45eb9ea.scope - libcontainer container 201103dc9e7f8039a1bc1d0da0e54085e85462bd085d806f0a507f55a45eb9ea. 
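The NETFILTER_CFG/SYSCALL records above log each iptables and ip6tables call kube-proxy makes while creating its KUBE-PROXY-CANARY and KUBE-* chains; the PROCTITLE field carries the full command line hex-encoded with NUL-separated arguments. A minimal Python sketch (decode_proctitle is our own helper, not part of auditd) that recovers the readable command from the first KUBE-PROXY-CANARY record above:

```python
# Minimal sketch: an audit PROCTITLE field is the command line hex-encoded
# with NUL bytes between arguments. decode_proctitle is our own helper, not
# an auditd tool; the sample value is copied verbatim from the first
# KUBE-PROXY-CANARY record above.
def decode_proctitle(hex_value: str) -> str:
    raw = bytes.fromhex(hex_value)
    return " ".join(part.decode() for part in raw.split(b"\x00") if part)

sample = ("69707461626C6573002D770035002D5700313030303030"
          "002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65")
print(decode_proctitle(sample))
# -> iptables -w 5 -W 100000 -N KUBE-PROXY-CANARY -t mangle
```

The longer records (for example the "kubernetes externally-visible service" rules) appear cut off because the kernel caps the recorded proctitle length, which is why several of the hex strings above end mid-word.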
Jan 21 01:01:36.565553 kernel: audit: type=1327 audit(1768957296.541:483): proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Jan 21 01:01:36.549000 audit[3704]: NETFILTER_CFG table=filter:62 family=2 entries=1 op=nft_register_rule pid=3704 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 01:01:36.571247 kernel: audit: type=1325 audit(1768957296.549:484): table=filter:62 family=2 entries=1 op=nft_register_rule pid=3704 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 01:01:36.549000 audit[3704]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffe4663a8c0 a2=0 a3=7ffe4663a8ac items=0 ppid=3578 pid=3704 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:01:36.584005 kernel: audit: type=1300 audit(1768957296.549:484): arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffe4663a8c0 a2=0 a3=7ffe4663a8ac items=0 ppid=3578 pid=3704 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:01:36.584125 kernel: audit: type=1327 audit(1768957296.549:484): proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Jan 21 01:01:36.549000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Jan 21 01:01:36.552000 audit[3705]: NETFILTER_CFG table=filter:63 family=2 entries=1 op=nft_register_chain pid=3705 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 01:01:36.601711 kernel: audit: type=1325 audit(1768957296.552:485): table=filter:63 family=2 entries=1 op=nft_register_chain pid=3705 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 01:01:36.601812 kernel: audit: type=1300 audit(1768957296.552:485): arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd9f266b60 a2=0 a3=7ffd9f266b4c items=0 ppid=3578 pid=3705 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:01:36.552000 audit[3705]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd9f266b60 a2=0 a3=7ffd9f266b4c items=0 ppid=3578 pid=3705 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:01:36.607886 kernel: audit: type=1327 audit(1768957296.552:485): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 21 01:01:36.552000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 21 01:01:36.566000 
audit[3707]: NETFILTER_CFG table=filter:64 family=2 entries=1 op=nft_register_rule pid=3707 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 01:01:36.621458 kernel: audit: type=1325 audit(1768957296.566:486): table=filter:64 family=2 entries=1 op=nft_register_rule pid=3707 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 01:01:36.566000 audit[3707]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffcf4080a90 a2=0 a3=7ffcf4080a7c items=0 ppid=3578 pid=3707 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:01:36.566000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 21 01:01:36.573000 audit[3710]: NETFILTER_CFG table=filter:65 family=2 entries=1 op=nft_register_chain pid=3710 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 01:01:36.573000 audit[3710]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffeffab5dd0 a2=0 a3=7ffeffab5dbc items=0 ppid=3578 pid=3710 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:01:36.573000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jan 21 01:01:36.590000 audit[3718]: NETFILTER_CFG table=filter:66 family=2 entries=1 op=nft_register_rule pid=3718 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 01:01:36.590000 audit[3718]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffdec7049d0 a2=0 a3=7ffdec7049bc items=0 ppid=3578 pid=3718 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:01:36.590000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jan 21 01:01:36.594000 audit: BPF prog-id=153 op=LOAD Jan 21 01:01:36.601000 audit: BPF prog-id=154 op=LOAD Jan 21 01:01:36.601000 audit[3690]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=3602 pid=3690 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:01:36.601000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3230313130336463396537663830333961316263316430646130653534 Jan 21 01:01:36.601000 audit: BPF prog-id=154 op=UNLOAD Jan 21 01:01:36.601000 audit[3690]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3602 pid=3690 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:01:36.601000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3230313130336463396537663830333961316263316430646130653534 Jan 21 01:01:36.601000 audit: BPF prog-id=155 op=LOAD Jan 21 01:01:36.601000 audit[3690]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=3602 pid=3690 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:01:36.601000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3230313130336463396537663830333961316263316430646130653534 Jan 21 01:01:36.601000 audit: BPF prog-id=156 op=LOAD Jan 21 01:01:36.601000 audit[3690]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=3602 pid=3690 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:01:36.601000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3230313130336463396537663830333961316263316430646130653534 Jan 21 01:01:36.601000 audit: BPF prog-id=156 op=UNLOAD Jan 21 01:01:36.601000 audit[3690]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3602 pid=3690 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:01:36.601000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3230313130336463396537663830333961316263316430646130653534 Jan 21 01:01:36.601000 audit: BPF prog-id=155 op=UNLOAD Jan 21 01:01:36.601000 audit[3690]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3602 pid=3690 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:01:36.601000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3230313130336463396537663830333961316263316430646130653534 Jan 21 01:01:36.601000 audit: BPF prog-id=157 op=LOAD Jan 21 01:01:36.601000 audit[3690]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=3602 pid=3690 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:01:36.601000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3230313130336463396537663830333961316263316430646130653534 Jan 21 01:01:36.604000 audit[3721]: NETFILTER_CFG table=filter:67 family=2 entries=1 op=nft_register_rule pid=3721 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 01:01:36.604000 audit[3721]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffc16a4af50 a2=0 a3=7ffc16a4af3c items=0 ppid=3578 pid=3721 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:01:36.604000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Jan 21 01:01:36.606000 audit[3722]: NETFILTER_CFG table=filter:68 family=2 entries=1 op=nft_register_chain pid=3722 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 01:01:36.606000 audit[3722]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd73e09c20 a2=0 a3=7ffd73e09c0c items=0 ppid=3578 pid=3722 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:01:36.606000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jan 21 01:01:36.613000 audit[3724]: NETFILTER_CFG table=filter:69 family=2 entries=1 op=nft_register_rule pid=3724 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 01:01:36.613000 audit[3724]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffeedd1ddb0 a2=0 a3=7ffeedd1dd9c items=0 ppid=3578 pid=3724 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:01:36.613000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 21 01:01:36.617000 audit[3725]: NETFILTER_CFG table=filter:70 family=2 entries=1 op=nft_register_chain pid=3725 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 01:01:36.617000 audit[3725]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd17b4a180 a2=0 a3=7ffd17b4a16c items=0 ppid=3578 pid=3725 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:01:36.617000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 21 01:01:36.625000 audit[3728]: NETFILTER_CFG table=filter:71 family=2 entries=1 op=nft_register_rule pid=3728 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 01:01:36.625000 audit[3728]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff3404de40 a2=0 a3=7fff3404de2c items=0 ppid=3578 
pid=3728 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:01:36.625000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 21 01:01:36.633000 audit[3736]: NETFILTER_CFG table=filter:72 family=2 entries=1 op=nft_register_rule pid=3736 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 01:01:36.633000 audit[3736]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffefcebe1f0 a2=0 a3=7ffefcebe1dc items=0 ppid=3578 pid=3736 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:01:36.633000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 21 01:01:36.643000 audit[3739]: NETFILTER_CFG table=filter:73 family=2 entries=1 op=nft_register_rule pid=3739 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 01:01:36.643000 audit[3739]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe3d620200 a2=0 a3=7ffe3d6201ec items=0 ppid=3578 pid=3739 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:01:36.643000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jan 21 01:01:36.646879 containerd[1962]: time="2026-01-21T01:01:36.646841454Z" level=info msg="StartContainer for \"201103dc9e7f8039a1bc1d0da0e54085e85462bd085d806f0a507f55a45eb9ea\" returns successfully" Jan 21 01:01:36.648000 audit[3744]: NETFILTER_CFG table=nat:74 family=2 entries=1 op=nft_register_chain pid=3744 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 01:01:36.648000 audit[3744]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffe31760a20 a2=0 a3=7ffe31760a0c items=0 ppid=3578 pid=3744 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:01:36.648000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jan 21 01:01:36.656000 audit[3746]: NETFILTER_CFG table=nat:75 family=2 entries=1 op=nft_register_rule pid=3746 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 01:01:36.656000 audit[3746]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffca641bdd0 a2=0 a3=7ffca641bdbc items=0 ppid=3578 pid=3746 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 
01:01:36.656000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 21 01:01:36.666000 audit[3749]: NETFILTER_CFG table=nat:76 family=2 entries=1 op=nft_register_rule pid=3749 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 01:01:36.666000 audit[3749]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fffbf51a870 a2=0 a3=7fffbf51a85c items=0 ppid=3578 pid=3749 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:01:36.666000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 21 01:01:36.668000 audit[3750]: NETFILTER_CFG table=nat:77 family=2 entries=1 op=nft_register_chain pid=3750 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 01:01:36.668000 audit[3750]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd3561eac0 a2=0 a3=7ffd3561eaac items=0 ppid=3578 pid=3750 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:01:36.668000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 21 01:01:36.673000 audit[3752]: NETFILTER_CFG table=nat:78 family=2 entries=1 op=nft_register_rule pid=3752 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 01:01:36.673000 audit[3752]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7fff6b030620 a2=0 a3=7fff6b03060c items=0 ppid=3578 pid=3752 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:01:36.673000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 21 01:01:36.778000 audit[3759]: NETFILTER_CFG table=filter:79 family=2 entries=8 op=nft_register_rule pid=3759 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 01:01:36.778000 audit[3759]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffed76d8e70 a2=0 a3=7ffed76d8e5c items=0 ppid=3578 pid=3759 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:01:36.778000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 01:01:36.788000 audit[3759]: NETFILTER_CFG table=nat:80 family=2 entries=14 op=nft_register_chain pid=3759 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 01:01:36.788000 audit[3759]: SYSCALL arch=c000003e syscall=46 success=yes exit=5508 a0=3 a1=7ffed76d8e70 a2=0 a3=7ffed76d8e5c items=0 ppid=3578 pid=3759 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:01:36.788000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 01:01:36.790000 audit[3764]: NETFILTER_CFG table=filter:81 family=10 entries=1 op=nft_register_chain pid=3764 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 01:01:36.790000 audit[3764]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffec51b7830 a2=0 a3=7ffec51b781c items=0 ppid=3578 pid=3764 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:01:36.790000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 21 01:01:36.795000 audit[3766]: NETFILTER_CFG table=filter:82 family=10 entries=2 op=nft_register_chain pid=3766 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 01:01:36.795000 audit[3766]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffdefa61170 a2=0 a3=7ffdefa6115c items=0 ppid=3578 pid=3766 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:01:36.795000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Jan 21 01:01:36.799000 audit[3769]: NETFILTER_CFG table=filter:83 family=10 entries=1 op=nft_register_rule pid=3769 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 01:01:36.799000 audit[3769]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffea58f69a0 a2=0 a3=7ffea58f698c items=0 ppid=3578 pid=3769 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:01:36.799000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Jan 21 01:01:36.801000 audit[3770]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=3770 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 01:01:36.801000 audit[3770]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff344ec670 a2=0 a3=7fff344ec65c items=0 ppid=3578 pid=3770 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:01:36.801000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 21 01:01:36.804000 audit[3772]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=3772 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 01:01:36.804000 
audit[3772]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fff8efa0990 a2=0 a3=7fff8efa097c items=0 ppid=3578 pid=3772 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:01:36.804000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 21 01:01:36.805000 audit[3773]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_chain pid=3773 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 01:01:36.805000 audit[3773]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff83488af0 a2=0 a3=7fff83488adc items=0 ppid=3578 pid=3773 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:01:36.805000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jan 21 01:01:36.808000 audit[3775]: NETFILTER_CFG table=filter:87 family=10 entries=1 op=nft_register_rule pid=3775 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 01:01:36.808000 audit[3775]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffd8f660780 a2=0 a3=7ffd8f66076c items=0 ppid=3578 pid=3775 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:01:36.808000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Jan 21 01:01:36.813000 audit[3778]: NETFILTER_CFG table=filter:88 family=10 entries=2 op=nft_register_chain pid=3778 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 01:01:36.813000 audit[3778]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7ffcd5c2b620 a2=0 a3=7ffcd5c2b60c items=0 ppid=3578 pid=3778 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:01:36.813000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jan 21 01:01:36.814000 audit[3779]: NETFILTER_CFG table=filter:89 family=10 entries=1 op=nft_register_chain pid=3779 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 01:01:36.814000 audit[3779]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe82156c80 a2=0 a3=7ffe82156c6c items=0 ppid=3578 pid=3779 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:01:36.814000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jan 21 01:01:36.817000 audit[3781]: NETFILTER_CFG table=filter:90 family=10 entries=1 op=nft_register_rule pid=3781 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 01:01:36.817000 audit[3781]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffc367ef8d0 a2=0 a3=7ffc367ef8bc items=0 ppid=3578 pid=3781 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:01:36.817000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 21 01:01:36.818000 audit[3782]: NETFILTER_CFG table=filter:91 family=10 entries=1 op=nft_register_chain pid=3782 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 01:01:36.818000 audit[3782]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc31e144f0 a2=0 a3=7ffc31e144dc items=0 ppid=3578 pid=3782 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:01:36.818000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 21 01:01:36.821000 audit[3784]: NETFILTER_CFG table=filter:92 family=10 entries=1 op=nft_register_rule pid=3784 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 01:01:36.821000 audit[3784]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffec95a22f0 a2=0 a3=7ffec95a22dc items=0 ppid=3578 pid=3784 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:01:36.821000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 21 01:01:36.825000 audit[3787]: NETFILTER_CFG table=filter:93 family=10 entries=1 op=nft_register_rule pid=3787 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 01:01:36.825000 audit[3787]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe83e046b0 a2=0 a3=7ffe83e0469c items=0 ppid=3578 pid=3787 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:01:36.825000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jan 21 01:01:36.830000 audit[3790]: NETFILTER_CFG table=filter:94 family=10 entries=1 op=nft_register_rule pid=3790 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 01:01:36.830000 audit[3790]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffd4b31cc20 a2=0 a3=7ffd4b31cc0c 
items=0 ppid=3578 pid=3790 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:01:36.830000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Jan 21 01:01:36.831000 audit[3791]: NETFILTER_CFG table=nat:95 family=10 entries=1 op=nft_register_chain pid=3791 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 01:01:36.831000 audit[3791]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffcc9241650 a2=0 a3=7ffcc924163c items=0 ppid=3578 pid=3791 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:01:36.831000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jan 21 01:01:36.834000 audit[3793]: NETFILTER_CFG table=nat:96 family=10 entries=1 op=nft_register_rule pid=3793 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 01:01:36.834000 audit[3793]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffdb5405f50 a2=0 a3=7ffdb5405f3c items=0 ppid=3578 pid=3793 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:01:36.834000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 21 01:01:36.839000 audit[3796]: NETFILTER_CFG table=nat:97 family=10 entries=1 op=nft_register_rule pid=3796 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 01:01:36.839000 audit[3796]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fffc0caa2b0 a2=0 a3=7fffc0caa29c items=0 ppid=3578 pid=3796 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:01:36.839000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 21 01:01:36.840000 audit[3797]: NETFILTER_CFG table=nat:98 family=10 entries=1 op=nft_register_chain pid=3797 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 01:01:36.840000 audit[3797]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdb0162670 a2=0 a3=7ffdb016265c items=0 ppid=3578 pid=3797 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:01:36.840000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 21 01:01:36.843000 audit[3799]: NETFILTER_CFG table=nat:99 family=10 entries=2 op=nft_register_chain pid=3799 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 01:01:36.843000 audit[3799]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffffd5752b0 a2=0 a3=7ffffd57529c items=0 ppid=3578 pid=3799 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:01:36.843000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 21 01:01:36.845000 audit[3800]: NETFILTER_CFG table=filter:100 family=10 entries=1 op=nft_register_chain pid=3800 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 01:01:36.845000 audit[3800]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe0e2054a0 a2=0 a3=7ffe0e20548c items=0 ppid=3578 pid=3800 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:01:36.845000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 21 01:01:36.848000 audit[3802]: NETFILTER_CFG table=filter:101 family=10 entries=1 op=nft_register_rule pid=3802 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 01:01:36.848000 audit[3802]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffc3ed01c40 a2=0 a3=7ffc3ed01c2c items=0 ppid=3578 pid=3802 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:01:36.848000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 21 01:01:36.852000 audit[3805]: NETFILTER_CFG table=filter:102 family=10 entries=1 op=nft_register_rule pid=3805 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 01:01:36.852000 audit[3805]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffc8ae0bd90 a2=0 a3=7ffc8ae0bd7c items=0 ppid=3578 pid=3805 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:01:36.852000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 21 01:01:36.859000 audit[3807]: NETFILTER_CFG table=filter:103 family=10 entries=3 op=nft_register_rule pid=3807 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 21 01:01:36.859000 audit[3807]: SYSCALL arch=c000003e syscall=46 success=yes exit=2088 a0=3 a1=7ffeeb1f9e50 a2=0 a3=7ffeeb1f9e3c items=0 ppid=3578 pid=3807 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:01:36.859000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 01:01:36.860000 audit[3807]: NETFILTER_CFG table=nat:104 family=10 entries=7 op=nft_register_chain 
pid=3807 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 21 01:01:36.860000 audit[3807]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7ffeeb1f9e50 a2=0 a3=7ffeeb1f9e3c items=0 ppid=3578 pid=3807 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:01:36.860000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 01:01:37.209861 kubelet[3468]: I0121 01:01:37.208692 3468 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-dsbx6" podStartSLOduration=2.288280618 podStartE2EDuration="6.208678557s" podCreationTimestamp="2026-01-21 01:01:31 +0000 UTC" firstStartedPulling="2026-01-21 01:01:32.33273488 +0000 UTC m=+6.388139085" lastFinishedPulling="2026-01-21 01:01:36.253132808 +0000 UTC m=+10.308537024" observedRunningTime="2026-01-21 01:01:37.208397948 +0000 UTC m=+11.263802164" watchObservedRunningTime="2026-01-21 01:01:37.208678557 +0000 UTC m=+11.264082782" Jan 21 01:02:11.733396 sudo[2329]: pam_unix(sudo:session): session closed for user root Jan 21 01:02:11.764812 kernel: kauditd_printk_skb: 144 callbacks suppressed Jan 21 01:02:11.764906 kernel: audit: type=1106 audit(1768957331.732:535): pid=2329 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 21 01:02:11.764948 kernel: audit: type=1104 audit(1768957331.733:536): pid=2329 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 21 01:02:11.732000 audit[2329]: USER_END pid=2329 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 21 01:02:11.733000 audit[2329]: CRED_DISP pid=2329 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 21 01:02:11.817649 sshd[2328]: Connection closed by 68.220.241.50 port 51386 Jan 21 01:02:11.818107 sshd-session[2324]: pam_unix(sshd:session): session closed for user core Jan 21 01:02:11.820000 audit[2324]: USER_END pid=2324 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 01:02:11.830250 kernel: audit: type=1106 audit(1768957331.820:537): pid=2324 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 01:02:11.832383 systemd[1]: sshd@6-172.31.28.215:22-68.220.241.50:51386.service: Deactivated successfully. 
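The pod_startup_latency_tracker record above for tigera-operator-7dcd859c48-dsbx6 reports both podStartSLOduration and podStartE2EDuration; the figures are consistent with the SLO duration being the end-to-end startup time minus the image-pull window (our reading, not something the log states). A small check with the values copied from that record:

```python
# Values copied from the pod_startup_latency_tracker record above; the reading
# that podStartSLOduration = podStartE2EDuration minus the image-pull window
# is our interpretation, the arithmetic only shows the numbers agree with it.
first_started_pulling = 6.388139085    # monotonic offsets (m=+...) in the record
last_finished_pulling = 10.308537024
pod_start_e2e         = 6.208678557    # podStartE2EDuration, seconds
pod_start_slo         = 2.288280618    # podStartSLOduration, seconds

pull_window = last_finished_pulling - first_started_pulling
print(f"image pull window:      {pull_window:.9f} s")        # ~3.920397939 s
print(f"E2E minus pull window:  {pod_start_e2e - pull_window:.9f} s")
assert abs((pod_start_e2e - pull_window) - pod_start_slo) < 1e-6
```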
Jan 21 01:02:11.836225 systemd[1]: session-8.scope: Deactivated successfully. Jan 21 01:02:11.820000 audit[2324]: CRED_DISP pid=2324 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 01:02:11.843344 kernel: audit: type=1104 audit(1768957331.820:538): pid=2324 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 01:02:11.842138 systemd[1]: session-8.scope: Consumed 4.990s CPU time, 150.2M memory peak. Jan 21 01:02:11.844251 systemd-logind[1938]: Session 8 logged out. Waiting for processes to exit. Jan 21 01:02:11.831000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-172.31.28.215:22-68.220.241.50:51386 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:02:11.887114 kernel: audit: type=1131 audit(1768957331.831:539): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-172.31.28.215:22-68.220.241.50:51386 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:02:11.853786 systemd-logind[1938]: Removed session 8. Jan 21 01:02:12.473000 audit[3862]: NETFILTER_CFG table=filter:105 family=2 entries=15 op=nft_register_rule pid=3862 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 01:02:12.479316 kernel: audit: type=1325 audit(1768957332.473:540): table=filter:105 family=2 entries=15 op=nft_register_rule pid=3862 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 01:02:12.473000 audit[3862]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7fffb69484c0 a2=0 a3=7fffb69484ac items=0 ppid=3578 pid=3862 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:12.488243 kernel: audit: type=1300 audit(1768957332.473:540): arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7fffb69484c0 a2=0 a3=7fffb69484ac items=0 ppid=3578 pid=3862 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:12.473000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 01:02:12.497369 kernel: audit: type=1327 audit(1768957332.473:540): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 01:02:12.497510 kernel: audit: type=1325 audit(1768957332.480:541): table=nat:106 family=2 entries=12 op=nft_register_rule pid=3862 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 01:02:12.480000 audit[3862]: NETFILTER_CFG table=nat:106 family=2 entries=12 op=nft_register_rule pid=3862 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 01:02:12.511257 kernel: audit: type=1300 audit(1768957332.480:541): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fffb69484c0 a2=0 a3=0 items=0 ppid=3578 pid=3862 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:12.480000 audit[3862]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fffb69484c0 a2=0 a3=0 items=0 ppid=3578 pid=3862 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:12.480000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 01:02:12.540000 audit[3864]: NETFILTER_CFG table=filter:107 family=2 entries=16 op=nft_register_rule pid=3864 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 01:02:12.540000 audit[3864]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffc5ce8cba0 a2=0 a3=7ffc5ce8cb8c items=0 ppid=3578 pid=3864 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:12.540000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 01:02:12.546000 audit[3864]: NETFILTER_CFG table=nat:108 family=2 entries=12 op=nft_register_rule pid=3864 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 01:02:12.546000 audit[3864]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc5ce8cba0 a2=0 a3=0 items=0 ppid=3578 pid=3864 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:12.546000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 01:02:16.278000 audit[3866]: NETFILTER_CFG table=filter:109 family=2 entries=17 op=nft_register_rule pid=3866 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 01:02:16.278000 audit[3866]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffc5a818430 a2=0 a3=7ffc5a81841c items=0 ppid=3578 pid=3866 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:16.278000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 01:02:16.283000 audit[3866]: NETFILTER_CFG table=nat:110 family=2 entries=12 op=nft_register_rule pid=3866 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 01:02:16.283000 audit[3866]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc5a818430 a2=0 a3=0 items=0 ppid=3578 pid=3866 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:16.283000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 01:02:16.314000 audit[3868]: NETFILTER_CFG table=filter:111 family=2 entries=18 op=nft_register_rule pid=3868 subj=system_u:system_r:kernel_t:s0 
comm="iptables-restor" Jan 21 01:02:16.314000 audit[3868]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffce7384ee0 a2=0 a3=7ffce7384ecc items=0 ppid=3578 pid=3868 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:16.314000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 01:02:16.316000 audit[3868]: NETFILTER_CFG table=nat:112 family=2 entries=12 op=nft_register_rule pid=3868 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 01:02:16.316000 audit[3868]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffce7384ee0 a2=0 a3=0 items=0 ppid=3578 pid=3868 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:16.316000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 01:02:17.354270 kernel: kauditd_printk_skb: 19 callbacks suppressed Jan 21 01:02:17.354411 kernel: audit: type=1325 audit(1768957337.348:548): table=filter:113 family=2 entries=19 op=nft_register_rule pid=3870 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 01:02:17.348000 audit[3870]: NETFILTER_CFG table=filter:113 family=2 entries=19 op=nft_register_rule pid=3870 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 01:02:17.348000 audit[3870]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffd6785a710 a2=0 a3=7ffd6785a6fc items=0 ppid=3578 pid=3870 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:17.357000 kernel: audit: type=1300 audit(1768957337.348:548): arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffd6785a710 a2=0 a3=7ffd6785a6fc items=0 ppid=3578 pid=3870 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:17.348000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 01:02:17.362813 kernel: audit: type=1327 audit(1768957337.348:548): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 01:02:17.364000 audit[3870]: NETFILTER_CFG table=nat:114 family=2 entries=12 op=nft_register_rule pid=3870 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 01:02:17.364000 audit[3870]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd6785a710 a2=0 a3=0 items=0 ppid=3578 pid=3870 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:17.371058 kernel: audit: type=1325 audit(1768957337.364:549): table=nat:114 family=2 entries=12 op=nft_register_rule pid=3870 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 01:02:17.371149 kernel: audit: type=1300 
audit(1768957337.364:549): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd6785a710 a2=0 a3=0 items=0 ppid=3578 pid=3870 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:17.364000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 01:02:17.376655 kernel: audit: type=1327 audit(1768957337.364:549): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 01:02:18.327000 audit[3872]: NETFILTER_CFG table=filter:115 family=2 entries=21 op=nft_register_rule pid=3872 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 01:02:18.327000 audit[3872]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffd5ae39380 a2=0 a3=7ffd5ae3936c items=0 ppid=3578 pid=3872 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:18.333145 kernel: audit: type=1325 audit(1768957338.327:550): table=filter:115 family=2 entries=21 op=nft_register_rule pid=3872 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 01:02:18.333442 kernel: audit: type=1300 audit(1768957338.327:550): arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffd5ae39380 a2=0 a3=7ffd5ae3936c items=0 ppid=3578 pid=3872 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:18.338246 kernel: audit: type=1327 audit(1768957338.327:550): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 01:02:18.327000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 01:02:18.339000 audit[3872]: NETFILTER_CFG table=nat:116 family=2 entries=12 op=nft_register_rule pid=3872 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 01:02:18.341745 kernel: audit: type=1325 audit(1768957338.339:551): table=nat:116 family=2 entries=12 op=nft_register_rule pid=3872 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 01:02:18.339000 audit[3872]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd5ae39380 a2=0 a3=0 items=0 ppid=3578 pid=3872 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:18.339000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 01:02:18.389355 systemd[1]: Created slice kubepods-besteffort-pod7be49ad9_f941_4ce6_8938_2235c5cf3938.slice - libcontainer container kubepods-besteffort-pod7be49ad9_f941_4ce6_8938_2235c5cf3938.slice. 
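The iptables-restore records above show kube-proxy resyncing its rules with --noflush --counters, the IPv4 filter table growing from 15 to 21 entries across successive syncs while the nat table stays at 12. A short Python sketch (our own helper, assuming journal text in exactly the format shown here) that pulls the table, address family, and entry count out of such records:

```python
import re

# Minimal sketch, assuming journal text in the exact format shown above
# ("... NETFILTER_CFG table=filter:113 family=2 entries=19 op=..."): extract
# the table name, address family (2 = AF_INET, 10 = AF_INET6) and entry count
# for each record so the growth of the filter table across kube-proxy resyncs
# is easy to see.
RECORD = re.compile(r"NETFILTER_CFG table=(\w+):\d+ family=(\d+) entries=(\d+)")

def tally(journal_text: str) -> list[tuple[str, str, int]]:
    return [
        (table, "ipv4" if family == "2" else "ipv6", int(entries))
        for table, family, entries in RECORD.findall(journal_text)
    ]

# Two records copied from the log above:
sample = ("audit[3870]: NETFILTER_CFG table=filter:113 family=2 entries=19 op=nft_register_rule "
          "audit[3870]: NETFILTER_CFG table=nat:114 family=2 entries=12 op=nft_register_rule")
print(tally(sample))   # -> [('filter', 'ipv4', 19), ('nat', 'ipv4', 12)]
```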
Jan 21 01:02:18.479608 kubelet[3468]: I0121 01:02:18.479559 3468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7cpwk\" (UniqueName: \"kubernetes.io/projected/7be49ad9-f941-4ce6-8938-2235c5cf3938-kube-api-access-7cpwk\") pod \"calico-typha-6684dd4789-g9kfn\" (UID: \"7be49ad9-f941-4ce6-8938-2235c5cf3938\") " pod="calico-system/calico-typha-6684dd4789-g9kfn" Jan 21 01:02:18.480258 kubelet[3468]: I0121 01:02:18.479670 3468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7be49ad9-f941-4ce6-8938-2235c5cf3938-tigera-ca-bundle\") pod \"calico-typha-6684dd4789-g9kfn\" (UID: \"7be49ad9-f941-4ce6-8938-2235c5cf3938\") " pod="calico-system/calico-typha-6684dd4789-g9kfn" Jan 21 01:02:18.480258 kubelet[3468]: I0121 01:02:18.479704 3468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/7be49ad9-f941-4ce6-8938-2235c5cf3938-typha-certs\") pod \"calico-typha-6684dd4789-g9kfn\" (UID: \"7be49ad9-f941-4ce6-8938-2235c5cf3938\") " pod="calico-system/calico-typha-6684dd4789-g9kfn" Jan 21 01:02:18.652419 systemd[1]: Created slice kubepods-besteffort-podbc1ac585_7f06_4a00_82b5_d4dbcb147458.slice - libcontainer container kubepods-besteffort-podbc1ac585_7f06_4a00_82b5_d4dbcb147458.slice. Jan 21 01:02:18.682651 kubelet[3468]: I0121 01:02:18.682370 3468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/bc1ac585-7f06-4a00-82b5-d4dbcb147458-node-certs\") pod \"calico-node-jprlx\" (UID: \"bc1ac585-7f06-4a00-82b5-d4dbcb147458\") " pod="calico-system/calico-node-jprlx" Jan 21 01:02:18.682651 kubelet[3468]: I0121 01:02:18.682404 3468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/bc1ac585-7f06-4a00-82b5-d4dbcb147458-policysync\") pod \"calico-node-jprlx\" (UID: \"bc1ac585-7f06-4a00-82b5-d4dbcb147458\") " pod="calico-system/calico-node-jprlx" Jan 21 01:02:18.682651 kubelet[3468]: I0121 01:02:18.682425 3468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bc1ac585-7f06-4a00-82b5-d4dbcb147458-lib-modules\") pod \"calico-node-jprlx\" (UID: \"bc1ac585-7f06-4a00-82b5-d4dbcb147458\") " pod="calico-system/calico-node-jprlx" Jan 21 01:02:18.682651 kubelet[3468]: I0121 01:02:18.682448 3468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/bc1ac585-7f06-4a00-82b5-d4dbcb147458-cni-net-dir\") pod \"calico-node-jprlx\" (UID: \"bc1ac585-7f06-4a00-82b5-d4dbcb147458\") " pod="calico-system/calico-node-jprlx" Jan 21 01:02:18.682651 kubelet[3468]: I0121 01:02:18.682464 3468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bc1ac585-7f06-4a00-82b5-d4dbcb147458-tigera-ca-bundle\") pod \"calico-node-jprlx\" (UID: \"bc1ac585-7f06-4a00-82b5-d4dbcb147458\") " pod="calico-system/calico-node-jprlx" Jan 21 01:02:18.682884 kubelet[3468]: I0121 01:02:18.682481 3468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: 
\"kubernetes.io/host-path/bc1ac585-7f06-4a00-82b5-d4dbcb147458-cni-log-dir\") pod \"calico-node-jprlx\" (UID: \"bc1ac585-7f06-4a00-82b5-d4dbcb147458\") " pod="calico-system/calico-node-jprlx" Jan 21 01:02:18.682884 kubelet[3468]: I0121 01:02:18.682496 3468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/bc1ac585-7f06-4a00-82b5-d4dbcb147458-var-lib-calico\") pod \"calico-node-jprlx\" (UID: \"bc1ac585-7f06-4a00-82b5-d4dbcb147458\") " pod="calico-system/calico-node-jprlx" Jan 21 01:02:18.682884 kubelet[3468]: I0121 01:02:18.682513 3468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/bc1ac585-7f06-4a00-82b5-d4dbcb147458-var-run-calico\") pod \"calico-node-jprlx\" (UID: \"bc1ac585-7f06-4a00-82b5-d4dbcb147458\") " pod="calico-system/calico-node-jprlx" Jan 21 01:02:18.682884 kubelet[3468]: I0121 01:02:18.682550 3468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bc1ac585-7f06-4a00-82b5-d4dbcb147458-xtables-lock\") pod \"calico-node-jprlx\" (UID: \"bc1ac585-7f06-4a00-82b5-d4dbcb147458\") " pod="calico-system/calico-node-jprlx" Jan 21 01:02:18.682884 kubelet[3468]: I0121 01:02:18.682573 3468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7xz8\" (UniqueName: \"kubernetes.io/projected/bc1ac585-7f06-4a00-82b5-d4dbcb147458-kube-api-access-j7xz8\") pod \"calico-node-jprlx\" (UID: \"bc1ac585-7f06-4a00-82b5-d4dbcb147458\") " pod="calico-system/calico-node-jprlx" Jan 21 01:02:18.683007 kubelet[3468]: I0121 01:02:18.682591 3468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/bc1ac585-7f06-4a00-82b5-d4dbcb147458-cni-bin-dir\") pod \"calico-node-jprlx\" (UID: \"bc1ac585-7f06-4a00-82b5-d4dbcb147458\") " pod="calico-system/calico-node-jprlx" Jan 21 01:02:18.683007 kubelet[3468]: I0121 01:02:18.682606 3468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/bc1ac585-7f06-4a00-82b5-d4dbcb147458-flexvol-driver-host\") pod \"calico-node-jprlx\" (UID: \"bc1ac585-7f06-4a00-82b5-d4dbcb147458\") " pod="calico-system/calico-node-jprlx" Jan 21 01:02:18.695907 containerd[1962]: time="2026-01-21T01:02:18.695442057Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6684dd4789-g9kfn,Uid:7be49ad9-f941-4ce6-8938-2235c5cf3938,Namespace:calico-system,Attempt:0,}" Jan 21 01:02:18.752882 containerd[1962]: time="2026-01-21T01:02:18.752781383Z" level=info msg="connecting to shim 56e3d56b34b4f48d9f91222f300d0d1e2bb07c354be1440458ddad37d2ea470c" address="unix:///run/containerd/s/ae2d86cf33092bed3b611746faf64d071fca780c295a0784ac36ffac7dcfbf45" namespace=k8s.io protocol=ttrpc version=3 Jan 21 01:02:18.826528 systemd[1]: Started cri-containerd-56e3d56b34b4f48d9f91222f300d0d1e2bb07c354be1440458ddad37d2ea470c.scope - libcontainer container 56e3d56b34b4f48d9f91222f300d0d1e2bb07c354be1440458ddad37d2ea470c. 
Jan 21 01:02:18.852409 kubelet[3468]: E0121 01:02:18.852346 3468 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9xsfz" podUID="76d94f9b-071e-45b7-9881-314d22adc37f" Jan 21 01:02:18.885285 kubelet[3468]: E0121 01:02:18.885199 3468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 01:02:18.885285 kubelet[3468]: W0121 01:02:18.885286 3468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 01:02:18.885614 kubelet[3468]: E0121 01:02:18.885322 3468 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 01:02:18.887334 kubelet[3468]: E0121 01:02:18.887261 3468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 01:02:18.887671 kubelet[3468]: W0121 01:02:18.887287 3468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 01:02:18.887671 kubelet[3468]: E0121 01:02:18.887444 3468 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 01:02:18.888493 kubelet[3468]: E0121 01:02:18.888397 3468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 01:02:18.888493 kubelet[3468]: W0121 01:02:18.888416 3468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 01:02:18.888493 kubelet[3468]: E0121 01:02:18.888434 3468 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 01:02:18.889370 kubelet[3468]: E0121 01:02:18.889100 3468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 01:02:18.889370 kubelet[3468]: W0121 01:02:18.889117 3468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 01:02:18.889370 kubelet[3468]: E0121 01:02:18.889136 3468 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 21 01:02:18.890504 kubelet[3468]: E0121 01:02:18.890416 3468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 01:02:18.890504 kubelet[3468]: W0121 01:02:18.890435 3468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 01:02:18.890504 kubelet[3468]: E0121 01:02:18.890451 3468 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 01:02:18.890991 kubelet[3468]: E0121 01:02:18.890897 3468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 01:02:18.890991 kubelet[3468]: W0121 01:02:18.890912 3468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 01:02:18.890991 kubelet[3468]: E0121 01:02:18.890927 3468 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 01:02:18.891464 kubelet[3468]: E0121 01:02:18.891442 3468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 01:02:18.891464 kubelet[3468]: W0121 01:02:18.891458 3468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 01:02:18.891629 kubelet[3468]: E0121 01:02:18.891473 3468 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 01:02:18.891882 kubelet[3468]: E0121 01:02:18.891862 3468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 01:02:18.891882 kubelet[3468]: W0121 01:02:18.891880 3468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 01:02:18.891987 kubelet[3468]: E0121 01:02:18.891895 3468 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 01:02:18.894227 kubelet[3468]: E0121 01:02:18.893516 3468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 01:02:18.894227 kubelet[3468]: W0121 01:02:18.893533 3468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 01:02:18.894227 kubelet[3468]: E0121 01:02:18.893550 3468 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 21 01:02:18.894227 kubelet[3468]: E0121 01:02:18.893756 3468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 01:02:18.894227 kubelet[3468]: W0121 01:02:18.893765 3468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 01:02:18.894227 kubelet[3468]: E0121 01:02:18.893775 3468 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 01:02:18.894227 kubelet[3468]: E0121 01:02:18.894002 3468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 01:02:18.894227 kubelet[3468]: W0121 01:02:18.894011 3468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 01:02:18.894227 kubelet[3468]: E0121 01:02:18.894023 3468 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 01:02:18.894227 kubelet[3468]: E0121 01:02:18.894203 3468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 01:02:18.894708 kubelet[3468]: W0121 01:02:18.894233 3468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 01:02:18.894708 kubelet[3468]: E0121 01:02:18.894247 3468 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 01:02:18.894708 kubelet[3468]: E0121 01:02:18.894509 3468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 01:02:18.894708 kubelet[3468]: W0121 01:02:18.894520 3468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 01:02:18.894708 kubelet[3468]: E0121 01:02:18.894532 3468 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 01:02:18.894932 kubelet[3468]: E0121 01:02:18.894727 3468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 01:02:18.894932 kubelet[3468]: W0121 01:02:18.894737 3468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 01:02:18.894932 kubelet[3468]: E0121 01:02:18.894748 3468 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 21 01:02:18.895052 kubelet[3468]: E0121 01:02:18.894938 3468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 01:02:18.895052 kubelet[3468]: W0121 01:02:18.894947 3468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 01:02:18.895052 kubelet[3468]: E0121 01:02:18.894959 3468 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 01:02:18.895184 kubelet[3468]: E0121 01:02:18.895147 3468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 01:02:18.895184 kubelet[3468]: W0121 01:02:18.895156 3468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 01:02:18.895184 kubelet[3468]: E0121 01:02:18.895168 3468 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 01:02:18.896243 kubelet[3468]: E0121 01:02:18.895461 3468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 01:02:18.896243 kubelet[3468]: W0121 01:02:18.895474 3468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 01:02:18.896243 kubelet[3468]: E0121 01:02:18.895487 3468 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 01:02:18.896243 kubelet[3468]: E0121 01:02:18.895676 3468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 01:02:18.896243 kubelet[3468]: W0121 01:02:18.895685 3468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 01:02:18.896243 kubelet[3468]: E0121 01:02:18.895696 3468 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 01:02:18.896243 kubelet[3468]: E0121 01:02:18.895881 3468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 01:02:18.896243 kubelet[3468]: W0121 01:02:18.895893 3468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 01:02:18.896243 kubelet[3468]: E0121 01:02:18.895904 3468 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 21 01:02:18.896243 kubelet[3468]: E0121 01:02:18.896101 3468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 01:02:18.896814 kubelet[3468]: W0121 01:02:18.896111 3468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 01:02:18.896814 kubelet[3468]: E0121 01:02:18.896122 3468 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 01:02:18.896814 kubelet[3468]: E0121 01:02:18.896502 3468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 01:02:18.896814 kubelet[3468]: W0121 01:02:18.896516 3468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 01:02:18.896814 kubelet[3468]: E0121 01:02:18.896531 3468 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 01:02:18.896814 kubelet[3468]: I0121 01:02:18.896577 3468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/76d94f9b-071e-45b7-9881-314d22adc37f-registration-dir\") pod \"csi-node-driver-9xsfz\" (UID: \"76d94f9b-071e-45b7-9881-314d22adc37f\") " pod="calico-system/csi-node-driver-9xsfz" Jan 21 01:02:18.896814 kubelet[3468]: E0121 01:02:18.896805 3468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 01:02:18.896814 kubelet[3468]: W0121 01:02:18.896815 3468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 01:02:18.897166 kubelet[3468]: E0121 01:02:18.896842 3468 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 01:02:18.897166 kubelet[3468]: E0121 01:02:18.897048 3468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 01:02:18.897166 kubelet[3468]: W0121 01:02:18.897057 3468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 01:02:18.897166 kubelet[3468]: E0121 01:02:18.897081 3468 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 21 01:02:18.897363 kubelet[3468]: E0121 01:02:18.897322 3468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 01:02:18.897363 kubelet[3468]: W0121 01:02:18.897332 3468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 01:02:18.897363 kubelet[3468]: E0121 01:02:18.897344 3468 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 01:02:18.897574 kubelet[3468]: I0121 01:02:18.897383 3468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/76d94f9b-071e-45b7-9881-314d22adc37f-kubelet-dir\") pod \"csi-node-driver-9xsfz\" (UID: \"76d94f9b-071e-45b7-9881-314d22adc37f\") " pod="calico-system/csi-node-driver-9xsfz" Jan 21 01:02:18.897634 kubelet[3468]: E0121 01:02:18.897612 3468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 01:02:18.897634 kubelet[3468]: W0121 01:02:18.897624 3468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 01:02:18.897722 kubelet[3468]: E0121 01:02:18.897671 3468 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 01:02:18.898892 kubelet[3468]: I0121 01:02:18.897774 3468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7shk\" (UniqueName: \"kubernetes.io/projected/76d94f9b-071e-45b7-9881-314d22adc37f-kube-api-access-z7shk\") pod \"csi-node-driver-9xsfz\" (UID: \"76d94f9b-071e-45b7-9881-314d22adc37f\") " pod="calico-system/csi-node-driver-9xsfz" Jan 21 01:02:18.898892 kubelet[3468]: E0121 01:02:18.898867 3468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 01:02:18.898892 kubelet[3468]: W0121 01:02:18.898880 3468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 01:02:18.899051 kubelet[3468]: E0121 01:02:18.898901 3468 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 01:02:18.899543 kubelet[3468]: E0121 01:02:18.899410 3468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 01:02:18.899543 kubelet[3468]: W0121 01:02:18.899428 3468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 01:02:18.899843 kubelet[3468]: E0121 01:02:18.899816 3468 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 21 01:02:18.900110 kubelet[3468]: E0121 01:02:18.899926 3468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 01:02:18.900110 kubelet[3468]: W0121 01:02:18.899940 3468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 01:02:18.900110 kubelet[3468]: E0121 01:02:18.900070 3468 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 01:02:18.900110 kubelet[3468]: I0121 01:02:18.900100 3468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/76d94f9b-071e-45b7-9881-314d22adc37f-varrun\") pod \"csi-node-driver-9xsfz\" (UID: \"76d94f9b-071e-45b7-9881-314d22adc37f\") " pod="calico-system/csi-node-driver-9xsfz" Jan 21 01:02:18.900657 kubelet[3468]: E0121 01:02:18.900493 3468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 01:02:18.900657 kubelet[3468]: W0121 01:02:18.900507 3468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 01:02:18.900657 kubelet[3468]: E0121 01:02:18.900613 3468 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 01:02:18.901499 kubelet[3468]: E0121 01:02:18.901284 3468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 01:02:18.901499 kubelet[3468]: W0121 01:02:18.901318 3468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 01:02:18.901499 kubelet[3468]: E0121 01:02:18.901335 3468 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 01:02:18.901658 kubelet[3468]: E0121 01:02:18.901539 3468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 01:02:18.901658 kubelet[3468]: W0121 01:02:18.901550 3468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 01:02:18.901658 kubelet[3468]: E0121 01:02:18.901565 3468 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 21 01:02:18.901658 kubelet[3468]: I0121 01:02:18.901590 3468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/76d94f9b-071e-45b7-9881-314d22adc37f-socket-dir\") pod \"csi-node-driver-9xsfz\" (UID: \"76d94f9b-071e-45b7-9881-314d22adc37f\") " pod="calico-system/csi-node-driver-9xsfz" Jan 21 01:02:18.901847 kubelet[3468]: E0121 01:02:18.901797 3468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 01:02:18.901847 kubelet[3468]: W0121 01:02:18.901807 3468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 01:02:18.901847 kubelet[3468]: E0121 01:02:18.901822 3468 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 01:02:18.904234 kubelet[3468]: E0121 01:02:18.901987 3468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 01:02:18.904234 kubelet[3468]: W0121 01:02:18.901997 3468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 01:02:18.904234 kubelet[3468]: E0121 01:02:18.902007 3468 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 01:02:18.904234 kubelet[3468]: E0121 01:02:18.902176 3468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 01:02:18.904234 kubelet[3468]: W0121 01:02:18.902185 3468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 01:02:18.904234 kubelet[3468]: E0121 01:02:18.902196 3468 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 01:02:18.904234 kubelet[3468]: E0121 01:02:18.902405 3468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 01:02:18.904234 kubelet[3468]: W0121 01:02:18.902415 3468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 01:02:18.904234 kubelet[3468]: E0121 01:02:18.902428 3468 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 21 01:02:18.913000 audit: BPF prog-id=158 op=LOAD Jan 21 01:02:18.914000 audit: BPF prog-id=159 op=LOAD Jan 21 01:02:18.914000 audit[3896]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=3883 pid=3896 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:18.914000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3536653364353662333462346634386439663931323232663330306430 Jan 21 01:02:18.914000 audit: BPF prog-id=159 op=UNLOAD Jan 21 01:02:18.914000 audit[3896]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3883 pid=3896 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:18.914000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3536653364353662333462346634386439663931323232663330306430 Jan 21 01:02:18.916000 audit: BPF prog-id=160 op=LOAD Jan 21 01:02:18.916000 audit[3896]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=3883 pid=3896 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:18.916000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3536653364353662333462346634386439663931323232663330306430 Jan 21 01:02:18.916000 audit: BPF prog-id=161 op=LOAD Jan 21 01:02:18.916000 audit[3896]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=3883 pid=3896 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:18.916000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3536653364353662333462346634386439663931323232663330306430 Jan 21 01:02:18.916000 audit: BPF prog-id=161 op=UNLOAD Jan 21 01:02:18.916000 audit[3896]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3883 pid=3896 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:18.916000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3536653364353662333462346634386439663931323232663330306430 Jan 21 01:02:18.916000 audit: BPF prog-id=160 op=UNLOAD Jan 21 
01:02:18.916000 audit[3896]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3883 pid=3896 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:18.916000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3536653364353662333462346634386439663931323232663330306430 Jan 21 01:02:18.916000 audit: BPF prog-id=162 op=LOAD Jan 21 01:02:18.916000 audit[3896]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=3883 pid=3896 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:18.916000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3536653364353662333462346634386439663931323232663330306430 Jan 21 01:02:18.959176 containerd[1962]: time="2026-01-21T01:02:18.959122711Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-jprlx,Uid:bc1ac585-7f06-4a00-82b5-d4dbcb147458,Namespace:calico-system,Attempt:0,}" Jan 21 01:02:18.997649 containerd[1962]: time="2026-01-21T01:02:18.997591599Z" level=info msg="connecting to shim bcbf799d71f8f39e30cc1ca4727f4b7f5494de1d776bdaa069761a4214f98808" address="unix:///run/containerd/s/2a5b9f64761f6b0d9a1d54533a1dafe90fcd8f25fdae720aa724526661304293" namespace=k8s.io protocol=ttrpc version=3 Jan 21 01:02:19.003476 kubelet[3468]: E0121 01:02:19.003439 3468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 01:02:19.003627 kubelet[3468]: W0121 01:02:19.003571 3468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 01:02:19.003760 kubelet[3468]: E0121 01:02:19.003742 3468 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 01:02:19.004488 kubelet[3468]: E0121 01:02:19.004404 3468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 01:02:19.004488 kubelet[3468]: W0121 01:02:19.004418 3468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 01:02:19.004488 kubelet[3468]: E0121 01:02:19.004438 3468 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 21 01:02:19.005031 kubelet[3468]: E0121 01:02:19.005011 3468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 01:02:19.005266 kubelet[3468]: W0121 01:02:19.005132 3468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 01:02:19.005266 kubelet[3468]: E0121 01:02:19.005168 3468 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 01:02:19.006403 kubelet[3468]: E0121 01:02:19.005761 3468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 01:02:19.006403 kubelet[3468]: W0121 01:02:19.005792 3468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 01:02:19.006403 kubelet[3468]: E0121 01:02:19.005808 3468 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 01:02:19.006403 kubelet[3468]: E0121 01:02:19.006270 3468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 01:02:19.006403 kubelet[3468]: W0121 01:02:19.006282 3468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 01:02:19.006634 kubelet[3468]: E0121 01:02:19.006421 3468 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 01:02:19.008677 kubelet[3468]: E0121 01:02:19.006809 3468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 01:02:19.008677 kubelet[3468]: W0121 01:02:19.006825 3468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 01:02:19.008677 kubelet[3468]: E0121 01:02:19.006839 3468 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 01:02:19.008677 kubelet[3468]: E0121 01:02:19.007998 3468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 01:02:19.008677 kubelet[3468]: W0121 01:02:19.008022 3468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 01:02:19.008677 kubelet[3468]: E0121 01:02:19.008037 3468 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 21 01:02:19.008677 kubelet[3468]: E0121 01:02:19.008415 3468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 01:02:19.008677 kubelet[3468]: W0121 01:02:19.008498 3468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 01:02:19.008677 kubelet[3468]: E0121 01:02:19.008517 3468 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 01:02:19.009096 kubelet[3468]: E0121 01:02:19.008896 3468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 01:02:19.009096 kubelet[3468]: W0121 01:02:19.008909 3468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 01:02:19.009096 kubelet[3468]: E0121 01:02:19.008924 3468 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 01:02:19.010463 kubelet[3468]: E0121 01:02:19.009409 3468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 01:02:19.010463 kubelet[3468]: W0121 01:02:19.009423 3468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 01:02:19.010463 kubelet[3468]: E0121 01:02:19.009437 3468 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 01:02:19.010463 kubelet[3468]: E0121 01:02:19.010018 3468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 01:02:19.010463 kubelet[3468]: W0121 01:02:19.010052 3468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 01:02:19.010463 kubelet[3468]: E0121 01:02:19.010069 3468 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 01:02:19.010735 kubelet[3468]: E0121 01:02:19.010668 3468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 01:02:19.010735 kubelet[3468]: W0121 01:02:19.010700 3468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 01:02:19.010735 kubelet[3468]: E0121 01:02:19.010717 3468 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 21 01:02:19.011343 kubelet[3468]: E0121 01:02:19.011324 3468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 01:02:19.011343 kubelet[3468]: W0121 01:02:19.011342 3468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 01:02:19.011461 kubelet[3468]: E0121 01:02:19.011357 3468 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 01:02:19.013278 kubelet[3468]: E0121 01:02:19.011936 3468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 01:02:19.013278 kubelet[3468]: W0121 01:02:19.012054 3468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 01:02:19.013278 kubelet[3468]: E0121 01:02:19.012479 3468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 01:02:19.013278 kubelet[3468]: W0121 01:02:19.012491 3468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 01:02:19.013278 kubelet[3468]: E0121 01:02:19.012505 3468 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 01:02:19.013278 kubelet[3468]: E0121 01:02:19.013188 3468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 01:02:19.013278 kubelet[3468]: W0121 01:02:19.013199 3468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 01:02:19.013278 kubelet[3468]: E0121 01:02:19.013232 3468 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 01:02:19.013699 kubelet[3468]: E0121 01:02:19.013417 3468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 01:02:19.013699 kubelet[3468]: W0121 01:02:19.013427 3468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 01:02:19.013699 kubelet[3468]: E0121 01:02:19.013439 3468 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 21 01:02:19.013824 kubelet[3468]: E0121 01:02:19.013745 3468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 01:02:19.013824 kubelet[3468]: W0121 01:02:19.013756 3468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 01:02:19.013824 kubelet[3468]: E0121 01:02:19.013770 3468 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 01:02:19.014683 kubelet[3468]: E0121 01:02:19.014127 3468 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 01:02:19.014683 kubelet[3468]: E0121 01:02:19.014279 3468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 01:02:19.014683 kubelet[3468]: W0121 01:02:19.014370 3468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 01:02:19.014683 kubelet[3468]: E0121 01:02:19.014393 3468 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 01:02:19.014888 kubelet[3468]: E0121 01:02:19.014833 3468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 01:02:19.014888 kubelet[3468]: W0121 01:02:19.014845 3468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 01:02:19.014888 kubelet[3468]: E0121 01:02:19.014864 3468 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 01:02:19.015562 kubelet[3468]: E0121 01:02:19.015284 3468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 01:02:19.015562 kubelet[3468]: W0121 01:02:19.015300 3468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 01:02:19.015562 kubelet[3468]: E0121 01:02:19.015326 3468 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 21 01:02:19.016225 kubelet[3468]: E0121 01:02:19.016157 3468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 01:02:19.016225 kubelet[3468]: W0121 01:02:19.016173 3468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 01:02:19.016351 kubelet[3468]: E0121 01:02:19.016290 3468 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 01:02:19.017404 kubelet[3468]: E0121 01:02:19.017349 3468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 01:02:19.018064 kubelet[3468]: W0121 01:02:19.017363 3468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 01:02:19.018064 kubelet[3468]: E0121 01:02:19.017759 3468 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 01:02:19.018387 kubelet[3468]: E0121 01:02:19.018369 3468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 01:02:19.018387 kubelet[3468]: W0121 01:02:19.018387 3468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 01:02:19.018504 kubelet[3468]: E0121 01:02:19.018423 3468 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 01:02:19.019431 kubelet[3468]: E0121 01:02:19.019409 3468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 01:02:19.019431 kubelet[3468]: W0121 01:02:19.019425 3468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 01:02:19.019554 kubelet[3468]: E0121 01:02:19.019440 3468 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 21 01:02:19.045930 containerd[1962]: time="2026-01-21T01:02:19.045859035Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6684dd4789-g9kfn,Uid:7be49ad9-f941-4ce6-8938-2235c5cf3938,Namespace:calico-system,Attempt:0,} returns sandbox id \"56e3d56b34b4f48d9f91222f300d0d1e2bb07c354be1440458ddad37d2ea470c\"" Jan 21 01:02:19.053230 containerd[1962]: time="2026-01-21T01:02:19.052959852Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Jan 21 01:02:19.055447 kubelet[3468]: E0121 01:02:19.055380 3468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 01:02:19.055447 kubelet[3468]: W0121 01:02:19.055401 3468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 01:02:19.055749 kubelet[3468]: E0121 01:02:19.055687 3468 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 01:02:19.072763 systemd[1]: Started cri-containerd-bcbf799d71f8f39e30cc1ca4727f4b7f5494de1d776bdaa069761a4214f98808.scope - libcontainer container bcbf799d71f8f39e30cc1ca4727f4b7f5494de1d776bdaa069761a4214f98808. Jan 21 01:02:19.085000 audit: BPF prog-id=163 op=LOAD Jan 21 01:02:19.089000 audit: BPF prog-id=164 op=LOAD Jan 21 01:02:19.089000 audit[4015]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=3977 pid=4015 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:19.089000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6263626637393964373166386633396533306363316361343732376634 Jan 21 01:02:19.089000 audit: BPF prog-id=164 op=UNLOAD Jan 21 01:02:19.089000 audit[4015]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3977 pid=4015 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:19.089000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6263626637393964373166386633396533306363316361343732376634 Jan 21 01:02:19.091000 audit: BPF prog-id=165 op=LOAD Jan 21 01:02:19.091000 audit[4015]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3977 pid=4015 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:19.091000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6263626637393964373166386633396533306363316361343732376634 Jan 21 01:02:19.091000 audit: BPF 
prog-id=166 op=LOAD Jan 21 01:02:19.091000 audit[4015]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=3977 pid=4015 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:19.091000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6263626637393964373166386633396533306363316361343732376634 Jan 21 01:02:19.091000 audit: BPF prog-id=166 op=UNLOAD Jan 21 01:02:19.091000 audit[4015]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3977 pid=4015 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:19.091000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6263626637393964373166386633396533306363316361343732376634 Jan 21 01:02:19.091000 audit: BPF prog-id=165 op=UNLOAD Jan 21 01:02:19.091000 audit[4015]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3977 pid=4015 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:19.091000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6263626637393964373166386633396533306363316361343732376634 Jan 21 01:02:19.091000 audit: BPF prog-id=167 op=LOAD Jan 21 01:02:19.091000 audit[4015]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=3977 pid=4015 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:19.091000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6263626637393964373166386633396533306363316361343732376634 Jan 21 01:02:19.119199 containerd[1962]: time="2026-01-21T01:02:19.119130080Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-jprlx,Uid:bc1ac585-7f06-4a00-82b5-d4dbcb147458,Namespace:calico-system,Attempt:0,} returns sandbox id \"bcbf799d71f8f39e30cc1ca4727f4b7f5494de1d776bdaa069761a4214f98808\"" Jan 21 01:02:19.349000 audit[4045]: NETFILTER_CFG table=filter:117 family=2 entries=22 op=nft_register_rule pid=4045 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 01:02:19.349000 audit[4045]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffe2e709e00 a2=0 a3=7ffe2e709dec items=0 ppid=3578 pid=4045 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 
01:02:19.349000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 01:02:19.356000 audit[4045]: NETFILTER_CFG table=nat:118 family=2 entries=12 op=nft_register_rule pid=4045 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 01:02:19.356000 audit[4045]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe2e709e00 a2=0 a3=0 items=0 ppid=3578 pid=4045 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:19.356000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 01:02:20.132234 kubelet[3468]: E0121 01:02:20.131266 3468 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9xsfz" podUID="76d94f9b-071e-45b7-9881-314d22adc37f" Jan 21 01:02:20.401875 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3520980189.mount: Deactivated successfully. Jan 21 01:02:21.627813 containerd[1962]: time="2026-01-21T01:02:21.627741570Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 01:02:21.629051 containerd[1962]: time="2026-01-21T01:02:21.629004405Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33735893" Jan 21 01:02:21.630309 containerd[1962]: time="2026-01-21T01:02:21.630254591Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 01:02:21.632705 containerd[1962]: time="2026-01-21T01:02:21.632666090Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 01:02:21.633339 containerd[1962]: time="2026-01-21T01:02:21.633266071Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 2.58026473s" Jan 21 01:02:21.633339 containerd[1962]: time="2026-01-21T01:02:21.633294510Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Jan 21 01:02:21.634629 containerd[1962]: time="2026-01-21T01:02:21.634595557Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Jan 21 01:02:21.649256 containerd[1962]: time="2026-01-21T01:02:21.648776157Z" level=info msg="CreateContainer within sandbox \"56e3d56b34b4f48d9f91222f300d0d1e2bb07c354be1440458ddad37d2ea470c\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 21 01:02:21.656851 containerd[1962]: time="2026-01-21T01:02:21.656802787Z" level=info msg="Container 
f3c579d8ddc327d982b4119b60d61443727efd99bc707f9af06f4d2d6040f747: CDI devices from CRI Config.CDIDevices: []" Jan 21 01:02:21.668983 containerd[1962]: time="2026-01-21T01:02:21.668939106Z" level=info msg="CreateContainer within sandbox \"56e3d56b34b4f48d9f91222f300d0d1e2bb07c354be1440458ddad37d2ea470c\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"f3c579d8ddc327d982b4119b60d61443727efd99bc707f9af06f4d2d6040f747\"" Jan 21 01:02:21.670596 containerd[1962]: time="2026-01-21T01:02:21.669667168Z" level=info msg="StartContainer for \"f3c579d8ddc327d982b4119b60d61443727efd99bc707f9af06f4d2d6040f747\"" Jan 21 01:02:21.670949 containerd[1962]: time="2026-01-21T01:02:21.670917522Z" level=info msg="connecting to shim f3c579d8ddc327d982b4119b60d61443727efd99bc707f9af06f4d2d6040f747" address="unix:///run/containerd/s/ae2d86cf33092bed3b611746faf64d071fca780c295a0784ac36ffac7dcfbf45" protocol=ttrpc version=3 Jan 21 01:02:21.719571 systemd[1]: Started cri-containerd-f3c579d8ddc327d982b4119b60d61443727efd99bc707f9af06f4d2d6040f747.scope - libcontainer container f3c579d8ddc327d982b4119b60d61443727efd99bc707f9af06f4d2d6040f747. Jan 21 01:02:21.740000 audit: BPF prog-id=168 op=LOAD Jan 21 01:02:21.740000 audit: BPF prog-id=169 op=LOAD Jan 21 01:02:21.740000 audit[4056]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=3883 pid=4056 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:21.740000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6633633537396438646463333237643938326234313139623630643631 Jan 21 01:02:21.741000 audit: BPF prog-id=169 op=UNLOAD Jan 21 01:02:21.741000 audit[4056]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3883 pid=4056 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:21.741000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6633633537396438646463333237643938326234313139623630643631 Jan 21 01:02:21.741000 audit: BPF prog-id=170 op=LOAD Jan 21 01:02:21.741000 audit[4056]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3883 pid=4056 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:21.741000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6633633537396438646463333237643938326234313139623630643631 Jan 21 01:02:21.741000 audit: BPF prog-id=171 op=LOAD Jan 21 01:02:21.741000 audit[4056]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3883 pid=4056 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:21.741000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6633633537396438646463333237643938326234313139623630643631 Jan 21 01:02:21.741000 audit: BPF prog-id=171 op=UNLOAD Jan 21 01:02:21.741000 audit[4056]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3883 pid=4056 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:21.741000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6633633537396438646463333237643938326234313139623630643631 Jan 21 01:02:21.741000 audit: BPF prog-id=170 op=UNLOAD Jan 21 01:02:21.741000 audit[4056]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3883 pid=4056 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:21.741000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6633633537396438646463333237643938326234313139623630643631 Jan 21 01:02:21.742000 audit: BPF prog-id=172 op=LOAD Jan 21 01:02:21.742000 audit[4056]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3883 pid=4056 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:21.742000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6633633537396438646463333237643938326234313139623630643631 Jan 21 01:02:21.795870 containerd[1962]: time="2026-01-21T01:02:21.795829023Z" level=info msg="StartContainer for \"f3c579d8ddc327d982b4119b60d61443727efd99bc707f9af06f4d2d6040f747\" returns successfully" Jan 21 01:02:22.131559 kubelet[3468]: E0121 01:02:22.131515 3468 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9xsfz" podUID="76d94f9b-071e-45b7-9881-314d22adc37f" Jan 21 01:02:22.322275 kubelet[3468]: E0121 01:02:22.322010 3468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 01:02:22.322275 kubelet[3468]: W0121 01:02:22.322056 3468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 01:02:22.322275 kubelet[3468]: E0121 01:02:22.322076 3468 plugins.go:695] "Error dynamically 
probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 01:02:22.322924 kubelet[3468]: E0121 01:02:22.322901 3468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 01:02:22.322924 kubelet[3468]: W0121 01:02:22.322920 3468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 01:02:22.323138 kubelet[3468]: E0121 01:02:22.322935 3468 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 01:02:22.323406 kubelet[3468]: E0121 01:02:22.323390 3468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 01:02:22.323406 kubelet[3468]: W0121 01:02:22.323405 3468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 01:02:22.323498 kubelet[3468]: E0121 01:02:22.323418 3468 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 01:02:22.326748 kubelet[3468]: E0121 01:02:22.326724 3468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 01:02:22.326748 kubelet[3468]: W0121 01:02:22.326744 3468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 01:02:22.326896 kubelet[3468]: E0121 01:02:22.326761 3468 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 01:02:22.328571 kubelet[3468]: E0121 01:02:22.328231 3468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 01:02:22.328571 kubelet[3468]: W0121 01:02:22.328247 3468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 01:02:22.328571 kubelet[3468]: E0121 01:02:22.328263 3468 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 01:02:22.328571 kubelet[3468]: E0121 01:02:22.328527 3468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 01:02:22.328571 kubelet[3468]: W0121 01:02:22.328537 3468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 01:02:22.328571 kubelet[3468]: E0121 01:02:22.328548 3468 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 21 01:02:22.358699 kubelet[3468]: E0121 01:02:22.328973 3468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 01:02:22.358699 kubelet[3468]: W0121 01:02:22.328984 3468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 01:02:22.358699 kubelet[3468]: E0121 01:02:22.328995 3468 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 01:02:22.358699 kubelet[3468]: E0121 01:02:22.329269 3468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 01:02:22.358699 kubelet[3468]: W0121 01:02:22.329277 3468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 01:02:22.358699 kubelet[3468]: E0121 01:02:22.329286 3468 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 01:02:22.358699 kubelet[3468]: E0121 01:02:22.329496 3468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 01:02:22.358699 kubelet[3468]: W0121 01:02:22.329506 3468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 01:02:22.358699 kubelet[3468]: E0121 01:02:22.329515 3468 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 01:02:22.358699 kubelet[3468]: E0121 01:02:22.329764 3468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 01:02:22.359579 kubelet[3468]: W0121 01:02:22.329778 3468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 01:02:22.359579 kubelet[3468]: E0121 01:02:22.329791 3468 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 01:02:22.359579 kubelet[3468]: E0121 01:02:22.330025 3468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 01:02:22.359579 kubelet[3468]: W0121 01:02:22.330033 3468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 01:02:22.359579 kubelet[3468]: E0121 01:02:22.330050 3468 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 21 01:02:22.359579 kubelet[3468]: E0121 01:02:22.330275 3468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 01:02:22.359579 kubelet[3468]: W0121 01:02:22.330284 3468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 01:02:22.359579 kubelet[3468]: E0121 01:02:22.330293 3468 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 01:02:22.359579 kubelet[3468]: E0121 01:02:22.330488 3468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 01:02:22.359579 kubelet[3468]: W0121 01:02:22.330510 3468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 01:02:22.359940 kubelet[3468]: E0121 01:02:22.330519 3468 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 01:02:22.359940 kubelet[3468]: E0121 01:02:22.330702 3468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 01:02:22.359940 kubelet[3468]: W0121 01:02:22.330710 3468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 01:02:22.359940 kubelet[3468]: E0121 01:02:22.330718 3468 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 01:02:22.359940 kubelet[3468]: E0121 01:02:22.330892 3468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 01:02:22.359940 kubelet[3468]: W0121 01:02:22.330900 3468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 01:02:22.359940 kubelet[3468]: E0121 01:02:22.330915 3468 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 21 01:02:22.359940 kubelet[3468]: I0121 01:02:22.339137 3468 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6684dd4789-g9kfn" podStartSLOduration=1.7561212670000002 podStartE2EDuration="4.339108853s" podCreationTimestamp="2026-01-21 01:02:18 +0000 UTC" firstStartedPulling="2026-01-21 01:02:19.051139281 +0000 UTC m=+53.106543504" lastFinishedPulling="2026-01-21 01:02:21.634126874 +0000 UTC m=+55.689531090" observedRunningTime="2026-01-21 01:02:22.324088653 +0000 UTC m=+56.379492901" watchObservedRunningTime="2026-01-21 01:02:22.339108853 +0000 UTC m=+56.394513077" Jan 21 01:02:22.360794 kubelet[3468]: E0121 01:02:22.342158 3468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 01:02:22.360794 kubelet[3468]: W0121 01:02:22.342177 3468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 01:02:22.360794 kubelet[3468]: E0121 01:02:22.342289 3468 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 01:02:22.360794 kubelet[3468]: E0121 01:02:22.354490 3468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 01:02:22.360794 kubelet[3468]: W0121 01:02:22.354560 3468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 01:02:22.360794 kubelet[3468]: E0121 01:02:22.354597 3468 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 01:02:22.360794 kubelet[3468]: E0121 01:02:22.355021 3468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 01:02:22.360794 kubelet[3468]: W0121 01:02:22.355035 3468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 01:02:22.360794 kubelet[3468]: E0121 01:02:22.355054 3468 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 01:02:22.360794 kubelet[3468]: E0121 01:02:22.355354 3468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 01:02:22.361496 kubelet[3468]: W0121 01:02:22.355365 3468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 01:02:22.361496 kubelet[3468]: E0121 01:02:22.355378 3468 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 21 01:02:22.361496 kubelet[3468]: E0121 01:02:22.355637 3468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 01:02:22.361496 kubelet[3468]: W0121 01:02:22.355648 3468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 01:02:22.361496 kubelet[3468]: E0121 01:02:22.355661 3468 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 01:02:22.361496 kubelet[3468]: E0121 01:02:22.355953 3468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 01:02:22.361496 kubelet[3468]: W0121 01:02:22.355964 3468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 01:02:22.361496 kubelet[3468]: E0121 01:02:22.355978 3468 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 01:02:22.361496 kubelet[3468]: E0121 01:02:22.356560 3468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 01:02:22.361496 kubelet[3468]: W0121 01:02:22.356573 3468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 01:02:22.362000 kubelet[3468]: E0121 01:02:22.356603 3468 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 01:02:22.362000 kubelet[3468]: E0121 01:02:22.356875 3468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 01:02:22.362000 kubelet[3468]: W0121 01:02:22.356888 3468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 01:02:22.362000 kubelet[3468]: E0121 01:02:22.356902 3468 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 01:02:22.362000 kubelet[3468]: E0121 01:02:22.357252 3468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 01:02:22.362000 kubelet[3468]: W0121 01:02:22.357277 3468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 01:02:22.362000 kubelet[3468]: E0121 01:02:22.357290 3468 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 21 01:02:22.362000 kubelet[3468]: E0121 01:02:22.357520 3468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 01:02:22.362000 kubelet[3468]: W0121 01:02:22.357529 3468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 01:02:22.362000 kubelet[3468]: E0121 01:02:22.357542 3468 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 01:02:22.363304 kubelet[3468]: E0121 01:02:22.357856 3468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 01:02:22.363304 kubelet[3468]: W0121 01:02:22.357866 3468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 01:02:22.363304 kubelet[3468]: E0121 01:02:22.357878 3468 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 01:02:22.363304 kubelet[3468]: E0121 01:02:22.358182 3468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 01:02:22.363304 kubelet[3468]: W0121 01:02:22.358193 3468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 01:02:22.363304 kubelet[3468]: E0121 01:02:22.358206 3468 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 01:02:22.363304 kubelet[3468]: E0121 01:02:22.359283 3468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 01:02:22.363304 kubelet[3468]: W0121 01:02:22.359296 3468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 01:02:22.363304 kubelet[3468]: E0121 01:02:22.359317 3468 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 01:02:22.363304 kubelet[3468]: E0121 01:02:22.360392 3468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 01:02:22.363678 kubelet[3468]: W0121 01:02:22.360405 3468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 01:02:22.363678 kubelet[3468]: E0121 01:02:22.360432 3468 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 21 01:02:22.363678 kubelet[3468]: E0121 01:02:22.360670 3468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 01:02:22.363678 kubelet[3468]: W0121 01:02:22.360682 3468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 01:02:22.363678 kubelet[3468]: E0121 01:02:22.360705 3468 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 01:02:22.363678 kubelet[3468]: E0121 01:02:22.362325 3468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 01:02:22.363678 kubelet[3468]: W0121 01:02:22.362343 3468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 01:02:22.363678 kubelet[3468]: E0121 01:02:22.362369 3468 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 01:02:22.363678 kubelet[3468]: E0121 01:02:22.362733 3468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 01:02:22.363678 kubelet[3468]: W0121 01:02:22.362745 3468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 01:02:22.364074 kubelet[3468]: E0121 01:02:22.362773 3468 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 01:02:22.364639 kubelet[3468]: E0121 01:02:22.364577 3468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 01:02:22.364639 kubelet[3468]: W0121 01:02:22.364592 3468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 01:02:22.364639 kubelet[3468]: E0121 01:02:22.364606 3468 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 21 01:02:22.380000 audit[4130]: NETFILTER_CFG table=filter:119 family=2 entries=21 op=nft_register_rule pid=4130 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 01:02:22.382745 kernel: kauditd_printk_skb: 74 callbacks suppressed Jan 21 01:02:22.382878 kernel: audit: type=1325 audit(1768957342.380:578): table=filter:119 family=2 entries=21 op=nft_register_rule pid=4130 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 01:02:22.380000 audit[4130]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffd58201fb0 a2=0 a3=7ffd58201f9c items=0 ppid=3578 pid=4130 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:22.395249 kernel: audit: type=1300 audit(1768957342.380:578): arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffd58201fb0 a2=0 a3=7ffd58201f9c items=0 ppid=3578 pid=4130 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:22.395479 kernel: audit: type=1327 audit(1768957342.380:578): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 01:02:22.380000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 01:02:22.388000 audit[4130]: NETFILTER_CFG table=nat:120 family=2 entries=19 op=nft_register_chain pid=4130 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 01:02:22.399614 kernel: audit: type=1325 audit(1768957342.388:579): table=nat:120 family=2 entries=19 op=nft_register_chain pid=4130 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 01:02:22.388000 audit[4130]: SYSCALL arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffd58201fb0 a2=0 a3=7ffd58201f9c items=0 ppid=3578 pid=4130 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:22.403048 kernel: audit: type=1300 audit(1768957342.388:579): arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffd58201fb0 a2=0 a3=7ffd58201f9c items=0 ppid=3578 pid=4130 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:22.388000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 01:02:22.408270 kernel: audit: type=1327 audit(1768957342.388:579): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 01:02:22.934375 containerd[1962]: time="2026-01-21T01:02:22.934326428Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 01:02:22.935524 containerd[1962]: time="2026-01-21T01:02:22.935359231Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=0" Jan 21 01:02:22.936460 containerd[1962]: 
time="2026-01-21T01:02:22.936428801Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 01:02:22.940263 containerd[1962]: time="2026-01-21T01:02:22.938936879Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 01:02:22.940263 containerd[1962]: time="2026-01-21T01:02:22.939546296Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.304830077s" Jan 21 01:02:22.940263 containerd[1962]: time="2026-01-21T01:02:22.939574307Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Jan 21 01:02:22.944705 containerd[1962]: time="2026-01-21T01:02:22.942995216Z" level=info msg="CreateContainer within sandbox \"bcbf799d71f8f39e30cc1ca4727f4b7f5494de1d776bdaa069761a4214f98808\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 21 01:02:22.977391 containerd[1962]: time="2026-01-21T01:02:22.973601239Z" level=info msg="Container 7a846e6b878b7b0d39b788cc3bc430b6fb88018087221109c763904a299a3a72: CDI devices from CRI Config.CDIDevices: []" Jan 21 01:02:22.989482 containerd[1962]: time="2026-01-21T01:02:22.989438977Z" level=info msg="CreateContainer within sandbox \"bcbf799d71f8f39e30cc1ca4727f4b7f5494de1d776bdaa069761a4214f98808\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"7a846e6b878b7b0d39b788cc3bc430b6fb88018087221109c763904a299a3a72\"" Jan 21 01:02:22.990160 containerd[1962]: time="2026-01-21T01:02:22.990131591Z" level=info msg="StartContainer for \"7a846e6b878b7b0d39b788cc3bc430b6fb88018087221109c763904a299a3a72\"" Jan 21 01:02:22.991778 containerd[1962]: time="2026-01-21T01:02:22.991748022Z" level=info msg="connecting to shim 7a846e6b878b7b0d39b788cc3bc430b6fb88018087221109c763904a299a3a72" address="unix:///run/containerd/s/2a5b9f64761f6b0d9a1d54533a1dafe90fcd8f25fdae720aa724526661304293" protocol=ttrpc version=3 Jan 21 01:02:23.033035 systemd[1]: Started cri-containerd-7a846e6b878b7b0d39b788cc3bc430b6fb88018087221109c763904a299a3a72.scope - libcontainer container 7a846e6b878b7b0d39b788cc3bc430b6fb88018087221109c763904a299a3a72. 
Jan 21 01:02:23.090000 audit: BPF prog-id=173 op=LOAD Jan 21 01:02:23.098681 kernel: audit: type=1334 audit(1768957343.090:580): prog-id=173 op=LOAD Jan 21 01:02:23.098832 kernel: audit: type=1300 audit(1768957343.090:580): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00010c488 a2=98 a3=0 items=0 ppid=3977 pid=4135 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:23.090000 audit[4135]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00010c488 a2=98 a3=0 items=0 ppid=3977 pid=4135 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:23.090000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761383436653662383738623762306433396237383863633362633433 Jan 21 01:02:23.101327 kernel: audit: type=1327 audit(1768957343.090:580): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761383436653662383738623762306433396237383863633362633433 Jan 21 01:02:23.107717 kernel: audit: type=1334 audit(1768957343.090:581): prog-id=174 op=LOAD Jan 21 01:02:23.090000 audit: BPF prog-id=174 op=LOAD Jan 21 01:02:23.090000 audit[4135]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00010c218 a2=98 a3=0 items=0 ppid=3977 pid=4135 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:23.090000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761383436653662383738623762306433396237383863633362633433 Jan 21 01:02:23.090000 audit: BPF prog-id=174 op=UNLOAD Jan 21 01:02:23.090000 audit[4135]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3977 pid=4135 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:23.090000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761383436653662383738623762306433396237383863633362633433 Jan 21 01:02:23.090000 audit: BPF prog-id=173 op=UNLOAD Jan 21 01:02:23.090000 audit[4135]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3977 pid=4135 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:23.090000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761383436653662383738623762306433396237383863633362633433 Jan 21 01:02:23.090000 audit: BPF prog-id=175 op=LOAD Jan 21 01:02:23.090000 audit[4135]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00010c6e8 a2=98 a3=0 items=0 ppid=3977 pid=4135 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:23.090000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761383436653662383738623762306433396237383863633362633433 Jan 21 01:02:23.127105 containerd[1962]: time="2026-01-21T01:02:23.127027376Z" level=info msg="StartContainer for \"7a846e6b878b7b0d39b788cc3bc430b6fb88018087221109c763904a299a3a72\" returns successfully" Jan 21 01:02:23.136403 systemd[1]: cri-containerd-7a846e6b878b7b0d39b788cc3bc430b6fb88018087221109c763904a299a3a72.scope: Deactivated successfully. Jan 21 01:02:23.137000 audit: BPF prog-id=175 op=UNLOAD Jan 21 01:02:23.155922 containerd[1962]: time="2026-01-21T01:02:23.155850680Z" level=info msg="received container exit event container_id:\"7a846e6b878b7b0d39b788cc3bc430b6fb88018087221109c763904a299a3a72\" id:\"7a846e6b878b7b0d39b788cc3bc430b6fb88018087221109c763904a299a3a72\" pid:4148 exited_at:{seconds:1768957343 nanos:138392237}" Jan 21 01:02:23.185876 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7a846e6b878b7b0d39b788cc3bc430b6fb88018087221109c763904a299a3a72-rootfs.mount: Deactivated successfully. 
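The audit PROCTITLE fields in the records above and below are the audited process's argv, hex-encoded with NUL separators. A small standalone decoder (illustrative, unrelated to the tools being logged) makes them readable; for example, the iptables-restor proctitle recorded earlier decodes to "iptables-restore -w 5 -W 100000 --noflush --counters", and the runc ones decode to "runc --root /run/containerd/runc/k8s.io --log ..." followed by the truncated task path for the container ID shown.

package main

import (
	"encoding/hex"
	"fmt"
	"strings"
)

// decodeProctitle turns an audit PROCTITLE hex string back into the original
// command line: the kernel records argv as raw bytes separated by NUL.
func decodeProctitle(h string) (string, error) {
	raw, err := hex.DecodeString(h)
	if err != nil {
		return "", err
	}
	return strings.ReplaceAll(string(raw), "\x00", " "), nil
}

func main() {
	s, err := decodeProctitle("69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273")
	if err != nil {
		panic(err)
	}
	fmt.Println(s) // iptables-restore -w 5 -W 100000 --noflush --counters
}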
Jan 21 01:02:24.131843 kubelet[3468]: E0121 01:02:24.131292 3468 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9xsfz" podUID="76d94f9b-071e-45b7-9881-314d22adc37f" Jan 21 01:02:24.316934 containerd[1962]: time="2026-01-21T01:02:24.316814135Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Jan 21 01:02:26.132958 kubelet[3468]: E0121 01:02:26.131708 3468 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9xsfz" podUID="76d94f9b-071e-45b7-9881-314d22adc37f" Jan 21 01:02:28.131549 kubelet[3468]: E0121 01:02:28.130985 3468 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9xsfz" podUID="76d94f9b-071e-45b7-9881-314d22adc37f" Jan 21 01:02:30.132240 kubelet[3468]: E0121 01:02:30.131703 3468 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9xsfz" podUID="76d94f9b-071e-45b7-9881-314d22adc37f" Jan 21 01:02:31.644560 containerd[1962]: time="2026-01-21T01:02:31.644508407Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 01:02:31.646938 containerd[1962]: time="2026-01-21T01:02:31.646827550Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70442291" Jan 21 01:02:31.649736 containerd[1962]: time="2026-01-21T01:02:31.648796659Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 01:02:31.651406 containerd[1962]: time="2026-01-21T01:02:31.651371001Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 01:02:31.652332 containerd[1962]: time="2026-01-21T01:02:31.652302897Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 7.335270762s" Jan 21 01:02:31.652415 containerd[1962]: time="2026-01-21T01:02:31.652335791Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Jan 21 01:02:31.665836 containerd[1962]: time="2026-01-21T01:02:31.665786465Z" level=info msg="CreateContainer within sandbox \"bcbf799d71f8f39e30cc1ca4727f4b7f5494de1d776bdaa069761a4214f98808\" for container 
&ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 21 01:02:31.685659 containerd[1962]: time="2026-01-21T01:02:31.685603221Z" level=info msg="Container 01629a515c14961d3bf2d421c17db623838c48c3dba196edfef933bc2251c2a7: CDI devices from CRI Config.CDIDevices: []" Jan 21 01:02:31.703688 containerd[1962]: time="2026-01-21T01:02:31.703646965Z" level=info msg="CreateContainer within sandbox \"bcbf799d71f8f39e30cc1ca4727f4b7f5494de1d776bdaa069761a4214f98808\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"01629a515c14961d3bf2d421c17db623838c48c3dba196edfef933bc2251c2a7\"" Jan 21 01:02:31.704668 containerd[1962]: time="2026-01-21T01:02:31.704635991Z" level=info msg="StartContainer for \"01629a515c14961d3bf2d421c17db623838c48c3dba196edfef933bc2251c2a7\"" Jan 21 01:02:31.707415 containerd[1962]: time="2026-01-21T01:02:31.707371234Z" level=info msg="connecting to shim 01629a515c14961d3bf2d421c17db623838c48c3dba196edfef933bc2251c2a7" address="unix:///run/containerd/s/2a5b9f64761f6b0d9a1d54533a1dafe90fcd8f25fdae720aa724526661304293" protocol=ttrpc version=3 Jan 21 01:02:31.739930 systemd[1]: Started cri-containerd-01629a515c14961d3bf2d421c17db623838c48c3dba196edfef933bc2251c2a7.scope - libcontainer container 01629a515c14961d3bf2d421c17db623838c48c3dba196edfef933bc2251c2a7. Jan 21 01:02:31.802000 audit: BPF prog-id=176 op=LOAD Jan 21 01:02:31.805911 kernel: kauditd_printk_skb: 12 callbacks suppressed Jan 21 01:02:31.832722 kernel: audit: type=1334 audit(1768957351.802:586): prog-id=176 op=LOAD Jan 21 01:02:31.832777 kernel: audit: type=1300 audit(1768957351.802:586): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001b0488 a2=98 a3=0 items=0 ppid=3977 pid=4194 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:31.832801 kernel: audit: type=1327 audit(1768957351.802:586): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3031363239613531356331343936316433626632643432316331376462 Jan 21 01:02:31.832826 kernel: audit: type=1334 audit(1768957351.805:587): prog-id=177 op=LOAD Jan 21 01:02:31.832845 kernel: audit: type=1300 audit(1768957351.805:587): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001b0218 a2=98 a3=0 items=0 ppid=3977 pid=4194 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:31.832863 kernel: audit: type=1327 audit(1768957351.805:587): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3031363239613531356331343936316433626632643432316331376462 Jan 21 01:02:31.832883 kernel: audit: type=1334 audit(1768957351.805:588): prog-id=177 op=UNLOAD Jan 21 01:02:31.833376 kernel: audit: type=1300 audit(1768957351.805:588): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3977 pid=4194 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:31.802000 audit[4194]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 
a0=5 a1=c0001b0488 a2=98 a3=0 items=0 ppid=3977 pid=4194 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:31.802000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3031363239613531356331343936316433626632643432316331376462 Jan 21 01:02:31.805000 audit: BPF prog-id=177 op=LOAD Jan 21 01:02:31.805000 audit[4194]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001b0218 a2=98 a3=0 items=0 ppid=3977 pid=4194 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:31.805000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3031363239613531356331343936316433626632643432316331376462 Jan 21 01:02:31.805000 audit: BPF prog-id=177 op=UNLOAD Jan 21 01:02:31.805000 audit[4194]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3977 pid=4194 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:31.805000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3031363239613531356331343936316433626632643432316331376462 Jan 21 01:02:31.836681 kernel: audit: type=1327 audit(1768957351.805:588): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3031363239613531356331343936316433626632643432316331376462 Jan 21 01:02:31.805000 audit: BPF prog-id=176 op=UNLOAD Jan 21 01:02:31.842235 kernel: audit: type=1334 audit(1768957351.805:589): prog-id=176 op=UNLOAD Jan 21 01:02:31.805000 audit[4194]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3977 pid=4194 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:31.805000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3031363239613531356331343936316433626632643432316331376462 Jan 21 01:02:31.805000 audit: BPF prog-id=178 op=LOAD Jan 21 01:02:31.805000 audit[4194]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001b06e8 a2=98 a3=0 items=0 ppid=3977 pid=4194 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:31.805000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3031363239613531356331343936316433626632643432316331376462 Jan 21 01:02:31.850442 containerd[1962]: time="2026-01-21T01:02:31.850401410Z" level=info msg="StartContainer for \"01629a515c14961d3bf2d421c17db623838c48c3dba196edfef933bc2251c2a7\" returns successfully" Jan 21 01:02:32.131349 kubelet[3468]: E0121 01:02:32.130998 3468 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9xsfz" podUID="76d94f9b-071e-45b7-9881-314d22adc37f" Jan 21 01:02:33.433737 systemd[1]: cri-containerd-01629a515c14961d3bf2d421c17db623838c48c3dba196edfef933bc2251c2a7.scope: Deactivated successfully. Jan 21 01:02:33.434429 systemd[1]: cri-containerd-01629a515c14961d3bf2d421c17db623838c48c3dba196edfef933bc2251c2a7.scope: Consumed 573ms CPU time, 165.8M memory peak, 5.5M read from disk, 171.3M written to disk. Jan 21 01:02:33.436000 audit: BPF prog-id=178 op=UNLOAD Jan 21 01:02:33.468296 containerd[1962]: time="2026-01-21T01:02:33.439115813Z" level=info msg="received container exit event container_id:\"01629a515c14961d3bf2d421c17db623838c48c3dba196edfef933bc2251c2a7\" id:\"01629a515c14961d3bf2d421c17db623838c48c3dba196edfef933bc2251c2a7\" pid:4207 exited_at:{seconds:1768957353 nanos:438183482}" Jan 21 01:02:33.505020 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-01629a515c14961d3bf2d421c17db623838c48c3dba196edfef933bc2251c2a7-rootfs.mount: Deactivated successfully. Jan 21 01:02:33.510388 kubelet[3468]: I0121 01:02:33.507401 3468 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 21 01:02:33.571746 systemd[1]: Created slice kubepods-burstable-pod50c0e75e_1727_4fb1_ab60_7bbb5f8cfadf.slice - libcontainer container kubepods-burstable-pod50c0e75e_1727_4fb1_ab60_7bbb5f8cfadf.slice. Jan 21 01:02:33.590619 kubelet[3468]: W0121 01:02:33.589393 3468 reflector.go:569] object-"calico-system"/"whisker-backend-key-pair": failed to list *v1.Secret: secrets "whisker-backend-key-pair" is forbidden: User "system:node:ip-172-31-28-215" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'ip-172-31-28-215' and this object Jan 21 01:02:33.590619 kubelet[3468]: E0121 01:02:33.589435 3468 reflector.go:166] "Unhandled Error" err="object-\"calico-system\"/\"whisker-backend-key-pair\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"whisker-backend-key-pair\" is forbidden: User \"system:node:ip-172-31-28-215\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ip-172-31-28-215' and this object" logger="UnhandledError" Jan 21 01:02:33.594272 systemd[1]: Created slice kubepods-besteffort-pod5e4fb692_d42b_4062_b09a_d89e5a5a14cf.slice - libcontainer container kubepods-besteffort-pod5e4fb692_d42b_4062_b09a_d89e5a5a14cf.slice. 
Jan 21 01:02:33.595877 kubelet[3468]: W0121 01:02:33.595844 3468 reflector.go:569] object-"calico-system"/"whisker-ca-bundle": failed to list *v1.ConfigMap: configmaps "whisker-ca-bundle" is forbidden: User "system:node:ip-172-31-28-215" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ip-172-31-28-215' and this object Jan 21 01:02:33.596725 kubelet[3468]: E0121 01:02:33.596702 3468 reflector.go:166] "Unhandled Error" err="object-\"calico-system\"/\"whisker-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"whisker-ca-bundle\" is forbidden: User \"system:node:ip-172-31-28-215\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ip-172-31-28-215' and this object" logger="UnhandledError" Jan 21 01:02:33.607933 systemd[1]: Created slice kubepods-besteffort-pod9a9d063f_0537_429d_990e_09d458a586d7.slice - libcontainer container kubepods-besteffort-pod9a9d063f_0537_429d_990e_09d458a586d7.slice. Jan 21 01:02:33.623695 systemd[1]: Created slice kubepods-besteffort-podbec76da0_2399_4e6c_952b_9dc7ad88a302.slice - libcontainer container kubepods-besteffort-podbec76da0_2399_4e6c_952b_9dc7ad88a302.slice. Jan 21 01:02:33.633351 systemd[1]: Created slice kubepods-besteffort-pod35190377_3d75_48f7_8c27_98f24edff14f.slice - libcontainer container kubepods-besteffort-pod35190377_3d75_48f7_8c27_98f24edff14f.slice. Jan 21 01:02:33.655108 kubelet[3468]: I0121 01:02:33.642026 3468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/0aa53f42-4d9d-4bec-8857-7799d5876dce-calico-apiserver-certs\") pod \"calico-apiserver-6b9ccb8fff-hdchl\" (UID: \"0aa53f42-4d9d-4bec-8857-7799d5876dce\") " pod="calico-apiserver/calico-apiserver-6b9ccb8fff-hdchl" Jan 21 01:02:33.655108 kubelet[3468]: I0121 01:02:33.642163 3468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8dde797d-8111-4b2e-a7ed-5cc610d0e8e0-config-volume\") pod \"coredns-668d6bf9bc-m6nqt\" (UID: \"8dde797d-8111-4b2e-a7ed-5cc610d0e8e0\") " pod="kube-system/coredns-668d6bf9bc-m6nqt" Jan 21 01:02:33.655108 kubelet[3468]: I0121 01:02:33.642796 3468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5e4fb692-d42b-4062-b09a-d89e5a5a14cf-tigera-ca-bundle\") pod \"calico-kube-controllers-764d9fb657-cw7sh\" (UID: \"5e4fb692-d42b-4062-b09a-d89e5a5a14cf\") " pod="calico-system/calico-kube-controllers-764d9fb657-cw7sh" Jan 21 01:02:33.655108 kubelet[3468]: I0121 01:02:33.642848 3468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57wl7\" (UniqueName: \"kubernetes.io/projected/0aa53f42-4d9d-4bec-8857-7799d5876dce-kube-api-access-57wl7\") pod \"calico-apiserver-6b9ccb8fff-hdchl\" (UID: \"0aa53f42-4d9d-4bec-8857-7799d5876dce\") " pod="calico-apiserver/calico-apiserver-6b9ccb8fff-hdchl" Jan 21 01:02:33.655108 kubelet[3468]: I0121 01:02:33.642874 3468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xnb9g\" (UniqueName: \"kubernetes.io/projected/8dde797d-8111-4b2e-a7ed-5cc610d0e8e0-kube-api-access-xnb9g\") pod \"coredns-668d6bf9bc-m6nqt\" (UID: 
\"8dde797d-8111-4b2e-a7ed-5cc610d0e8e0\") " pod="kube-system/coredns-668d6bf9bc-m6nqt" Jan 21 01:02:33.646038 systemd[1]: Created slice kubepods-besteffort-pod0aa53f42_4d9d_4bec_8857_7799d5876dce.slice - libcontainer container kubepods-besteffort-pod0aa53f42_4d9d_4bec_8857_7799d5876dce.slice. Jan 21 01:02:33.655723 kubelet[3468]: I0121 01:02:33.642898 3468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c6nw7\" (UniqueName: \"kubernetes.io/projected/35190377-3d75-48f7-8c27-98f24edff14f-kube-api-access-c6nw7\") pod \"calico-apiserver-95fb8fc69-kstsq\" (UID: \"35190377-3d75-48f7-8c27-98f24edff14f\") " pod="calico-apiserver/calico-apiserver-95fb8fc69-kstsq" Jan 21 01:02:33.655723 kubelet[3468]: I0121 01:02:33.642921 3468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/50c0e75e-1727-4fb1-ab60-7bbb5f8cfadf-config-volume\") pod \"coredns-668d6bf9bc-hfnnv\" (UID: \"50c0e75e-1727-4fb1-ab60-7bbb5f8cfadf\") " pod="kube-system/coredns-668d6bf9bc-hfnnv" Jan 21 01:02:33.655723 kubelet[3468]: I0121 01:02:33.642948 3468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4hppl\" (UniqueName: \"kubernetes.io/projected/9a9d063f-0537-429d-990e-09d458a586d7-kube-api-access-4hppl\") pod \"whisker-9b8549b48-lhgk7\" (UID: \"9a9d063f-0537-429d-990e-09d458a586d7\") " pod="calico-system/whisker-9b8549b48-lhgk7" Jan 21 01:02:33.655723 kubelet[3468]: I0121 01:02:33.642974 3468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqxsf\" (UniqueName: \"kubernetes.io/projected/b0fd67ff-2b5d-470f-b242-daa718038f97-kube-api-access-pqxsf\") pod \"goldmane-666569f655-vhxxc\" (UID: \"b0fd67ff-2b5d-470f-b242-daa718038f97\") " pod="calico-system/goldmane-666569f655-vhxxc" Jan 21 01:02:33.655723 kubelet[3468]: I0121 01:02:33.643007 3468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/9a9d063f-0537-429d-990e-09d458a586d7-whisker-backend-key-pair\") pod \"whisker-9b8549b48-lhgk7\" (UID: \"9a9d063f-0537-429d-990e-09d458a586d7\") " pod="calico-system/whisker-9b8549b48-lhgk7" Jan 21 01:02:33.655595 systemd[1]: Created slice kubepods-burstable-pod8dde797d_8111_4b2e_a7ed_5cc610d0e8e0.slice - libcontainer container kubepods-burstable-pod8dde797d_8111_4b2e_a7ed_5cc610d0e8e0.slice. 
Jan 21 01:02:33.656012 kubelet[3468]: I0121 01:02:33.643038 3468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/bec76da0-2399-4e6c-952b-9dc7ad88a302-calico-apiserver-certs\") pod \"calico-apiserver-6b9ccb8fff-8h9ht\" (UID: \"bec76da0-2399-4e6c-952b-9dc7ad88a302\") " pod="calico-apiserver/calico-apiserver-6b9ccb8fff-8h9ht" Jan 21 01:02:33.656012 kubelet[3468]: I0121 01:02:33.643066 3468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98vll\" (UniqueName: \"kubernetes.io/projected/bec76da0-2399-4e6c-952b-9dc7ad88a302-kube-api-access-98vll\") pod \"calico-apiserver-6b9ccb8fff-8h9ht\" (UID: \"bec76da0-2399-4e6c-952b-9dc7ad88a302\") " pod="calico-apiserver/calico-apiserver-6b9ccb8fff-8h9ht" Jan 21 01:02:33.656012 kubelet[3468]: I0121 01:02:33.643094 3468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/b0fd67ff-2b5d-470f-b242-daa718038f97-goldmane-key-pair\") pod \"goldmane-666569f655-vhxxc\" (UID: \"b0fd67ff-2b5d-470f-b242-daa718038f97\") " pod="calico-system/goldmane-666569f655-vhxxc" Jan 21 01:02:33.656012 kubelet[3468]: I0121 01:02:33.643127 3468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9a9d063f-0537-429d-990e-09d458a586d7-whisker-ca-bundle\") pod \"whisker-9b8549b48-lhgk7\" (UID: \"9a9d063f-0537-429d-990e-09d458a586d7\") " pod="calico-system/whisker-9b8549b48-lhgk7" Jan 21 01:02:33.656012 kubelet[3468]: I0121 01:02:33.643151 3468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b0fd67ff-2b5d-470f-b242-daa718038f97-config\") pod \"goldmane-666569f655-vhxxc\" (UID: \"b0fd67ff-2b5d-470f-b242-daa718038f97\") " pod="calico-system/goldmane-666569f655-vhxxc" Jan 21 01:02:33.656365 kubelet[3468]: I0121 01:02:33.643177 3468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b0fd67ff-2b5d-470f-b242-daa718038f97-goldmane-ca-bundle\") pod \"goldmane-666569f655-vhxxc\" (UID: \"b0fd67ff-2b5d-470f-b242-daa718038f97\") " pod="calico-system/goldmane-666569f655-vhxxc" Jan 21 01:02:33.656365 kubelet[3468]: I0121 01:02:33.643891 3468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lg2hw\" (UniqueName: \"kubernetes.io/projected/50c0e75e-1727-4fb1-ab60-7bbb5f8cfadf-kube-api-access-lg2hw\") pod \"coredns-668d6bf9bc-hfnnv\" (UID: \"50c0e75e-1727-4fb1-ab60-7bbb5f8cfadf\") " pod="kube-system/coredns-668d6bf9bc-hfnnv" Jan 21 01:02:33.656365 kubelet[3468]: I0121 01:02:33.643926 3468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxm28\" (UniqueName: \"kubernetes.io/projected/5e4fb692-d42b-4062-b09a-d89e5a5a14cf-kube-api-access-vxm28\") pod \"calico-kube-controllers-764d9fb657-cw7sh\" (UID: \"5e4fb692-d42b-4062-b09a-d89e5a5a14cf\") " pod="calico-system/calico-kube-controllers-764d9fb657-cw7sh" Jan 21 01:02:33.656365 kubelet[3468]: I0121 01:02:33.643978 3468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: 
\"kubernetes.io/secret/35190377-3d75-48f7-8c27-98f24edff14f-calico-apiserver-certs\") pod \"calico-apiserver-95fb8fc69-kstsq\" (UID: \"35190377-3d75-48f7-8c27-98f24edff14f\") " pod="calico-apiserver/calico-apiserver-95fb8fc69-kstsq" Jan 21 01:02:33.666754 systemd[1]: Created slice kubepods-besteffort-podb0fd67ff_2b5d_470f_b242_daa718038f97.slice - libcontainer container kubepods-besteffort-podb0fd67ff_2b5d_470f_b242_daa718038f97.slice. Jan 21 01:02:33.889242 containerd[1962]: time="2026-01-21T01:02:33.889131019Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-hfnnv,Uid:50c0e75e-1727-4fb1-ab60-7bbb5f8cfadf,Namespace:kube-system,Attempt:0,}" Jan 21 01:02:33.903703 containerd[1962]: time="2026-01-21T01:02:33.903501285Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-764d9fb657-cw7sh,Uid:5e4fb692-d42b-4062-b09a-d89e5a5a14cf,Namespace:calico-system,Attempt:0,}" Jan 21 01:02:33.939254 containerd[1962]: time="2026-01-21T01:02:33.938747608Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b9ccb8fff-8h9ht,Uid:bec76da0-2399-4e6c-952b-9dc7ad88a302,Namespace:calico-apiserver,Attempt:0,}" Jan 21 01:02:33.954671 containerd[1962]: time="2026-01-21T01:02:33.954580750Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b9ccb8fff-hdchl,Uid:0aa53f42-4d9d-4bec-8857-7799d5876dce,Namespace:calico-apiserver,Attempt:0,}" Jan 21 01:02:33.958285 containerd[1962]: time="2026-01-21T01:02:33.956925314Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-95fb8fc69-kstsq,Uid:35190377-3d75-48f7-8c27-98f24edff14f,Namespace:calico-apiserver,Attempt:0,}" Jan 21 01:02:33.964689 containerd[1962]: time="2026-01-21T01:02:33.963872233Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-m6nqt,Uid:8dde797d-8111-4b2e-a7ed-5cc610d0e8e0,Namespace:kube-system,Attempt:0,}" Jan 21 01:02:33.974737 containerd[1962]: time="2026-01-21T01:02:33.974694955Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-vhxxc,Uid:b0fd67ff-2b5d-470f-b242-daa718038f97,Namespace:calico-system,Attempt:0,}" Jan 21 01:02:34.166353 systemd[1]: Created slice kubepods-besteffort-pod76d94f9b_071e_45b7_9881_314d22adc37f.slice - libcontainer container kubepods-besteffort-pod76d94f9b_071e_45b7_9881_314d22adc37f.slice. 
Jan 21 01:02:34.170603 containerd[1962]: time="2026-01-21T01:02:34.170507762Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9xsfz,Uid:76d94f9b-071e-45b7-9881-314d22adc37f,Namespace:calico-system,Attempt:0,}" Jan 21 01:02:34.409715 containerd[1962]: time="2026-01-21T01:02:34.409409733Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 21 01:02:34.477927 containerd[1962]: time="2026-01-21T01:02:34.476656728Z" level=error msg="Failed to destroy network for sandbox \"4cf064557a4f61ae58190ed7473654bdc0eca85ecad6f68fd5ff3db417ca38ca\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 21 01:02:34.491191 containerd[1962]: time="2026-01-21T01:02:34.491049713Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-vhxxc,Uid:b0fd67ff-2b5d-470f-b242-daa718038f97,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4cf064557a4f61ae58190ed7473654bdc0eca85ecad6f68fd5ff3db417ca38ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 21 01:02:34.509654 kubelet[3468]: E0121 01:02:34.508630 3468 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4cf064557a4f61ae58190ed7473654bdc0eca85ecad6f68fd5ff3db417ca38ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 21 01:02:34.509654 kubelet[3468]: E0121 01:02:34.508743 3468 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4cf064557a4f61ae58190ed7473654bdc0eca85ecad6f68fd5ff3db417ca38ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-vhxxc" Jan 21 01:02:34.509654 kubelet[3468]: E0121 01:02:34.508776 3468 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4cf064557a4f61ae58190ed7473654bdc0eca85ecad6f68fd5ff3db417ca38ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-vhxxc" Jan 21 01:02:34.513356 kubelet[3468]: E0121 01:02:34.508832 3468 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-vhxxc_calico-system(b0fd67ff-2b5d-470f-b242-daa718038f97)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-vhxxc_calico-system(b0fd67ff-2b5d-470f-b242-daa718038f97)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4cf064557a4f61ae58190ed7473654bdc0eca85ecad6f68fd5ff3db417ca38ca\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-vhxxc" 
podUID="b0fd67ff-2b5d-470f-b242-daa718038f97" Jan 21 01:02:34.564251 containerd[1962]: time="2026-01-21T01:02:34.563630842Z" level=error msg="Failed to destroy network for sandbox \"672084c1d20250006254aafaad67b16138b69aecba2acad6f5600162697b5f82\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 21 01:02:34.570570 systemd[1]: run-netns-cni\x2d4115872b\x2d660d\x2d54ff\x2dea78\x2d4fc2843bf170.mount: Deactivated successfully. Jan 21 01:02:34.573178 containerd[1962]: time="2026-01-21T01:02:34.573129321Z" level=error msg="Failed to destroy network for sandbox \"77af73704a65755b2a6bbe9a19ef25e163fbce3c15612a6fc2873989dd066b88\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 21 01:02:34.578471 containerd[1962]: time="2026-01-21T01:02:34.578416713Z" level=error msg="Failed to destroy network for sandbox \"b13b92e5c73fa101bcedece136dc36d4ec725f1f4a5842506a0e8b38731f5edb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 21 01:02:34.579108 systemd[1]: run-netns-cni\x2dc4d31c6f\x2d8ac3\x2d6417\x2db278\x2d49d10fffd087.mount: Deactivated successfully. Jan 21 01:02:34.580984 containerd[1962]: time="2026-01-21T01:02:34.580927497Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-hfnnv,Uid:50c0e75e-1727-4fb1-ab60-7bbb5f8cfadf,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"672084c1d20250006254aafaad67b16138b69aecba2acad6f5600162697b5f82\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 21 01:02:34.582254 kubelet[3468]: E0121 01:02:34.581253 3468 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"672084c1d20250006254aafaad67b16138b69aecba2acad6f5600162697b5f82\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 21 01:02:34.582254 kubelet[3468]: E0121 01:02:34.581321 3468 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"672084c1d20250006254aafaad67b16138b69aecba2acad6f5600162697b5f82\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-hfnnv" Jan 21 01:02:34.582254 kubelet[3468]: E0121 01:02:34.581348 3468 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"672084c1d20250006254aafaad67b16138b69aecba2acad6f5600162697b5f82\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-hfnnv" Jan 21 01:02:34.582471 kubelet[3468]: E0121 01:02:34.581403 3468 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-hfnnv_kube-system(50c0e75e-1727-4fb1-ab60-7bbb5f8cfadf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-hfnnv_kube-system(50c0e75e-1727-4fb1-ab60-7bbb5f8cfadf)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"672084c1d20250006254aafaad67b16138b69aecba2acad6f5600162697b5f82\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-hfnnv" podUID="50c0e75e-1727-4fb1-ab60-7bbb5f8cfadf" Jan 21 01:02:34.588240 containerd[1962]: time="2026-01-21T01:02:34.585299107Z" level=error msg="Failed to destroy network for sandbox \"df0f6009ef9de8882b911e36cd1330b954de9da59b9a66a24becce73ea6a1629\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 21 01:02:34.588240 containerd[1962]: time="2026-01-21T01:02:34.586044164Z" level=error msg="Failed to destroy network for sandbox \"994d39ed30804a48d69182207905c83189a68c41feb8551c8b276706ee982a30\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 21 01:02:34.588240 containerd[1962]: time="2026-01-21T01:02:34.587063623Z" level=error msg="Failed to destroy network for sandbox \"ca55acf0827c906c18b32af421b2c2e8ae33627a501bd8925554280baadae3bd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 21 01:02:34.588120 systemd[1]: run-netns-cni\x2d9062769c\x2d1e1d\x2d54aa\x2db9bb\x2de290b6a95eb0.mount: Deactivated successfully. Jan 21 01:02:34.594248 containerd[1962]: time="2026-01-21T01:02:34.594039098Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-m6nqt,Uid:8dde797d-8111-4b2e-a7ed-5cc610d0e8e0,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b13b92e5c73fa101bcedece136dc36d4ec725f1f4a5842506a0e8b38731f5edb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 21 01:02:34.596378 containerd[1962]: time="2026-01-21T01:02:34.595557387Z" level=error msg="Failed to destroy network for sandbox \"b47c376778dbd218b9cb7ddfd486523ec44e69d15c79e667496418d103532c2f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 21 01:02:34.596822 systemd[1]: run-netns-cni\x2d83926a73\x2d7335\x2d2ce0\x2d16bd\x2d477dfb8a26bc.mount: Deactivated successfully. Jan 21 01:02:34.596936 systemd[1]: run-netns-cni\x2d5c7af5b0\x2dc016\x2d70a0\x2d0c10\x2d002d97082155.mount: Deactivated successfully. Jan 21 01:02:34.597003 systemd[1]: run-netns-cni\x2d627d15e0\x2dc32a\x2d0035\x2d1fe0\x2dee52892fb5ae.mount: Deactivated successfully. 
Jan 21 01:02:34.626925 kubelet[3468]: E0121 01:02:34.597645 3468 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b13b92e5c73fa101bcedece136dc36d4ec725f1f4a5842506a0e8b38731f5edb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 21 01:02:34.626925 kubelet[3468]: E0121 01:02:34.598009 3468 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b13b92e5c73fa101bcedece136dc36d4ec725f1f4a5842506a0e8b38731f5edb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-m6nqt" Jan 21 01:02:34.626925 kubelet[3468]: E0121 01:02:34.598074 3468 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b13b92e5c73fa101bcedece136dc36d4ec725f1f4a5842506a0e8b38731f5edb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-m6nqt" Jan 21 01:02:34.627087 containerd[1962]: time="2026-01-21T01:02:34.599886537Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-764d9fb657-cw7sh,Uid:5e4fb692-d42b-4062-b09a-d89e5a5a14cf,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"77af73704a65755b2a6bbe9a19ef25e163fbce3c15612a6fc2873989dd066b88\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 21 01:02:34.627087 containerd[1962]: time="2026-01-21T01:02:34.602469275Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-95fb8fc69-kstsq,Uid:35190377-3d75-48f7-8c27-98f24edff14f,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"994d39ed30804a48d69182207905c83189a68c41feb8551c8b276706ee982a30\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 21 01:02:34.627087 containerd[1962]: time="2026-01-21T01:02:34.605968039Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b9ccb8fff-hdchl,Uid:0aa53f42-4d9d-4bec-8857-7799d5876dce,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"df0f6009ef9de8882b911e36cd1330b954de9da59b9a66a24becce73ea6a1629\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 21 01:02:34.627277 kubelet[3468]: E0121 01:02:34.598163 3468 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-m6nqt_kube-system(8dde797d-8111-4b2e-a7ed-5cc610d0e8e0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"coredns-668d6bf9bc-m6nqt_kube-system(8dde797d-8111-4b2e-a7ed-5cc610d0e8e0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b13b92e5c73fa101bcedece136dc36d4ec725f1f4a5842506a0e8b38731f5edb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-m6nqt" podUID="8dde797d-8111-4b2e-a7ed-5cc610d0e8e0" Jan 21 01:02:34.627277 kubelet[3468]: E0121 01:02:34.600367 3468 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"77af73704a65755b2a6bbe9a19ef25e163fbce3c15612a6fc2873989dd066b88\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 21 01:02:34.627277 kubelet[3468]: E0121 01:02:34.600448 3468 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"77af73704a65755b2a6bbe9a19ef25e163fbce3c15612a6fc2873989dd066b88\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-764d9fb657-cw7sh" Jan 21 01:02:34.627383 containerd[1962]: time="2026-01-21T01:02:34.607789071Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b9ccb8fff-8h9ht,Uid:bec76da0-2399-4e6c-952b-9dc7ad88a302,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b47c376778dbd218b9cb7ddfd486523ec44e69d15c79e667496418d103532c2f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 21 01:02:34.627383 containerd[1962]: time="2026-01-21T01:02:34.609297482Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9xsfz,Uid:76d94f9b-071e-45b7-9881-314d22adc37f,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ca55acf0827c906c18b32af421b2c2e8ae33627a501bd8925554280baadae3bd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 21 01:02:34.627466 kubelet[3468]: E0121 01:02:34.601996 3468 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"77af73704a65755b2a6bbe9a19ef25e163fbce3c15612a6fc2873989dd066b88\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-764d9fb657-cw7sh" Jan 21 01:02:34.627466 kubelet[3468]: E0121 01:02:34.602068 3468 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-764d9fb657-cw7sh_calico-system(5e4fb692-d42b-4062-b09a-d89e5a5a14cf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-764d9fb657-cw7sh_calico-system(5e4fb692-d42b-4062-b09a-d89e5a5a14cf)\\\": rpc error: 
code = Unknown desc = failed to setup network for sandbox \\\"77af73704a65755b2a6bbe9a19ef25e163fbce3c15612a6fc2873989dd066b88\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-764d9fb657-cw7sh" podUID="5e4fb692-d42b-4062-b09a-d89e5a5a14cf" Jan 21 01:02:34.627466 kubelet[3468]: E0121 01:02:34.602753 3468 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"994d39ed30804a48d69182207905c83189a68c41feb8551c8b276706ee982a30\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 21 01:02:34.627565 kubelet[3468]: E0121 01:02:34.602829 3468 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"994d39ed30804a48d69182207905c83189a68c41feb8551c8b276706ee982a30\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-95fb8fc69-kstsq" Jan 21 01:02:34.627565 kubelet[3468]: E0121 01:02:34.602846 3468 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"994d39ed30804a48d69182207905c83189a68c41feb8551c8b276706ee982a30\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-95fb8fc69-kstsq" Jan 21 01:02:34.627565 kubelet[3468]: E0121 01:02:34.602913 3468 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-95fb8fc69-kstsq_calico-apiserver(35190377-3d75-48f7-8c27-98f24edff14f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-95fb8fc69-kstsq_calico-apiserver(35190377-3d75-48f7-8c27-98f24edff14f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"994d39ed30804a48d69182207905c83189a68c41feb8551c8b276706ee982a30\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-95fb8fc69-kstsq" podUID="35190377-3d75-48f7-8c27-98f24edff14f" Jan 21 01:02:34.627669 kubelet[3468]: E0121 01:02:34.606168 3468 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"df0f6009ef9de8882b911e36cd1330b954de9da59b9a66a24becce73ea6a1629\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 21 01:02:34.627669 kubelet[3468]: E0121 01:02:34.606245 3468 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"df0f6009ef9de8882b911e36cd1330b954de9da59b9a66a24becce73ea6a1629\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6b9ccb8fff-hdchl" Jan 21 01:02:34.627669 kubelet[3468]: E0121 01:02:34.606276 3468 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"df0f6009ef9de8882b911e36cd1330b954de9da59b9a66a24becce73ea6a1629\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6b9ccb8fff-hdchl" Jan 21 01:02:34.627752 kubelet[3468]: E0121 01:02:34.606323 3468 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6b9ccb8fff-hdchl_calico-apiserver(0aa53f42-4d9d-4bec-8857-7799d5876dce)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6b9ccb8fff-hdchl_calico-apiserver(0aa53f42-4d9d-4bec-8857-7799d5876dce)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"df0f6009ef9de8882b911e36cd1330b954de9da59b9a66a24becce73ea6a1629\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6b9ccb8fff-hdchl" podUID="0aa53f42-4d9d-4bec-8857-7799d5876dce" Jan 21 01:02:34.627752 kubelet[3468]: E0121 01:02:34.609535 3468 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b47c376778dbd218b9cb7ddfd486523ec44e69d15c79e667496418d103532c2f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 21 01:02:34.627752 kubelet[3468]: E0121 01:02:34.609587 3468 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b47c376778dbd218b9cb7ddfd486523ec44e69d15c79e667496418d103532c2f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6b9ccb8fff-8h9ht" Jan 21 01:02:34.627845 kubelet[3468]: E0121 01:02:34.609612 3468 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b47c376778dbd218b9cb7ddfd486523ec44e69d15c79e667496418d103532c2f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6b9ccb8fff-8h9ht" Jan 21 01:02:34.627845 kubelet[3468]: E0121 01:02:34.609668 3468 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6b9ccb8fff-8h9ht_calico-apiserver(bec76da0-2399-4e6c-952b-9dc7ad88a302)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6b9ccb8fff-8h9ht_calico-apiserver(bec76da0-2399-4e6c-952b-9dc7ad88a302)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b47c376778dbd218b9cb7ddfd486523ec44e69d15c79e667496418d103532c2f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6b9ccb8fff-8h9ht" podUID="bec76da0-2399-4e6c-952b-9dc7ad88a302" Jan 21 01:02:34.627845 kubelet[3468]: E0121 01:02:34.610379 3468 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ca55acf0827c906c18b32af421b2c2e8ae33627a501bd8925554280baadae3bd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 21 01:02:34.627948 kubelet[3468]: E0121 01:02:34.610424 3468 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ca55acf0827c906c18b32af421b2c2e8ae33627a501bd8925554280baadae3bd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-9xsfz" Jan 21 01:02:34.627948 kubelet[3468]: E0121 01:02:34.610445 3468 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ca55acf0827c906c18b32af421b2c2e8ae33627a501bd8925554280baadae3bd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-9xsfz" Jan 21 01:02:34.627948 kubelet[3468]: E0121 01:02:34.610497 3468 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-9xsfz_calico-system(76d94f9b-071e-45b7-9881-314d22adc37f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-9xsfz_calico-system(76d94f9b-071e-45b7-9881-314d22adc37f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ca55acf0827c906c18b32af421b2c2e8ae33627a501bd8925554280baadae3bd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-9xsfz" podUID="76d94f9b-071e-45b7-9881-314d22adc37f" Jan 21 01:02:34.746640 kubelet[3468]: E0121 01:02:34.746534 3468 configmap.go:193] Couldn't get configMap calico-system/whisker-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Jan 21 01:02:34.747857 kubelet[3468]: E0121 01:02:34.747832 3468 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9a9d063f-0537-429d-990e-09d458a586d7-whisker-ca-bundle podName:9a9d063f-0537-429d-990e-09d458a586d7 nodeName:}" failed. No retries permitted until 2026-01-21 01:02:35.247801047 +0000 UTC m=+69.303205252 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "whisker-ca-bundle" (UniqueName: "kubernetes.io/configmap/9a9d063f-0537-429d-990e-09d458a586d7-whisker-ca-bundle") pod "whisker-9b8549b48-lhgk7" (UID: "9a9d063f-0537-429d-990e-09d458a586d7") : failed to sync configmap cache: timed out waiting for the condition Jan 21 01:02:34.750564 kubelet[3468]: E0121 01:02:34.750522 3468 secret.go:189] Couldn't get secret calico-system/whisker-backend-key-pair: failed to sync secret cache: timed out waiting for the condition Jan 21 01:02:34.750767 kubelet[3468]: E0121 01:02:34.750604 3468 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9a9d063f-0537-429d-990e-09d458a586d7-whisker-backend-key-pair podName:9a9d063f-0537-429d-990e-09d458a586d7 nodeName:}" failed. No retries permitted until 2026-01-21 01:02:35.250585576 +0000 UTC m=+69.305989781 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "whisker-backend-key-pair" (UniqueName: "kubernetes.io/secret/9a9d063f-0537-429d-990e-09d458a586d7-whisker-backend-key-pair") pod "whisker-9b8549b48-lhgk7" (UID: "9a9d063f-0537-429d-990e-09d458a586d7") : failed to sync secret cache: timed out waiting for the condition Jan 21 01:02:35.415686 containerd[1962]: time="2026-01-21T01:02:35.415637206Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-9b8549b48-lhgk7,Uid:9a9d063f-0537-429d-990e-09d458a586d7,Namespace:calico-system,Attempt:0,}" Jan 21 01:02:35.473069 containerd[1962]: time="2026-01-21T01:02:35.473011286Z" level=error msg="Failed to destroy network for sandbox \"2d76f8a8fe3aabc935ffb8994dab054740262752857df45decd732a408f2fffe\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 21 01:02:35.475224 containerd[1962]: time="2026-01-21T01:02:35.475141989Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-9b8549b48-lhgk7,Uid:9a9d063f-0537-429d-990e-09d458a586d7,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2d76f8a8fe3aabc935ffb8994dab054740262752857df45decd732a408f2fffe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 21 01:02:35.475556 kubelet[3468]: E0121 01:02:35.475516 3468 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2d76f8a8fe3aabc935ffb8994dab054740262752857df45decd732a408f2fffe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 21 01:02:35.475648 kubelet[3468]: E0121 01:02:35.475600 3468 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2d76f8a8fe3aabc935ffb8994dab054740262752857df45decd732a408f2fffe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-9b8549b48-lhgk7" Jan 21 01:02:35.475648 kubelet[3468]: E0121 01:02:35.475634 3468 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"2d76f8a8fe3aabc935ffb8994dab054740262752857df45decd732a408f2fffe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-9b8549b48-lhgk7" Jan 21 01:02:35.475771 kubelet[3468]: E0121 01:02:35.475709 3468 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-9b8549b48-lhgk7_calico-system(9a9d063f-0537-429d-990e-09d458a586d7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-9b8549b48-lhgk7_calico-system(9a9d063f-0537-429d-990e-09d458a586d7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2d76f8a8fe3aabc935ffb8994dab054740262752857df45decd732a408f2fffe\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-9b8549b48-lhgk7" podUID="9a9d063f-0537-429d-990e-09d458a586d7" Jan 21 01:02:35.502472 systemd[1]: run-netns-cni\x2dae70d123\x2d1acc\x2ddf7d\x2d541e\x2d030cf6d88078.mount: Deactivated successfully. Jan 21 01:02:41.968904 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3224780439.mount: Deactivated successfully. Jan 21 01:02:42.004565 containerd[1962]: time="2026-01-21T01:02:42.001461911Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 01:02:42.036252 containerd[1962]: time="2026-01-21T01:02:42.035504302Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156880025" Jan 21 01:02:42.078352 containerd[1962]: time="2026-01-21T01:02:42.078308853Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 01:02:42.080696 containerd[1962]: time="2026-01-21T01:02:42.080632872Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 01:02:42.081489 containerd[1962]: time="2026-01-21T01:02:42.081138772Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 7.671650092s" Jan 21 01:02:42.091419 containerd[1962]: time="2026-01-21T01:02:42.091334253Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Jan 21 01:02:42.118281 containerd[1962]: time="2026-01-21T01:02:42.118221053Z" level=info msg="CreateContainer within sandbox \"bcbf799d71f8f39e30cc1ca4727f4b7f5494de1d776bdaa069761a4214f98808\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 21 01:02:42.219583 containerd[1962]: time="2026-01-21T01:02:42.219399796Z" level=info msg="Container de58c1d61888673f8d1088ef46fefce39f93112dbbf02c2c1a5651ba1296bcb1: CDI devices from CRI Config.CDIDevices: []" Jan 21 01:02:42.221086 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount3868668556.mount: Deactivated successfully. Jan 21 01:02:42.294411 containerd[1962]: time="2026-01-21T01:02:42.294360924Z" level=info msg="CreateContainer within sandbox \"bcbf799d71f8f39e30cc1ca4727f4b7f5494de1d776bdaa069761a4214f98808\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"de58c1d61888673f8d1088ef46fefce39f93112dbbf02c2c1a5651ba1296bcb1\"" Jan 21 01:02:42.296822 containerd[1962]: time="2026-01-21T01:02:42.296781295Z" level=info msg="StartContainer for \"de58c1d61888673f8d1088ef46fefce39f93112dbbf02c2c1a5651ba1296bcb1\"" Jan 21 01:02:42.300376 containerd[1962]: time="2026-01-21T01:02:42.299814166Z" level=info msg="connecting to shim de58c1d61888673f8d1088ef46fefce39f93112dbbf02c2c1a5651ba1296bcb1" address="unix:///run/containerd/s/2a5b9f64761f6b0d9a1d54533a1dafe90fcd8f25fdae720aa724526661304293" protocol=ttrpc version=3 Jan 21 01:02:42.438578 systemd[1]: Started cri-containerd-de58c1d61888673f8d1088ef46fefce39f93112dbbf02c2c1a5651ba1296bcb1.scope - libcontainer container de58c1d61888673f8d1088ef46fefce39f93112dbbf02c2c1a5651ba1296bcb1. Jan 21 01:02:42.519000 audit: BPF prog-id=179 op=LOAD Jan 21 01:02:42.557012 kernel: kauditd_printk_skb: 6 callbacks suppressed Jan 21 01:02:42.557098 kernel: audit: type=1334 audit(1768957362.519:592): prog-id=179 op=LOAD Jan 21 01:02:42.557125 kernel: audit: type=1300 audit(1768957362.519:592): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00016c488 a2=98 a3=0 items=0 ppid=3977 pid=4487 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:42.557148 kernel: audit: type=1327 audit(1768957362.519:592): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6465353863316436313838383637336638643130383865663436666566 Jan 21 01:02:42.557170 kernel: audit: type=1334 audit(1768957362.519:593): prog-id=180 op=LOAD Jan 21 01:02:42.557197 kernel: audit: type=1300 audit(1768957362.519:593): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00016c218 a2=98 a3=0 items=0 ppid=3977 pid=4487 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:42.557245 kernel: audit: type=1327 audit(1768957362.519:593): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6465353863316436313838383637336638643130383865663436666566 Jan 21 01:02:42.519000 audit[4487]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00016c488 a2=98 a3=0 items=0 ppid=3977 pid=4487 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:42.569264 kernel: audit: type=1334 audit(1768957362.519:594): prog-id=180 op=UNLOAD Jan 21 01:02:42.569354 kernel: audit: type=1300 audit(1768957362.519:594): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3977 pid=4487 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" 
exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:42.569398 kernel: audit: type=1327 audit(1768957362.519:594): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6465353863316436313838383637336638643130383865663436666566 Jan 21 01:02:42.519000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6465353863316436313838383637336638643130383865663436666566 Jan 21 01:02:42.519000 audit: BPF prog-id=180 op=LOAD Jan 21 01:02:42.519000 audit[4487]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00016c218 a2=98 a3=0 items=0 ppid=3977 pid=4487 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:42.519000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6465353863316436313838383637336638643130383865663436666566 Jan 21 01:02:42.519000 audit: BPF prog-id=180 op=UNLOAD Jan 21 01:02:42.519000 audit[4487]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3977 pid=4487 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:42.519000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6465353863316436313838383637336638643130383865663436666566 Jan 21 01:02:42.576251 kernel: audit: type=1334 audit(1768957362.519:595): prog-id=179 op=UNLOAD Jan 21 01:02:42.519000 audit: BPF prog-id=179 op=UNLOAD Jan 21 01:02:42.519000 audit[4487]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3977 pid=4487 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:42.519000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6465353863316436313838383637336638643130383865663436666566 Jan 21 01:02:42.519000 audit: BPF prog-id=181 op=LOAD Jan 21 01:02:42.519000 audit[4487]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00016c6e8 a2=98 a3=0 items=0 ppid=3977 pid=4487 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:42.519000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6465353863316436313838383637336638643130383865663436666566 Jan 21 01:02:42.681906 
containerd[1962]: time="2026-01-21T01:02:42.681864596Z" level=info msg="StartContainer for \"de58c1d61888673f8d1088ef46fefce39f93112dbbf02c2c1a5651ba1296bcb1\" returns successfully" Jan 21 01:02:43.707914 kubelet[3468]: I0121 01:02:43.707799 3468 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-jprlx" podStartSLOduration=2.7362929769999997 podStartE2EDuration="25.707779031s" podCreationTimestamp="2026-01-21 01:02:18 +0000 UTC" firstStartedPulling="2026-01-21 01:02:19.120855973 +0000 UTC m=+53.176260189" lastFinishedPulling="2026-01-21 01:02:42.092342039 +0000 UTC m=+76.147746243" observedRunningTime="2026-01-21 01:02:43.70452793 +0000 UTC m=+77.759932156" watchObservedRunningTime="2026-01-21 01:02:43.707779031 +0000 UTC m=+77.763183256" Jan 21 01:02:46.159469 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 21 01:02:46.161428 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jan 21 01:02:46.174991 containerd[1962]: time="2026-01-21T01:02:46.174959343Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-hfnnv,Uid:50c0e75e-1727-4fb1-ab60-7bbb5f8cfadf,Namespace:kube-system,Attempt:0,}" Jan 21 01:02:46.180374 containerd[1962]: time="2026-01-21T01:02:46.180258995Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9xsfz,Uid:76d94f9b-071e-45b7-9881-314d22adc37f,Namespace:calico-system,Attempt:0,}" Jan 21 01:02:46.182022 containerd[1962]: time="2026-01-21T01:02:46.181811158Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-95fb8fc69-kstsq,Uid:35190377-3d75-48f7-8c27-98f24edff14f,Namespace:calico-apiserver,Attempt:0,}" Jan 21 01:02:46.399323 containerd[1962]: time="2026-01-21T01:02:46.399276176Z" level=error msg="Failed to destroy network for sandbox \"ebe3cb4e3b62d7b6e2910cb6720204e3783e4a452f7dd8ae94d72d88ae1ecb43\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 21 01:02:46.404571 systemd[1]: run-netns-cni\x2d0969c346\x2d366e\x2d2189\x2d1134\x2dec397aa97a11.mount: Deactivated successfully. 
Jan 21 01:02:46.422897 containerd[1962]: time="2026-01-21T01:02:46.422323568Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9xsfz,Uid:76d94f9b-071e-45b7-9881-314d22adc37f,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ebe3cb4e3b62d7b6e2910cb6720204e3783e4a452f7dd8ae94d72d88ae1ecb43\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 21 01:02:46.424410 kubelet[3468]: E0121 01:02:46.422570 3468 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ebe3cb4e3b62d7b6e2910cb6720204e3783e4a452f7dd8ae94d72d88ae1ecb43\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 21 01:02:46.424410 kubelet[3468]: E0121 01:02:46.422640 3468 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ebe3cb4e3b62d7b6e2910cb6720204e3783e4a452f7dd8ae94d72d88ae1ecb43\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-9xsfz" Jan 21 01:02:46.424410 kubelet[3468]: E0121 01:02:46.422672 3468 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ebe3cb4e3b62d7b6e2910cb6720204e3783e4a452f7dd8ae94d72d88ae1ecb43\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-9xsfz" Jan 21 01:02:46.426704 kubelet[3468]: E0121 01:02:46.424231 3468 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-9xsfz_calico-system(76d94f9b-071e-45b7-9881-314d22adc37f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-9xsfz_calico-system(76d94f9b-071e-45b7-9881-314d22adc37f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ebe3cb4e3b62d7b6e2910cb6720204e3783e4a452f7dd8ae94d72d88ae1ecb43\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-9xsfz" podUID="76d94f9b-071e-45b7-9881-314d22adc37f" Jan 21 01:02:46.450393 containerd[1962]: time="2026-01-21T01:02:46.450340846Z" level=error msg="Failed to destroy network for sandbox \"5c0a7f68c2e98721959ba855988ceebb8e8bf0d2d61d8694383c88353c9b4118\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 21 01:02:46.455038 systemd[1]: run-netns-cni\x2d05912d0c\x2dbd28\x2d9318\x2daeb5\x2de52c4ccdfa98.mount: Deactivated successfully. 
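The sandbox failures at 01:02:46 all stem from the same stat on /var/lib/calico/nodename, the readiness marker that calico-node writes and the Calico CNI plugin reads to learn the node name; the error text itself suggests verifying that calico-node is running and has /var/lib/calico/ mounted. A small sketch of that check, to be run on the node (the path is taken from the log; this is an illustration, not the plugin's actual code):

```python
#!/usr/bin/env python3
# Probe the readiness marker the Calico CNI plugin fails to stat in the
# errors above. The path is taken from the error message; run on the node.
import sys

NODENAME_FILE = "/var/lib/calico/nodename"

try:
    with open(NODENAME_FILE) as f:
        print(f"{NODENAME_FILE} exists, node name: {f.read().strip()!r}")
except FileNotFoundError:
    print(f"{NODENAME_FILE} is missing - calico-node has not written it yet;")
    print("check that the calico-node pod is Running and mounts /var/lib/calico/")
    sys.exit(1)
```

In this log the condition clears once the calico-node container started at 01:02:42 finishes initialising, which is why the retried sandboxes at 01:02:47-48 are networked successfully further down.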
Jan 21 01:02:46.456422 containerd[1962]: time="2026-01-21T01:02:46.456235081Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-hfnnv,Uid:50c0e75e-1727-4fb1-ab60-7bbb5f8cfadf,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5c0a7f68c2e98721959ba855988ceebb8e8bf0d2d61d8694383c88353c9b4118\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 21 01:02:46.458356 kubelet[3468]: E0121 01:02:46.457579 3468 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5c0a7f68c2e98721959ba855988ceebb8e8bf0d2d61d8694383c88353c9b4118\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 21 01:02:46.458356 kubelet[3468]: E0121 01:02:46.457653 3468 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5c0a7f68c2e98721959ba855988ceebb8e8bf0d2d61d8694383c88353c9b4118\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-hfnnv" Jan 21 01:02:46.458356 kubelet[3468]: E0121 01:02:46.457683 3468 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5c0a7f68c2e98721959ba855988ceebb8e8bf0d2d61d8694383c88353c9b4118\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-hfnnv" Jan 21 01:02:46.458563 kubelet[3468]: E0121 01:02:46.457730 3468 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-hfnnv_kube-system(50c0e75e-1727-4fb1-ab60-7bbb5f8cfadf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-hfnnv_kube-system(50c0e75e-1727-4fb1-ab60-7bbb5f8cfadf)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5c0a7f68c2e98721959ba855988ceebb8e8bf0d2d61d8694383c88353c9b4118\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-hfnnv" podUID="50c0e75e-1727-4fb1-ab60-7bbb5f8cfadf" Jan 21 01:02:46.476409 containerd[1962]: time="2026-01-21T01:02:46.476306172Z" level=error msg="Failed to destroy network for sandbox \"a087146084b1591ec00713ec9f6ee94bdc240c800cd55066290bc40b2bc81d59\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 21 01:02:46.482803 containerd[1962]: time="2026-01-21T01:02:46.482680939Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-95fb8fc69-kstsq,Uid:35190377-3d75-48f7-8c27-98f24edff14f,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"a087146084b1591ec00713ec9f6ee94bdc240c800cd55066290bc40b2bc81d59\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 21 01:02:46.483269 kubelet[3468]: E0121 01:02:46.483194 3468 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a087146084b1591ec00713ec9f6ee94bdc240c800cd55066290bc40b2bc81d59\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 21 01:02:46.483483 kubelet[3468]: E0121 01:02:46.483362 3468 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a087146084b1591ec00713ec9f6ee94bdc240c800cd55066290bc40b2bc81d59\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-95fb8fc69-kstsq" Jan 21 01:02:46.483612 kubelet[3468]: E0121 01:02:46.483579 3468 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a087146084b1591ec00713ec9f6ee94bdc240c800cd55066290bc40b2bc81d59\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-95fb8fc69-kstsq" Jan 21 01:02:46.484041 kubelet[3468]: E0121 01:02:46.483698 3468 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-95fb8fc69-kstsq_calico-apiserver(35190377-3d75-48f7-8c27-98f24edff14f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-95fb8fc69-kstsq_calico-apiserver(35190377-3d75-48f7-8c27-98f24edff14f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a087146084b1591ec00713ec9f6ee94bdc240c800cd55066290bc40b2bc81d59\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-95fb8fc69-kstsq" podUID="35190377-3d75-48f7-8c27-98f24edff14f" Jan 21 01:02:47.078051 kubelet[3468]: I0121 01:02:47.078001 3468 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9a9d063f-0537-429d-990e-09d458a586d7-whisker-ca-bundle\") pod \"9a9d063f-0537-429d-990e-09d458a586d7\" (UID: \"9a9d063f-0537-429d-990e-09d458a586d7\") " Jan 21 01:02:47.078051 kubelet[3468]: I0121 01:02:47.078120 3468 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/9a9d063f-0537-429d-990e-09d458a586d7-whisker-backend-key-pair\") pod \"9a9d063f-0537-429d-990e-09d458a586d7\" (UID: \"9a9d063f-0537-429d-990e-09d458a586d7\") " Jan 21 01:02:47.078051 kubelet[3468]: I0121 01:02:47.078157 3468 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4hppl\" (UniqueName: \"kubernetes.io/projected/9a9d063f-0537-429d-990e-09d458a586d7-kube-api-access-4hppl\") pod 
\"9a9d063f-0537-429d-990e-09d458a586d7\" (UID: \"9a9d063f-0537-429d-990e-09d458a586d7\") " Jan 21 01:02:47.079646 kubelet[3468]: I0121 01:02:47.079388 3468 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9a9d063f-0537-429d-990e-09d458a586d7-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "9a9d063f-0537-429d-990e-09d458a586d7" (UID: "9a9d063f-0537-429d-990e-09d458a586d7"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 01:02:47.085414 kubelet[3468]: I0121 01:02:47.085309 3468 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a9d063f-0537-429d-990e-09d458a586d7-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "9a9d063f-0537-429d-990e-09d458a586d7" (UID: "9a9d063f-0537-429d-990e-09d458a586d7"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 01:02:47.086327 kubelet[3468]: I0121 01:02:47.086289 3468 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a9d063f-0537-429d-990e-09d458a586d7-kube-api-access-4hppl" (OuterVolumeSpecName: "kube-api-access-4hppl") pod "9a9d063f-0537-429d-990e-09d458a586d7" (UID: "9a9d063f-0537-429d-990e-09d458a586d7"). InnerVolumeSpecName "kube-api-access-4hppl". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 01:02:47.133172 containerd[1962]: time="2026-01-21T01:02:47.133060477Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b9ccb8fff-hdchl,Uid:0aa53f42-4d9d-4bec-8857-7799d5876dce,Namespace:calico-apiserver,Attempt:0,}" Jan 21 01:02:47.133535 containerd[1962]: time="2026-01-21T01:02:47.133430601Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-vhxxc,Uid:b0fd67ff-2b5d-470f-b242-daa718038f97,Namespace:calico-system,Attempt:0,}" Jan 21 01:02:47.133535 containerd[1962]: time="2026-01-21T01:02:47.133060511Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b9ccb8fff-8h9ht,Uid:bec76da0-2399-4e6c-952b-9dc7ad88a302,Namespace:calico-apiserver,Attempt:0,}" Jan 21 01:02:47.178938 kubelet[3468]: I0121 01:02:47.178843 3468 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9a9d063f-0537-429d-990e-09d458a586d7-whisker-ca-bundle\") on node \"ip-172-31-28-215\" DevicePath \"\"" Jan 21 01:02:47.178938 kubelet[3468]: I0121 01:02:47.178871 3468 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/9a9d063f-0537-429d-990e-09d458a586d7-whisker-backend-key-pair\") on node \"ip-172-31-28-215\" DevicePath \"\"" Jan 21 01:02:47.178938 kubelet[3468]: I0121 01:02:47.178882 3468 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4hppl\" (UniqueName: \"kubernetes.io/projected/9a9d063f-0537-429d-990e-09d458a586d7-kube-api-access-4hppl\") on node \"ip-172-31-28-215\" DevicePath \"\"" Jan 21 01:02:47.189073 systemd[1]: run-netns-cni\x2d94739eb0\x2dd0f3\x2dc601\x2d077e\x2d0bfd1441ea99.mount: Deactivated successfully. Jan 21 01:02:47.189164 systemd[1]: var-lib-kubelet-pods-9a9d063f\x2d0537\x2d429d\x2d990e\x2d09d458a586d7-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Jan 21 01:02:47.192867 systemd[1]: var-lib-kubelet-pods-9a9d063f\x2d0537\x2d429d\x2d990e\x2d09d458a586d7-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d4hppl.mount: Deactivated successfully. Jan 21 01:02:47.691355 systemd[1]: Removed slice kubepods-besteffort-pod9a9d063f_0537_429d_990e_09d458a586d7.slice - libcontainer container kubepods-besteffort-pod9a9d063f_0537_429d_990e_09d458a586d7.slice. Jan 21 01:02:47.760382 systemd-networkd[1555]: cali4b5de6b026b: Link UP Jan 21 01:02:47.761252 systemd-networkd[1555]: cali4b5de6b026b: Gained carrier Jan 21 01:02:47.791427 containerd[1962]: 2026-01-21 01:02:47.212 [INFO][4668] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 21 01:02:47.791427 containerd[1962]: 2026-01-21 01:02:47.333 [INFO][4668] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--28--215-k8s-calico--apiserver--6b9ccb8fff--hdchl-eth0 calico-apiserver-6b9ccb8fff- calico-apiserver 0aa53f42-4d9d-4bec-8857-7799d5876dce 875 0 2026-01-21 01:02:12 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6b9ccb8fff projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-28-215 calico-apiserver-6b9ccb8fff-hdchl eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali4b5de6b026b [] [] }} ContainerID="44d1028e8a4dc88a12d9280ddb455d051ac97bd528dca7807970d577959f4124" Namespace="calico-apiserver" Pod="calico-apiserver-6b9ccb8fff-hdchl" WorkloadEndpoint="ip--172--31--28--215-k8s-calico--apiserver--6b9ccb8fff--hdchl-" Jan 21 01:02:47.791427 containerd[1962]: 2026-01-21 01:02:47.334 [INFO][4668] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="44d1028e8a4dc88a12d9280ddb455d051ac97bd528dca7807970d577959f4124" Namespace="calico-apiserver" Pod="calico-apiserver-6b9ccb8fff-hdchl" WorkloadEndpoint="ip--172--31--28--215-k8s-calico--apiserver--6b9ccb8fff--hdchl-eth0" Jan 21 01:02:47.791427 containerd[1962]: 2026-01-21 01:02:47.646 [INFO][4714] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="44d1028e8a4dc88a12d9280ddb455d051ac97bd528dca7807970d577959f4124" HandleID="k8s-pod-network.44d1028e8a4dc88a12d9280ddb455d051ac97bd528dca7807970d577959f4124" Workload="ip--172--31--28--215-k8s-calico--apiserver--6b9ccb8fff--hdchl-eth0" Jan 21 01:02:47.792167 containerd[1962]: 2026-01-21 01:02:47.650 [INFO][4714] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="44d1028e8a4dc88a12d9280ddb455d051ac97bd528dca7807970d577959f4124" HandleID="k8s-pod-network.44d1028e8a4dc88a12d9280ddb455d051ac97bd528dca7807970d577959f4124" Workload="ip--172--31--28--215-k8s-calico--apiserver--6b9ccb8fff--hdchl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000103cb0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-28-215", "pod":"calico-apiserver-6b9ccb8fff-hdchl", "timestamp":"2026-01-21 01:02:47.646569123 +0000 UTC"}, Hostname:"ip-172-31-28-215", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 21 01:02:47.792167 containerd[1962]: 2026-01-21 01:02:47.650 [INFO][4714] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Jan 21 01:02:47.792167 containerd[1962]: 2026-01-21 01:02:47.650 [INFO][4714] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 21 01:02:47.792167 containerd[1962]: 2026-01-21 01:02:47.651 [INFO][4714] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-28-215' Jan 21 01:02:47.792167 containerd[1962]: 2026-01-21 01:02:47.675 [INFO][4714] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.44d1028e8a4dc88a12d9280ddb455d051ac97bd528dca7807970d577959f4124" host="ip-172-31-28-215" Jan 21 01:02:47.792167 containerd[1962]: 2026-01-21 01:02:47.698 [INFO][4714] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-28-215" Jan 21 01:02:47.792167 containerd[1962]: 2026-01-21 01:02:47.704 [INFO][4714] ipam/ipam.go 511: Trying affinity for 192.168.122.64/26 host="ip-172-31-28-215" Jan 21 01:02:47.792167 containerd[1962]: 2026-01-21 01:02:47.707 [INFO][4714] ipam/ipam.go 158: Attempting to load block cidr=192.168.122.64/26 host="ip-172-31-28-215" Jan 21 01:02:47.792167 containerd[1962]: 2026-01-21 01:02:47.709 [INFO][4714] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.122.64/26 host="ip-172-31-28-215" Jan 21 01:02:47.796639 containerd[1962]: 2026-01-21 01:02:47.709 [INFO][4714] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.122.64/26 handle="k8s-pod-network.44d1028e8a4dc88a12d9280ddb455d051ac97bd528dca7807970d577959f4124" host="ip-172-31-28-215" Jan 21 01:02:47.796639 containerd[1962]: 2026-01-21 01:02:47.712 [INFO][4714] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.44d1028e8a4dc88a12d9280ddb455d051ac97bd528dca7807970d577959f4124 Jan 21 01:02:47.796639 containerd[1962]: 2026-01-21 01:02:47.719 [INFO][4714] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.122.64/26 handle="k8s-pod-network.44d1028e8a4dc88a12d9280ddb455d051ac97bd528dca7807970d577959f4124" host="ip-172-31-28-215" Jan 21 01:02:47.796639 containerd[1962]: 2026-01-21 01:02:47.730 [INFO][4714] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.122.65/26] block=192.168.122.64/26 handle="k8s-pod-network.44d1028e8a4dc88a12d9280ddb455d051ac97bd528dca7807970d577959f4124" host="ip-172-31-28-215" Jan 21 01:02:47.796639 containerd[1962]: 2026-01-21 01:02:47.730 [INFO][4714] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.122.65/26] handle="k8s-pod-network.44d1028e8a4dc88a12d9280ddb455d051ac97bd528dca7807970d577959f4124" host="ip-172-31-28-215" Jan 21 01:02:47.796639 containerd[1962]: 2026-01-21 01:02:47.731 [INFO][4714] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
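The IPAM exchange above claims 192.168.122.65/26 out of the block 192.168.122.64/26 that is affine to ip-172-31-28-215. A quick, purely illustrative check with the standard ipaddress module (not Calico's IPAM code) that the claimed address sits in that /26 and that the block's first assignable hosts are .65, .66 and .67, the same three addresses handed to the sandboxes below:

```python
import ipaddress

block   = ipaddress.ip_network("192.168.122.64/26")  # the node's affine block from the log
claimed = ipaddress.ip_address("192.168.122.65")     # the address IPAM just claimed

print(claimed in block)         # True: the claimed address lies inside the block
print(block.num_addresses)      # 64 addresses in a /26
print(list(block.hosts())[:3])  # first assignable hosts: .65, .66, .67
```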
Jan 21 01:02:47.796639 containerd[1962]: 2026-01-21 01:02:47.731 [INFO][4714] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.122.65/26] IPv6=[] ContainerID="44d1028e8a4dc88a12d9280ddb455d051ac97bd528dca7807970d577959f4124" HandleID="k8s-pod-network.44d1028e8a4dc88a12d9280ddb455d051ac97bd528dca7807970d577959f4124" Workload="ip--172--31--28--215-k8s-calico--apiserver--6b9ccb8fff--hdchl-eth0" Jan 21 01:02:47.797024 containerd[1962]: 2026-01-21 01:02:47.737 [INFO][4668] cni-plugin/k8s.go 418: Populated endpoint ContainerID="44d1028e8a4dc88a12d9280ddb455d051ac97bd528dca7807970d577959f4124" Namespace="calico-apiserver" Pod="calico-apiserver-6b9ccb8fff-hdchl" WorkloadEndpoint="ip--172--31--28--215-k8s-calico--apiserver--6b9ccb8fff--hdchl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--215-k8s-calico--apiserver--6b9ccb8fff--hdchl-eth0", GenerateName:"calico-apiserver-6b9ccb8fff-", Namespace:"calico-apiserver", SelfLink:"", UID:"0aa53f42-4d9d-4bec-8857-7799d5876dce", ResourceVersion:"875", Generation:0, CreationTimestamp:time.Date(2026, time.January, 21, 1, 2, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6b9ccb8fff", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-215", ContainerID:"", Pod:"calico-apiserver-6b9ccb8fff-hdchl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.122.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4b5de6b026b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 21 01:02:47.797179 containerd[1962]: 2026-01-21 01:02:47.738 [INFO][4668] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.122.65/32] ContainerID="44d1028e8a4dc88a12d9280ddb455d051ac97bd528dca7807970d577959f4124" Namespace="calico-apiserver" Pod="calico-apiserver-6b9ccb8fff-hdchl" WorkloadEndpoint="ip--172--31--28--215-k8s-calico--apiserver--6b9ccb8fff--hdchl-eth0" Jan 21 01:02:47.797179 containerd[1962]: 2026-01-21 01:02:47.739 [INFO][4668] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4b5de6b026b ContainerID="44d1028e8a4dc88a12d9280ddb455d051ac97bd528dca7807970d577959f4124" Namespace="calico-apiserver" Pod="calico-apiserver-6b9ccb8fff-hdchl" WorkloadEndpoint="ip--172--31--28--215-k8s-calico--apiserver--6b9ccb8fff--hdchl-eth0" Jan 21 01:02:47.797179 containerd[1962]: 2026-01-21 01:02:47.753 [INFO][4668] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="44d1028e8a4dc88a12d9280ddb455d051ac97bd528dca7807970d577959f4124" Namespace="calico-apiserver" Pod="calico-apiserver-6b9ccb8fff-hdchl" WorkloadEndpoint="ip--172--31--28--215-k8s-calico--apiserver--6b9ccb8fff--hdchl-eth0" Jan 21 01:02:47.797431 containerd[1962]: 2026-01-21 01:02:47.753 [INFO][4668] cni-plugin/k8s.go 446: Added Mac, interface 
name, and active container ID to endpoint ContainerID="44d1028e8a4dc88a12d9280ddb455d051ac97bd528dca7807970d577959f4124" Namespace="calico-apiserver" Pod="calico-apiserver-6b9ccb8fff-hdchl" WorkloadEndpoint="ip--172--31--28--215-k8s-calico--apiserver--6b9ccb8fff--hdchl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--215-k8s-calico--apiserver--6b9ccb8fff--hdchl-eth0", GenerateName:"calico-apiserver-6b9ccb8fff-", Namespace:"calico-apiserver", SelfLink:"", UID:"0aa53f42-4d9d-4bec-8857-7799d5876dce", ResourceVersion:"875", Generation:0, CreationTimestamp:time.Date(2026, time.January, 21, 1, 2, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6b9ccb8fff", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-215", ContainerID:"44d1028e8a4dc88a12d9280ddb455d051ac97bd528dca7807970d577959f4124", Pod:"calico-apiserver-6b9ccb8fff-hdchl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.122.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4b5de6b026b", MAC:"7a:db:69:ed:34:c5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 21 01:02:47.797642 containerd[1962]: 2026-01-21 01:02:47.779 [INFO][4668] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="44d1028e8a4dc88a12d9280ddb455d051ac97bd528dca7807970d577959f4124" Namespace="calico-apiserver" Pod="calico-apiserver-6b9ccb8fff-hdchl" WorkloadEndpoint="ip--172--31--28--215-k8s-calico--apiserver--6b9ccb8fff--hdchl-eth0" Jan 21 01:02:47.816151 (udev-worker)[4547]: Network interface NamePolicy= disabled on kernel command line. Jan 21 01:02:47.934868 (udev-worker)[4549]: Network interface NamePolicy= disabled on kernel command line. Jan 21 01:02:47.940571 systemd-networkd[1555]: caliccafe319875: Link UP Jan 21 01:02:47.946091 systemd-networkd[1555]: caliccafe319875: Gained carrier Jan 21 01:02:47.967328 systemd[1]: Created slice kubepods-besteffort-poda6a90027_9f22_4c0c_9ffd_5b7564ee55c8.slice - libcontainer container kubepods-besteffort-poda6a90027_9f22_4c0c_9ffd_5b7564ee55c8.slice. 
Jan 21 01:02:47.993142 kubelet[3468]: I0121 01:02:47.993093 3468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/a6a90027-9f22-4c0c-9ffd-5b7564ee55c8-whisker-backend-key-pair\") pod \"whisker-66b4f466fd-6gfxt\" (UID: \"a6a90027-9f22-4c0c-9ffd-5b7564ee55c8\") " pod="calico-system/whisker-66b4f466fd-6gfxt" Jan 21 01:02:47.994551 kubelet[3468]: I0121 01:02:47.994513 3468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a6a90027-9f22-4c0c-9ffd-5b7564ee55c8-whisker-ca-bundle\") pod \"whisker-66b4f466fd-6gfxt\" (UID: \"a6a90027-9f22-4c0c-9ffd-5b7564ee55c8\") " pod="calico-system/whisker-66b4f466fd-6gfxt" Jan 21 01:02:47.996051 kubelet[3468]: I0121 01:02:47.995884 3468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ssm6m\" (UniqueName: \"kubernetes.io/projected/a6a90027-9f22-4c0c-9ffd-5b7564ee55c8-kube-api-access-ssm6m\") pod \"whisker-66b4f466fd-6gfxt\" (UID: \"a6a90027-9f22-4c0c-9ffd-5b7564ee55c8\") " pod="calico-system/whisker-66b4f466fd-6gfxt" Jan 21 01:02:48.000693 containerd[1962]: 2026-01-21 01:02:47.210 [INFO][4674] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 21 01:02:48.000693 containerd[1962]: 2026-01-21 01:02:47.333 [INFO][4674] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--28--215-k8s-calico--apiserver--6b9ccb8fff--8h9ht-eth0 calico-apiserver-6b9ccb8fff- calico-apiserver bec76da0-2399-4e6c-952b-9dc7ad88a302 865 0 2026-01-21 01:02:12 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6b9ccb8fff projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-28-215 calico-apiserver-6b9ccb8fff-8h9ht eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] caliccafe319875 [] [] }} ContainerID="59cb94bb00e12978a89e8d572d9187cd950d290fa40d2708af96ef3f0ab55619" Namespace="calico-apiserver" Pod="calico-apiserver-6b9ccb8fff-8h9ht" WorkloadEndpoint="ip--172--31--28--215-k8s-calico--apiserver--6b9ccb8fff--8h9ht-" Jan 21 01:02:48.000693 containerd[1962]: 2026-01-21 01:02:47.334 [INFO][4674] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="59cb94bb00e12978a89e8d572d9187cd950d290fa40d2708af96ef3f0ab55619" Namespace="calico-apiserver" Pod="calico-apiserver-6b9ccb8fff-8h9ht" WorkloadEndpoint="ip--172--31--28--215-k8s-calico--apiserver--6b9ccb8fff--8h9ht-eth0" Jan 21 01:02:48.000693 containerd[1962]: 2026-01-21 01:02:47.646 [INFO][4716] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="59cb94bb00e12978a89e8d572d9187cd950d290fa40d2708af96ef3f0ab55619" HandleID="k8s-pod-network.59cb94bb00e12978a89e8d572d9187cd950d290fa40d2708af96ef3f0ab55619" Workload="ip--172--31--28--215-k8s-calico--apiserver--6b9ccb8fff--8h9ht-eth0" Jan 21 01:02:48.001072 containerd[1962]: 2026-01-21 01:02:47.651 [INFO][4716] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="59cb94bb00e12978a89e8d572d9187cd950d290fa40d2708af96ef3f0ab55619" HandleID="k8s-pod-network.59cb94bb00e12978a89e8d572d9187cd950d290fa40d2708af96ef3f0ab55619" Workload="ip--172--31--28--215-k8s-calico--apiserver--6b9ccb8fff--8h9ht-eth0" 
assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000319a50), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-28-215", "pod":"calico-apiserver-6b9ccb8fff-8h9ht", "timestamp":"2026-01-21 01:02:47.646831762 +0000 UTC"}, Hostname:"ip-172-31-28-215", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 21 01:02:48.001072 containerd[1962]: 2026-01-21 01:02:47.651 [INFO][4716] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 21 01:02:48.001072 containerd[1962]: 2026-01-21 01:02:47.731 [INFO][4716] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 21 01:02:48.001072 containerd[1962]: 2026-01-21 01:02:47.731 [INFO][4716] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-28-215' Jan 21 01:02:48.001072 containerd[1962]: 2026-01-21 01:02:47.777 [INFO][4716] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.59cb94bb00e12978a89e8d572d9187cd950d290fa40d2708af96ef3f0ab55619" host="ip-172-31-28-215" Jan 21 01:02:48.001072 containerd[1962]: 2026-01-21 01:02:47.807 [INFO][4716] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-28-215" Jan 21 01:02:48.001072 containerd[1962]: 2026-01-21 01:02:47.836 [INFO][4716] ipam/ipam.go 511: Trying affinity for 192.168.122.64/26 host="ip-172-31-28-215" Jan 21 01:02:48.001072 containerd[1962]: 2026-01-21 01:02:47.860 [INFO][4716] ipam/ipam.go 158: Attempting to load block cidr=192.168.122.64/26 host="ip-172-31-28-215" Jan 21 01:02:48.001072 containerd[1962]: 2026-01-21 01:02:47.866 [INFO][4716] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.122.64/26 host="ip-172-31-28-215" Jan 21 01:02:48.003700 containerd[1962]: 2026-01-21 01:02:47.868 [INFO][4716] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.122.64/26 handle="k8s-pod-network.59cb94bb00e12978a89e8d572d9187cd950d290fa40d2708af96ef3f0ab55619" host="ip-172-31-28-215" Jan 21 01:02:48.003700 containerd[1962]: 2026-01-21 01:02:47.878 [INFO][4716] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.59cb94bb00e12978a89e8d572d9187cd950d290fa40d2708af96ef3f0ab55619 Jan 21 01:02:48.003700 containerd[1962]: 2026-01-21 01:02:47.897 [INFO][4716] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.122.64/26 handle="k8s-pod-network.59cb94bb00e12978a89e8d572d9187cd950d290fa40d2708af96ef3f0ab55619" host="ip-172-31-28-215" Jan 21 01:02:48.003700 containerd[1962]: 2026-01-21 01:02:47.913 [INFO][4716] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.122.66/26] block=192.168.122.64/26 handle="k8s-pod-network.59cb94bb00e12978a89e8d572d9187cd950d290fa40d2708af96ef3f0ab55619" host="ip-172-31-28-215" Jan 21 01:02:48.003700 containerd[1962]: 2026-01-21 01:02:47.913 [INFO][4716] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.122.66/26] handle="k8s-pod-network.59cb94bb00e12978a89e8d572d9187cd950d290fa40d2708af96ef3f0ab55619" host="ip-172-31-28-215" Jan 21 01:02:48.003700 containerd[1962]: 2026-01-21 01:02:47.913 [INFO][4716] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 21 01:02:48.003700 containerd[1962]: 2026-01-21 01:02:47.913 [INFO][4716] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.122.66/26] IPv6=[] ContainerID="59cb94bb00e12978a89e8d572d9187cd950d290fa40d2708af96ef3f0ab55619" HandleID="k8s-pod-network.59cb94bb00e12978a89e8d572d9187cd950d290fa40d2708af96ef3f0ab55619" Workload="ip--172--31--28--215-k8s-calico--apiserver--6b9ccb8fff--8h9ht-eth0" Jan 21 01:02:48.004261 containerd[1962]: 2026-01-21 01:02:47.927 [INFO][4674] cni-plugin/k8s.go 418: Populated endpoint ContainerID="59cb94bb00e12978a89e8d572d9187cd950d290fa40d2708af96ef3f0ab55619" Namespace="calico-apiserver" Pod="calico-apiserver-6b9ccb8fff-8h9ht" WorkloadEndpoint="ip--172--31--28--215-k8s-calico--apiserver--6b9ccb8fff--8h9ht-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--215-k8s-calico--apiserver--6b9ccb8fff--8h9ht-eth0", GenerateName:"calico-apiserver-6b9ccb8fff-", Namespace:"calico-apiserver", SelfLink:"", UID:"bec76da0-2399-4e6c-952b-9dc7ad88a302", ResourceVersion:"865", Generation:0, CreationTimestamp:time.Date(2026, time.January, 21, 1, 2, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6b9ccb8fff", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-215", ContainerID:"", Pod:"calico-apiserver-6b9ccb8fff-8h9ht", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.122.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliccafe319875", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 21 01:02:48.004372 containerd[1962]: 2026-01-21 01:02:47.927 [INFO][4674] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.122.66/32] ContainerID="59cb94bb00e12978a89e8d572d9187cd950d290fa40d2708af96ef3f0ab55619" Namespace="calico-apiserver" Pod="calico-apiserver-6b9ccb8fff-8h9ht" WorkloadEndpoint="ip--172--31--28--215-k8s-calico--apiserver--6b9ccb8fff--8h9ht-eth0" Jan 21 01:02:48.004372 containerd[1962]: 2026-01-21 01:02:47.927 [INFO][4674] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliccafe319875 ContainerID="59cb94bb00e12978a89e8d572d9187cd950d290fa40d2708af96ef3f0ab55619" Namespace="calico-apiserver" Pod="calico-apiserver-6b9ccb8fff-8h9ht" WorkloadEndpoint="ip--172--31--28--215-k8s-calico--apiserver--6b9ccb8fff--8h9ht-eth0" Jan 21 01:02:48.004372 containerd[1962]: 2026-01-21 01:02:47.953 [INFO][4674] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="59cb94bb00e12978a89e8d572d9187cd950d290fa40d2708af96ef3f0ab55619" Namespace="calico-apiserver" Pod="calico-apiserver-6b9ccb8fff-8h9ht" WorkloadEndpoint="ip--172--31--28--215-k8s-calico--apiserver--6b9ccb8fff--8h9ht-eth0" Jan 21 01:02:48.004511 containerd[1962]: 2026-01-21 01:02:47.957 [INFO][4674] cni-plugin/k8s.go 446: Added Mac, interface 
name, and active container ID to endpoint ContainerID="59cb94bb00e12978a89e8d572d9187cd950d290fa40d2708af96ef3f0ab55619" Namespace="calico-apiserver" Pod="calico-apiserver-6b9ccb8fff-8h9ht" WorkloadEndpoint="ip--172--31--28--215-k8s-calico--apiserver--6b9ccb8fff--8h9ht-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--215-k8s-calico--apiserver--6b9ccb8fff--8h9ht-eth0", GenerateName:"calico-apiserver-6b9ccb8fff-", Namespace:"calico-apiserver", SelfLink:"", UID:"bec76da0-2399-4e6c-952b-9dc7ad88a302", ResourceVersion:"865", Generation:0, CreationTimestamp:time.Date(2026, time.January, 21, 1, 2, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6b9ccb8fff", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-215", ContainerID:"59cb94bb00e12978a89e8d572d9187cd950d290fa40d2708af96ef3f0ab55619", Pod:"calico-apiserver-6b9ccb8fff-8h9ht", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.122.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliccafe319875", MAC:"ce:b4:a2:ad:02:ff", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 21 01:02:48.004602 containerd[1962]: 2026-01-21 01:02:47.992 [INFO][4674] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="59cb94bb00e12978a89e8d572d9187cd950d290fa40d2708af96ef3f0ab55619" Namespace="calico-apiserver" Pod="calico-apiserver-6b9ccb8fff-8h9ht" WorkloadEndpoint="ip--172--31--28--215-k8s-calico--apiserver--6b9ccb8fff--8h9ht-eth0" Jan 21 01:02:48.075066 systemd-networkd[1555]: cali2a8f735ee4f: Link UP Jan 21 01:02:48.075412 systemd-networkd[1555]: cali2a8f735ee4f: Gained carrier Jan 21 01:02:48.141488 containerd[1962]: time="2026-01-21T01:02:48.141446364Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-764d9fb657-cw7sh,Uid:5e4fb692-d42b-4062-b09a-d89e5a5a14cf,Namespace:calico-system,Attempt:0,}" Jan 21 01:02:48.152245 kubelet[3468]: I0121 01:02:48.149917 3468 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a9d063f-0537-429d-990e-09d458a586d7" path="/var/lib/kubelet/pods/9a9d063f-0537-429d-990e-09d458a586d7/volumes" Jan 21 01:02:48.163388 containerd[1962]: 2026-01-21 01:02:47.224 [INFO][4663] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 21 01:02:48.163388 containerd[1962]: 2026-01-21 01:02:47.333 [INFO][4663] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--28--215-k8s-goldmane--666569f655--vhxxc-eth0 goldmane-666569f655- calico-system b0fd67ff-2b5d-470f-b242-daa718038f97 873 0 2026-01-21 01:02:16 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s 
projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ip-172-31-28-215 goldmane-666569f655-vhxxc eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali2a8f735ee4f [] [] }} ContainerID="13daf896e78d8de5908aba99f3a29d254ea275483534de91be3807e137f809cc" Namespace="calico-system" Pod="goldmane-666569f655-vhxxc" WorkloadEndpoint="ip--172--31--28--215-k8s-goldmane--666569f655--vhxxc-" Jan 21 01:02:48.163388 containerd[1962]: 2026-01-21 01:02:47.333 [INFO][4663] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="13daf896e78d8de5908aba99f3a29d254ea275483534de91be3807e137f809cc" Namespace="calico-system" Pod="goldmane-666569f655-vhxxc" WorkloadEndpoint="ip--172--31--28--215-k8s-goldmane--666569f655--vhxxc-eth0" Jan 21 01:02:48.163388 containerd[1962]: 2026-01-21 01:02:47.646 [INFO][4712] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="13daf896e78d8de5908aba99f3a29d254ea275483534de91be3807e137f809cc" HandleID="k8s-pod-network.13daf896e78d8de5908aba99f3a29d254ea275483534de91be3807e137f809cc" Workload="ip--172--31--28--215-k8s-goldmane--666569f655--vhxxc-eth0" Jan 21 01:02:48.163774 containerd[1962]: 2026-01-21 01:02:47.651 [INFO][4712] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="13daf896e78d8de5908aba99f3a29d254ea275483534de91be3807e137f809cc" HandleID="k8s-pod-network.13daf896e78d8de5908aba99f3a29d254ea275483534de91be3807e137f809cc" Workload="ip--172--31--28--215-k8s-goldmane--666569f655--vhxxc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e850), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-28-215", "pod":"goldmane-666569f655-vhxxc", "timestamp":"2026-01-21 01:02:47.64647964 +0000 UTC"}, Hostname:"ip-172-31-28-215", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 21 01:02:48.163774 containerd[1962]: 2026-01-21 01:02:47.651 [INFO][4712] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 21 01:02:48.163774 containerd[1962]: 2026-01-21 01:02:47.914 [INFO][4712] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 21 01:02:48.163774 containerd[1962]: 2026-01-21 01:02:47.916 [INFO][4712] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-28-215' Jan 21 01:02:48.163774 containerd[1962]: 2026-01-21 01:02:47.934 [INFO][4712] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.13daf896e78d8de5908aba99f3a29d254ea275483534de91be3807e137f809cc" host="ip-172-31-28-215" Jan 21 01:02:48.163774 containerd[1962]: 2026-01-21 01:02:47.958 [INFO][4712] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-28-215" Jan 21 01:02:48.163774 containerd[1962]: 2026-01-21 01:02:47.985 [INFO][4712] ipam/ipam.go 511: Trying affinity for 192.168.122.64/26 host="ip-172-31-28-215" Jan 21 01:02:48.163774 containerd[1962]: 2026-01-21 01:02:48.005 [INFO][4712] ipam/ipam.go 158: Attempting to load block cidr=192.168.122.64/26 host="ip-172-31-28-215" Jan 21 01:02:48.163774 containerd[1962]: 2026-01-21 01:02:48.013 [INFO][4712] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.122.64/26 host="ip-172-31-28-215" Jan 21 01:02:48.164256 containerd[1962]: 2026-01-21 01:02:48.014 [INFO][4712] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.122.64/26 handle="k8s-pod-network.13daf896e78d8de5908aba99f3a29d254ea275483534de91be3807e137f809cc" host="ip-172-31-28-215" Jan 21 01:02:48.164256 containerd[1962]: 2026-01-21 01:02:48.026 [INFO][4712] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.13daf896e78d8de5908aba99f3a29d254ea275483534de91be3807e137f809cc Jan 21 01:02:48.164256 containerd[1962]: 2026-01-21 01:02:48.035 [INFO][4712] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.122.64/26 handle="k8s-pod-network.13daf896e78d8de5908aba99f3a29d254ea275483534de91be3807e137f809cc" host="ip-172-31-28-215" Jan 21 01:02:48.164256 containerd[1962]: 2026-01-21 01:02:48.049 [INFO][4712] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.122.67/26] block=192.168.122.64/26 handle="k8s-pod-network.13daf896e78d8de5908aba99f3a29d254ea275483534de91be3807e137f809cc" host="ip-172-31-28-215" Jan 21 01:02:48.164256 containerd[1962]: 2026-01-21 01:02:48.049 [INFO][4712] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.122.67/26] handle="k8s-pod-network.13daf896e78d8de5908aba99f3a29d254ea275483534de91be3807e137f809cc" host="ip-172-31-28-215" Jan 21 01:02:48.164256 containerd[1962]: 2026-01-21 01:02:48.049 [INFO][4712] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 21 01:02:48.164256 containerd[1962]: 2026-01-21 01:02:48.049 [INFO][4712] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.122.67/26] IPv6=[] ContainerID="13daf896e78d8de5908aba99f3a29d254ea275483534de91be3807e137f809cc" HandleID="k8s-pod-network.13daf896e78d8de5908aba99f3a29d254ea275483534de91be3807e137f809cc" Workload="ip--172--31--28--215-k8s-goldmane--666569f655--vhxxc-eth0" Jan 21 01:02:48.164529 containerd[1962]: 2026-01-21 01:02:48.061 [INFO][4663] cni-plugin/k8s.go 418: Populated endpoint ContainerID="13daf896e78d8de5908aba99f3a29d254ea275483534de91be3807e137f809cc" Namespace="calico-system" Pod="goldmane-666569f655-vhxxc" WorkloadEndpoint="ip--172--31--28--215-k8s-goldmane--666569f655--vhxxc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--215-k8s-goldmane--666569f655--vhxxc-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"b0fd67ff-2b5d-470f-b242-daa718038f97", ResourceVersion:"873", Generation:0, CreationTimestamp:time.Date(2026, time.January, 21, 1, 2, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-215", ContainerID:"", Pod:"goldmane-666569f655-vhxxc", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.122.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali2a8f735ee4f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 21 01:02:48.164529 containerd[1962]: 2026-01-21 01:02:48.061 [INFO][4663] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.122.67/32] ContainerID="13daf896e78d8de5908aba99f3a29d254ea275483534de91be3807e137f809cc" Namespace="calico-system" Pod="goldmane-666569f655-vhxxc" WorkloadEndpoint="ip--172--31--28--215-k8s-goldmane--666569f655--vhxxc-eth0" Jan 21 01:02:48.164677 containerd[1962]: 2026-01-21 01:02:48.061 [INFO][4663] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2a8f735ee4f ContainerID="13daf896e78d8de5908aba99f3a29d254ea275483534de91be3807e137f809cc" Namespace="calico-system" Pod="goldmane-666569f655-vhxxc" WorkloadEndpoint="ip--172--31--28--215-k8s-goldmane--666569f655--vhxxc-eth0" Jan 21 01:02:48.164677 containerd[1962]: 2026-01-21 01:02:48.081 [INFO][4663] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="13daf896e78d8de5908aba99f3a29d254ea275483534de91be3807e137f809cc" Namespace="calico-system" Pod="goldmane-666569f655-vhxxc" WorkloadEndpoint="ip--172--31--28--215-k8s-goldmane--666569f655--vhxxc-eth0" Jan 21 01:02:48.164755 containerd[1962]: 2026-01-21 01:02:48.096 [INFO][4663] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="13daf896e78d8de5908aba99f3a29d254ea275483534de91be3807e137f809cc" Namespace="calico-system" Pod="goldmane-666569f655-vhxxc" 
WorkloadEndpoint="ip--172--31--28--215-k8s-goldmane--666569f655--vhxxc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--215-k8s-goldmane--666569f655--vhxxc-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"b0fd67ff-2b5d-470f-b242-daa718038f97", ResourceVersion:"873", Generation:0, CreationTimestamp:time.Date(2026, time.January, 21, 1, 2, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-215", ContainerID:"13daf896e78d8de5908aba99f3a29d254ea275483534de91be3807e137f809cc", Pod:"goldmane-666569f655-vhxxc", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.122.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali2a8f735ee4f", MAC:"16:e7:a7:e4:39:7e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 21 01:02:48.164849 containerd[1962]: 2026-01-21 01:02:48.134 [INFO][4663] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="13daf896e78d8de5908aba99f3a29d254ea275483534de91be3807e137f809cc" Namespace="calico-system" Pod="goldmane-666569f655-vhxxc" WorkloadEndpoint="ip--172--31--28--215-k8s-goldmane--666569f655--vhxxc-eth0" Jan 21 01:02:48.237708 containerd[1962]: time="2026-01-21T01:02:48.235611721Z" level=info msg="connecting to shim 44d1028e8a4dc88a12d9280ddb455d051ac97bd528dca7807970d577959f4124" address="unix:///run/containerd/s/1ac337705f219364047e5a24d89ea79f2d355ace6d29bcf06f99e08c24f774c0" namespace=k8s.io protocol=ttrpc version=3 Jan 21 01:02:48.245844 containerd[1962]: time="2026-01-21T01:02:48.245791264Z" level=info msg="connecting to shim 59cb94bb00e12978a89e8d572d9187cd950d290fa40d2708af96ef3f0ab55619" address="unix:///run/containerd/s/031527c98845a7f58562609f346c133540f9dcb0e9261863dc16bd98a705cc30" namespace=k8s.io protocol=ttrpc version=3 Jan 21 01:02:48.286790 containerd[1962]: time="2026-01-21T01:02:48.286737565Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-66b4f466fd-6gfxt,Uid:a6a90027-9f22-4c0c-9ffd-5b7564ee55c8,Namespace:calico-system,Attempt:0,}" Jan 21 01:02:48.317956 containerd[1962]: time="2026-01-21T01:02:48.317848558Z" level=info msg="connecting to shim 13daf896e78d8de5908aba99f3a29d254ea275483534de91be3807e137f809cc" address="unix:///run/containerd/s/75703605606c09531bd5dd5d1d1c93e05ea88c3246c79acb8cfca6b2b3a6b877" namespace=k8s.io protocol=ttrpc version=3 Jan 21 01:02:48.356165 systemd[1]: Started cri-containerd-44d1028e8a4dc88a12d9280ddb455d051ac97bd528dca7807970d577959f4124.scope - libcontainer container 44d1028e8a4dc88a12d9280ddb455d051ac97bd528dca7807970d577959f4124. 
Jan 21 01:02:48.402552 kernel: kauditd_printk_skb: 5 callbacks suppressed Jan 21 01:02:48.402691 kernel: audit: type=1334 audit(1768957368.397:597): prog-id=182 op=LOAD Jan 21 01:02:48.397000 audit: BPF prog-id=182 op=LOAD Jan 21 01:02:48.406511 kernel: audit: type=1334 audit(1768957368.398:598): prog-id=183 op=LOAD Jan 21 01:02:48.398000 audit: BPF prog-id=183 op=LOAD Jan 21 01:02:48.412846 kernel: audit: type=1300 audit(1768957368.398:598): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=4782 pid=4802 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:48.398000 audit[4802]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=4782 pid=4802 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:48.424387 kernel: audit: type=1327 audit(1768957368.398:598): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3434643130323865386134646338386131326439323830646462343535 Jan 21 01:02:48.398000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3434643130323865386134646338386131326439323830646462343535 Jan 21 01:02:48.398000 audit: BPF prog-id=183 op=UNLOAD Jan 21 01:02:48.432234 kernel: audit: type=1334 audit(1768957368.398:599): prog-id=183 op=UNLOAD Jan 21 01:02:48.398000 audit[4802]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4782 pid=4802 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:48.440273 kernel: audit: type=1300 audit(1768957368.398:599): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4782 pid=4802 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:48.398000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3434643130323865386134646338386131326439323830646462343535 Jan 21 01:02:48.451659 kernel: audit: type=1327 audit(1768957368.398:599): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3434643130323865386134646338386131326439323830646462343535 Jan 21 01:02:48.451802 kernel: audit: type=1334 audit(1768957368.423:600): prog-id=184 op=LOAD Jan 21 01:02:48.423000 audit: BPF prog-id=184 op=LOAD Jan 21 01:02:48.458012 kernel: audit: type=1300 audit(1768957368.423:600): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4782 pid=4802 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:48.423000 audit[4802]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4782 pid=4802 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:48.423000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3434643130323865386134646338386131326439323830646462343535 Jan 21 01:02:48.423000 audit: BPF prog-id=185 op=LOAD Jan 21 01:02:48.423000 audit[4802]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=4782 pid=4802 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:48.465595 kernel: audit: type=1327 audit(1768957368.423:600): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3434643130323865386134646338386131326439323830646462343535 Jan 21 01:02:48.423000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3434643130323865386134646338386131326439323830646462343535 Jan 21 01:02:48.423000 audit: BPF prog-id=185 op=UNLOAD Jan 21 01:02:48.423000 audit[4802]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4782 pid=4802 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:48.423000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3434643130323865386134646338386131326439323830646462343535 Jan 21 01:02:48.423000 audit: BPF prog-id=184 op=UNLOAD Jan 21 01:02:48.423000 audit[4802]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4782 pid=4802 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:48.423000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3434643130323865386134646338386131326439323830646462343535 Jan 21 01:02:48.423000 audit: BPF prog-id=186 op=LOAD Jan 21 01:02:48.423000 audit[4802]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=4782 pid=4802 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:48.423000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3434643130323865386134646338386131326439323830646462343535 Jan 21 01:02:48.468557 systemd[1]: Started cri-containerd-59cb94bb00e12978a89e8d572d9187cd950d290fa40d2708af96ef3f0ab55619.scope - libcontainer container 59cb94bb00e12978a89e8d572d9187cd950d290fa40d2708af96ef3f0ab55619. Jan 21 01:02:48.540000 audit: BPF prog-id=187 op=LOAD Jan 21 01:02:48.545000 audit: BPF prog-id=188 op=LOAD Jan 21 01:02:48.545000 audit[4812]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=4788 pid=4812 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:48.545000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3539636239346262303065313239373861383965386435373264393138 Jan 21 01:02:48.545000 audit: BPF prog-id=188 op=UNLOAD Jan 21 01:02:48.545000 audit[4812]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4788 pid=4812 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:48.545000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3539636239346262303065313239373861383965386435373264393138 Jan 21 01:02:48.546000 audit: BPF prog-id=189 op=LOAD Jan 21 01:02:48.546000 audit[4812]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4788 pid=4812 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:48.546000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3539636239346262303065313239373861383965386435373264393138 Jan 21 01:02:48.546000 audit: BPF prog-id=190 op=LOAD Jan 21 01:02:48.546000 audit[4812]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=4788 pid=4812 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:48.546000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3539636239346262303065313239373861383965386435373264393138 Jan 21 01:02:48.547000 audit: BPF prog-id=190 op=UNLOAD Jan 21 01:02:48.547000 audit[4812]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4788 pid=4812 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:48.547000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3539636239346262303065313239373861383965386435373264393138 Jan 21 01:02:48.547000 audit: BPF prog-id=189 op=UNLOAD Jan 21 01:02:48.547000 audit[4812]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4788 pid=4812 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:48.547000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3539636239346262303065313239373861383965386435373264393138 Jan 21 01:02:48.547000 audit: BPF prog-id=191 op=LOAD Jan 21 01:02:48.547000 audit[4812]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=4788 pid=4812 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:48.547000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3539636239346262303065313239373861383965386435373264393138 Jan 21 01:02:48.576509 systemd[1]: Started cri-containerd-13daf896e78d8de5908aba99f3a29d254ea275483534de91be3807e137f809cc.scope - libcontainer container 13daf896e78d8de5908aba99f3a29d254ea275483534de91be3807e137f809cc. 
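The audit records interleaved above are produced while runc sets up each container scope: arch=c000003e is x86-64, where syscall 321 is bpf(2), so every "BPF prog-id=N op=LOAD" / "op=UNLOAD" pair corresponds to runc loading a program and then closing its file descriptor (syscall 3 is close(2)). The PROCTITLE field is the process command line, hex-encoded because its arguments are NUL-separated, and cut off at the audit subsystem's length limit. A minimal Go sketch that decodes the value from the record at audit(1768957368.398:598) above; the constant is copied from the log, everything else is illustrative:

```go
package main

import (
	"encoding/hex"
	"fmt"
	"strings"
)

// proctitle is the hex PROCTITLE value from audit(1768957368.398:598) above:
// /proc/<pid>/cmdline of the runc invocation, NUL-separated and truncated.
const proctitle = "72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3434643130323865386134646338386131326439323830646462343535"

func main() {
	raw, err := hex.DecodeString(proctitle)
	if err != nil {
		panic(err)
	}
	// Split on NUL to recover the individual argv entries.
	args := strings.Split(string(raw), "\x00")
	fmt.Println(strings.Join(args, " "))
	// Output (last argument truncated by the audit record):
	// runc --root /run/containerd/runc/k8s.io --log /run/containerd/io.containerd.runtime.v2.task/k8s.io/44d1028e8a4dc88a12d9280ddb455
}
```

The decoded last argument is only a prefix of container ID 44d1028e8a4dc88a12d9280ddb455d051ac97bd528dca7807970d577959f4124: the sample decodes to exactly 128 bytes, which is where the proctitle record is cut off.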
Jan 21 01:02:48.661000 audit: BPF prog-id=192 op=LOAD Jan 21 01:02:48.662000 audit: BPF prog-id=193 op=LOAD Jan 21 01:02:48.662000 audit[4908]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001f4238 a2=98 a3=0 items=0 ppid=4842 pid=4908 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:48.662000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3133646166383936653738643864653539303861626139396633613239 Jan 21 01:02:48.662000 audit: BPF prog-id=193 op=UNLOAD Jan 21 01:02:48.662000 audit[4908]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4842 pid=4908 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:48.662000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3133646166383936653738643864653539303861626139396633613239 Jan 21 01:02:48.662000 audit: BPF prog-id=194 op=LOAD Jan 21 01:02:48.662000 audit[4908]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001f4488 a2=98 a3=0 items=0 ppid=4842 pid=4908 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:48.662000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3133646166383936653738643864653539303861626139396633613239 Jan 21 01:02:48.662000 audit: BPF prog-id=195 op=LOAD Jan 21 01:02:48.662000 audit[4908]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001f4218 a2=98 a3=0 items=0 ppid=4842 pid=4908 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:48.662000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3133646166383936653738643864653539303861626139396633613239 Jan 21 01:02:48.662000 audit: BPF prog-id=195 op=UNLOAD Jan 21 01:02:48.662000 audit[4908]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4842 pid=4908 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:48.662000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3133646166383936653738643864653539303861626139396633613239 Jan 21 01:02:48.662000 audit: BPF prog-id=194 op=UNLOAD Jan 21 01:02:48.662000 audit[4908]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4842 pid=4908 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:48.662000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3133646166383936653738643864653539303861626139396633613239 Jan 21 01:02:48.663000 audit: BPF prog-id=196 op=LOAD Jan 21 01:02:48.663000 audit[4908]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001f46e8 a2=98 a3=0 items=0 ppid=4842 pid=4908 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:48.663000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3133646166383936653738643864653539303861626139396633613239 Jan 21 01:02:48.703404 containerd[1962]: time="2026-01-21T01:02:48.703340499Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b9ccb8fff-hdchl,Uid:0aa53f42-4d9d-4bec-8857-7799d5876dce,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"44d1028e8a4dc88a12d9280ddb455d051ac97bd528dca7807970d577959f4124\"" Jan 21 01:02:48.721084 containerd[1962]: time="2026-01-21T01:02:48.720795502Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 21 01:02:48.754615 containerd[1962]: time="2026-01-21T01:02:48.754335138Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b9ccb8fff-8h9ht,Uid:bec76da0-2399-4e6c-952b-9dc7ad88a302,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"59cb94bb00e12978a89e8d572d9187cd950d290fa40d2708af96ef3f0ab55619\"" Jan 21 01:02:48.793895 systemd-networkd[1555]: cali5711bd92e25: Link UP Jan 21 01:02:48.806594 systemd-networkd[1555]: cali5711bd92e25: Gained carrier Jan 21 01:02:48.830551 containerd[1962]: time="2026-01-21T01:02:48.830505198Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-vhxxc,Uid:b0fd67ff-2b5d-470f-b242-daa718038f97,Namespace:calico-system,Attempt:0,} returns sandbox id \"13daf896e78d8de5908aba99f3a29d254ea275483534de91be3807e137f809cc\"" Jan 21 01:02:48.837624 containerd[1962]: 2026-01-21 01:02:48.288 [INFO][4759] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 21 01:02:48.837624 containerd[1962]: 2026-01-21 01:02:48.333 [INFO][4759] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--28--215-k8s-calico--kube--controllers--764d9fb657--cw7sh-eth0 calico-kube-controllers-764d9fb657- calico-system 5e4fb692-d42b-4062-b09a-d89e5a5a14cf 871 0 2026-01-21 01:02:18 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:764d9fb657 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-28-215 calico-kube-controllers-764d9fb657-cw7sh eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali5711bd92e25 [] [] }} 
ContainerID="002cbbc4dd1f6cb51f78a1f848b7d2c33db99323c41f7302303c0674e86d00f0" Namespace="calico-system" Pod="calico-kube-controllers-764d9fb657-cw7sh" WorkloadEndpoint="ip--172--31--28--215-k8s-calico--kube--controllers--764d9fb657--cw7sh-" Jan 21 01:02:48.837624 containerd[1962]: 2026-01-21 01:02:48.339 [INFO][4759] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="002cbbc4dd1f6cb51f78a1f848b7d2c33db99323c41f7302303c0674e86d00f0" Namespace="calico-system" Pod="calico-kube-controllers-764d9fb657-cw7sh" WorkloadEndpoint="ip--172--31--28--215-k8s-calico--kube--controllers--764d9fb657--cw7sh-eth0" Jan 21 01:02:48.837624 containerd[1962]: 2026-01-21 01:02:48.679 [INFO][4884] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="002cbbc4dd1f6cb51f78a1f848b7d2c33db99323c41f7302303c0674e86d00f0" HandleID="k8s-pod-network.002cbbc4dd1f6cb51f78a1f848b7d2c33db99323c41f7302303c0674e86d00f0" Workload="ip--172--31--28--215-k8s-calico--kube--controllers--764d9fb657--cw7sh-eth0" Jan 21 01:02:48.837980 containerd[1962]: 2026-01-21 01:02:48.680 [INFO][4884] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="002cbbc4dd1f6cb51f78a1f848b7d2c33db99323c41f7302303c0674e86d00f0" HandleID="k8s-pod-network.002cbbc4dd1f6cb51f78a1f848b7d2c33db99323c41f7302303c0674e86d00f0" Workload="ip--172--31--28--215-k8s-calico--kube--controllers--764d9fb657--cw7sh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00031a2c0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-28-215", "pod":"calico-kube-controllers-764d9fb657-cw7sh", "timestamp":"2026-01-21 01:02:48.6794502 +0000 UTC"}, Hostname:"ip-172-31-28-215", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 21 01:02:48.837980 containerd[1962]: 2026-01-21 01:02:48.680 [INFO][4884] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 21 01:02:48.837980 containerd[1962]: 2026-01-21 01:02:48.680 [INFO][4884] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 21 01:02:48.837980 containerd[1962]: 2026-01-21 01:02:48.680 [INFO][4884] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-28-215' Jan 21 01:02:48.837980 containerd[1962]: 2026-01-21 01:02:48.701 [INFO][4884] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.002cbbc4dd1f6cb51f78a1f848b7d2c33db99323c41f7302303c0674e86d00f0" host="ip-172-31-28-215" Jan 21 01:02:48.837980 containerd[1962]: 2026-01-21 01:02:48.714 [INFO][4884] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-28-215" Jan 21 01:02:48.837980 containerd[1962]: 2026-01-21 01:02:48.726 [INFO][4884] ipam/ipam.go 511: Trying affinity for 192.168.122.64/26 host="ip-172-31-28-215" Jan 21 01:02:48.837980 containerd[1962]: 2026-01-21 01:02:48.730 [INFO][4884] ipam/ipam.go 158: Attempting to load block cidr=192.168.122.64/26 host="ip-172-31-28-215" Jan 21 01:02:48.837980 containerd[1962]: 2026-01-21 01:02:48.735 [INFO][4884] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.122.64/26 host="ip-172-31-28-215" Jan 21 01:02:48.838677 containerd[1962]: 2026-01-21 01:02:48.735 [INFO][4884] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.122.64/26 handle="k8s-pod-network.002cbbc4dd1f6cb51f78a1f848b7d2c33db99323c41f7302303c0674e86d00f0" host="ip-172-31-28-215" Jan 21 01:02:48.838677 containerd[1962]: 2026-01-21 01:02:48.738 [INFO][4884] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.002cbbc4dd1f6cb51f78a1f848b7d2c33db99323c41f7302303c0674e86d00f0 Jan 21 01:02:48.838677 containerd[1962]: 2026-01-21 01:02:48.747 [INFO][4884] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.122.64/26 handle="k8s-pod-network.002cbbc4dd1f6cb51f78a1f848b7d2c33db99323c41f7302303c0674e86d00f0" host="ip-172-31-28-215" Jan 21 01:02:48.838677 containerd[1962]: 2026-01-21 01:02:48.773 [INFO][4884] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.122.68/26] block=192.168.122.64/26 handle="k8s-pod-network.002cbbc4dd1f6cb51f78a1f848b7d2c33db99323c41f7302303c0674e86d00f0" host="ip-172-31-28-215" Jan 21 01:02:48.838677 containerd[1962]: 2026-01-21 01:02:48.776 [INFO][4884] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.122.68/26] handle="k8s-pod-network.002cbbc4dd1f6cb51f78a1f848b7d2c33db99323c41f7302303c0674e86d00f0" host="ip-172-31-28-215" Jan 21 01:02:48.838677 containerd[1962]: 2026-01-21 01:02:48.776 [INFO][4884] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
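The IPAM sequence above shows how the pod address is chosen: node ip-172-31-28-215 holds an affinity for block 192.168.122.64/26, the block is loaded, one address (192.168.122.68) is claimed from it, and the resulting WorkloadEndpoint records that address as a single-host /32 IPNetwork, just like the goldmane endpoint earlier (192.168.122.67/32). A standard-library Go sketch of the arithmetic involved; the block and addresses are taken from this log, and nothing here uses Calico's own code:

```go
package main

import (
	"fmt"
	"net"
)

func main() {
	// The node's affine IPAM block from the log.
	_, block, err := net.ParseCIDR("192.168.122.64/26")
	if err != nil {
		panic(err)
	}
	ones, bits := block.Mask.Size()
	fmt.Printf("block %s holds 2^(%d-%d) = %d addresses\n",
		block, bits, ones, 1<<(bits-ones)) // a /26 holds 64 addresses

	// Addresses assigned to pods on this node in the log above.
	for _, s := range []string{"192.168.122.67", "192.168.122.68", "192.168.122.69", "192.168.122.70"} {
		fmt.Printf("%s in %s: %v\n", s, block, block.Contains(net.ParseIP(s)))
	}

	// Each WorkloadEndpoint stores its address as a /32 IPNetwork,
	// i.e. a network containing exactly one host.
	ip, single, _ := net.ParseCIDR("192.168.122.68/32")
	fmt.Println(ip, single) // 192.168.122.68 192.168.122.68/32
}
```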
Jan 21 01:02:48.838677 containerd[1962]: 2026-01-21 01:02:48.776 [INFO][4884] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.122.68/26] IPv6=[] ContainerID="002cbbc4dd1f6cb51f78a1f848b7d2c33db99323c41f7302303c0674e86d00f0" HandleID="k8s-pod-network.002cbbc4dd1f6cb51f78a1f848b7d2c33db99323c41f7302303c0674e86d00f0" Workload="ip--172--31--28--215-k8s-calico--kube--controllers--764d9fb657--cw7sh-eth0" Jan 21 01:02:48.839528 containerd[1962]: 2026-01-21 01:02:48.785 [INFO][4759] cni-plugin/k8s.go 418: Populated endpoint ContainerID="002cbbc4dd1f6cb51f78a1f848b7d2c33db99323c41f7302303c0674e86d00f0" Namespace="calico-system" Pod="calico-kube-controllers-764d9fb657-cw7sh" WorkloadEndpoint="ip--172--31--28--215-k8s-calico--kube--controllers--764d9fb657--cw7sh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--215-k8s-calico--kube--controllers--764d9fb657--cw7sh-eth0", GenerateName:"calico-kube-controllers-764d9fb657-", Namespace:"calico-system", SelfLink:"", UID:"5e4fb692-d42b-4062-b09a-d89e5a5a14cf", ResourceVersion:"871", Generation:0, CreationTimestamp:time.Date(2026, time.January, 21, 1, 2, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"764d9fb657", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-215", ContainerID:"", Pod:"calico-kube-controllers-764d9fb657-cw7sh", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.122.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali5711bd92e25", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 21 01:02:48.839651 containerd[1962]: 2026-01-21 01:02:48.786 [INFO][4759] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.122.68/32] ContainerID="002cbbc4dd1f6cb51f78a1f848b7d2c33db99323c41f7302303c0674e86d00f0" Namespace="calico-system" Pod="calico-kube-controllers-764d9fb657-cw7sh" WorkloadEndpoint="ip--172--31--28--215-k8s-calico--kube--controllers--764d9fb657--cw7sh-eth0" Jan 21 01:02:48.839651 containerd[1962]: 2026-01-21 01:02:48.786 [INFO][4759] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5711bd92e25 ContainerID="002cbbc4dd1f6cb51f78a1f848b7d2c33db99323c41f7302303c0674e86d00f0" Namespace="calico-system" Pod="calico-kube-controllers-764d9fb657-cw7sh" WorkloadEndpoint="ip--172--31--28--215-k8s-calico--kube--controllers--764d9fb657--cw7sh-eth0" Jan 21 01:02:48.839651 containerd[1962]: 2026-01-21 01:02:48.809 [INFO][4759] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="002cbbc4dd1f6cb51f78a1f848b7d2c33db99323c41f7302303c0674e86d00f0" Namespace="calico-system" Pod="calico-kube-controllers-764d9fb657-cw7sh" WorkloadEndpoint="ip--172--31--28--215-k8s-calico--kube--controllers--764d9fb657--cw7sh-eth0" Jan 21 01:02:48.839759 containerd[1962]: 
2026-01-21 01:02:48.810 [INFO][4759] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="002cbbc4dd1f6cb51f78a1f848b7d2c33db99323c41f7302303c0674e86d00f0" Namespace="calico-system" Pod="calico-kube-controllers-764d9fb657-cw7sh" WorkloadEndpoint="ip--172--31--28--215-k8s-calico--kube--controllers--764d9fb657--cw7sh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--215-k8s-calico--kube--controllers--764d9fb657--cw7sh-eth0", GenerateName:"calico-kube-controllers-764d9fb657-", Namespace:"calico-system", SelfLink:"", UID:"5e4fb692-d42b-4062-b09a-d89e5a5a14cf", ResourceVersion:"871", Generation:0, CreationTimestamp:time.Date(2026, time.January, 21, 1, 2, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"764d9fb657", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-215", ContainerID:"002cbbc4dd1f6cb51f78a1f848b7d2c33db99323c41f7302303c0674e86d00f0", Pod:"calico-kube-controllers-764d9fb657-cw7sh", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.122.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali5711bd92e25", MAC:"7a:79:70:cd:c2:a7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 21 01:02:48.839855 containerd[1962]: 2026-01-21 01:02:48.834 [INFO][4759] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="002cbbc4dd1f6cb51f78a1f848b7d2c33db99323c41f7302303c0674e86d00f0" Namespace="calico-system" Pod="calico-kube-controllers-764d9fb657-cw7sh" WorkloadEndpoint="ip--172--31--28--215-k8s-calico--kube--controllers--764d9fb657--cw7sh-eth0" Jan 21 01:02:48.943542 containerd[1962]: time="2026-01-21T01:02:48.943482380Z" level=info msg="connecting to shim 002cbbc4dd1f6cb51f78a1f848b7d2c33db99323c41f7302303c0674e86d00f0" address="unix:///run/containerd/s/057192f20822601d7adef863f34af0aee1f4da6685122ea0825efd31d170b3cd" namespace=k8s.io protocol=ttrpc version=3 Jan 21 01:02:48.988842 containerd[1962]: time="2026-01-21T01:02:48.988657458Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 21 01:02:48.991520 containerd[1962]: time="2026-01-21T01:02:48.991431101Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 21 01:02:49.016849 containerd[1962]: time="2026-01-21T01:02:48.991762057Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 21 01:02:49.026423 kubelet[3468]: E0121 01:02:49.026248 3468 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull 
and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 21 01:02:49.032271 kubelet[3468]: E0121 01:02:49.032008 3468 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 21 01:02:49.033673 systemd[1]: Started cri-containerd-002cbbc4dd1f6cb51f78a1f848b7d2c33db99323c41f7302303c0674e86d00f0.scope - libcontainer container 002cbbc4dd1f6cb51f78a1f848b7d2c33db99323c41f7302303c0674e86d00f0. Jan 21 01:02:49.036266 containerd[1962]: time="2026-01-21T01:02:49.035580419Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 21 01:02:49.053729 kubelet[3468]: E0121 01:02:49.052249 3468 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-57wl7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6b9ccb8fff-hdchl_calico-apiserver(0aa53f42-4d9d-4bec-8857-7799d5876dce): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 21 01:02:49.059255 kubelet[3468]: E0121 01:02:49.057464 3468 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b9ccb8fff-hdchl" podUID="0aa53f42-4d9d-4bec-8857-7799d5876dce" Jan 21 01:02:49.074375 systemd-networkd[1555]: cali8bbdd3edcdb: Link UP Jan 21 01:02:49.078395 systemd-networkd[1555]: cali8bbdd3edcdb: Gained carrier Jan 21 01:02:49.088000 audit: BPF prog-id=197 op=LOAD Jan 21 01:02:49.089000 audit: BPF prog-id=198 op=LOAD Jan 21 01:02:49.089000 audit[5039]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=5027 pid=5039 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:49.089000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3030326362626334646431663663623531663738613166383438623764 Jan 21 01:02:49.090000 audit: BPF prog-id=198 op=UNLOAD Jan 21 01:02:49.090000 audit[5039]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=5027 pid=5039 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:49.090000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3030326362626334646431663663623531663738613166383438623764 Jan 21 01:02:49.090000 audit: BPF prog-id=199 op=LOAD Jan 21 01:02:49.090000 audit[5039]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=5027 pid=5039 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:49.090000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3030326362626334646431663663623531663738613166383438623764 Jan 21 01:02:49.091000 audit: BPF prog-id=200 op=LOAD Jan 21 01:02:49.091000 audit[5039]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=5027 pid=5039 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:49.091000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3030326362626334646431663663623531663738613166383438623764 Jan 21 01:02:49.091000 audit: BPF prog-id=200 op=UNLOAD Jan 21 01:02:49.091000 audit[5039]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=5027 pid=5039 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:49.091000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3030326362626334646431663663623531663738613166383438623764 Jan 21 01:02:49.091000 audit: BPF prog-id=199 op=UNLOAD Jan 21 01:02:49.091000 audit[5039]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=5027 pid=5039 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:49.091000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3030326362626334646431663663623531663738613166383438623764 Jan 21 01:02:49.091000 audit: BPF prog-id=201 op=LOAD Jan 21 01:02:49.091000 audit[5039]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=5027 pid=5039 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:49.091000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3030326362626334646431663663623531663738613166383438623764 Jan 21 01:02:49.154699 containerd[1962]: time="2026-01-21T01:02:49.154474810Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-m6nqt,Uid:8dde797d-8111-4b2e-a7ed-5cc610d0e8e0,Namespace:kube-system,Attempt:0,}" Jan 21 01:02:49.154936 containerd[1962]: 2026-01-21 01:02:48.624 [INFO][4854] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 21 01:02:49.154936 containerd[1962]: 2026-01-21 01:02:48.692 [INFO][4854] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--28--215-k8s-whisker--66b4f466fd--6gfxt-eth0 whisker-66b4f466fd- calico-system a6a90027-9f22-4c0c-9ffd-5b7564ee55c8 965 0 2026-01-21 01:02:47 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:66b4f466fd projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ip-172-31-28-215 whisker-66b4f466fd-6gfxt eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali8bbdd3edcdb [] [] }} ContainerID="bb07c3a7102c0e6fe1c0fe0f957fa34f8b7a88e82359c2bfb2aa753071788d2a" Namespace="calico-system" Pod="whisker-66b4f466fd-6gfxt" WorkloadEndpoint="ip--172--31--28--215-k8s-whisker--66b4f466fd--6gfxt-" Jan 21 01:02:49.154936 containerd[1962]: 2026-01-21 01:02:48.693 [INFO][4854] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="bb07c3a7102c0e6fe1c0fe0f957fa34f8b7a88e82359c2bfb2aa753071788d2a" Namespace="calico-system" Pod="whisker-66b4f466fd-6gfxt" WorkloadEndpoint="ip--172--31--28--215-k8s-whisker--66b4f466fd--6gfxt-eth0" Jan 21 01:02:49.154936 containerd[1962]: 2026-01-21 01:02:48.905 [INFO][4989] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="bb07c3a7102c0e6fe1c0fe0f957fa34f8b7a88e82359c2bfb2aa753071788d2a" HandleID="k8s-pod-network.bb07c3a7102c0e6fe1c0fe0f957fa34f8b7a88e82359c2bfb2aa753071788d2a" Workload="ip--172--31--28--215-k8s-whisker--66b4f466fd--6gfxt-eth0" Jan 21 01:02:49.155193 containerd[1962]: 2026-01-21 01:02:48.907 [INFO][4989] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="bb07c3a7102c0e6fe1c0fe0f957fa34f8b7a88e82359c2bfb2aa753071788d2a" HandleID="k8s-pod-network.bb07c3a7102c0e6fe1c0fe0f957fa34f8b7a88e82359c2bfb2aa753071788d2a" Workload="ip--172--31--28--215-k8s-whisker--66b4f466fd--6gfxt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000387c60), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-28-215", "pod":"whisker-66b4f466fd-6gfxt", "timestamp":"2026-01-21 01:02:48.905517672 +0000 UTC"}, Hostname:"ip-172-31-28-215", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 21 01:02:49.155193 containerd[1962]: 2026-01-21 01:02:48.907 [INFO][4989] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 21 01:02:49.155193 containerd[1962]: 2026-01-21 01:02:48.907 [INFO][4989] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 21 01:02:49.155193 containerd[1962]: 2026-01-21 01:02:48.907 [INFO][4989] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-28-215' Jan 21 01:02:49.155193 containerd[1962]: 2026-01-21 01:02:48.922 [INFO][4989] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.bb07c3a7102c0e6fe1c0fe0f957fa34f8b7a88e82359c2bfb2aa753071788d2a" host="ip-172-31-28-215" Jan 21 01:02:49.155193 containerd[1962]: 2026-01-21 01:02:48.942 [INFO][4989] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-28-215" Jan 21 01:02:49.155193 containerd[1962]: 2026-01-21 01:02:48.963 [INFO][4989] ipam/ipam.go 511: Trying affinity for 192.168.122.64/26 host="ip-172-31-28-215" Jan 21 01:02:49.155193 containerd[1962]: 2026-01-21 01:02:48.971 [INFO][4989] ipam/ipam.go 158: Attempting to load block cidr=192.168.122.64/26 host="ip-172-31-28-215" Jan 21 01:02:49.155193 containerd[1962]: 2026-01-21 01:02:48.977 [INFO][4989] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.122.64/26 host="ip-172-31-28-215" Jan 21 01:02:49.155663 containerd[1962]: 2026-01-21 01:02:48.977 [INFO][4989] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.122.64/26 handle="k8s-pod-network.bb07c3a7102c0e6fe1c0fe0f957fa34f8b7a88e82359c2bfb2aa753071788d2a" host="ip-172-31-28-215" Jan 21 01:02:49.155663 containerd[1962]: 2026-01-21 01:02:48.981 [INFO][4989] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.bb07c3a7102c0e6fe1c0fe0f957fa34f8b7a88e82359c2bfb2aa753071788d2a Jan 21 01:02:49.155663 containerd[1962]: 2026-01-21 01:02:48.995 [INFO][4989] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.122.64/26 handle="k8s-pod-network.bb07c3a7102c0e6fe1c0fe0f957fa34f8b7a88e82359c2bfb2aa753071788d2a" host="ip-172-31-28-215" Jan 21 01:02:49.155663 containerd[1962]: 2026-01-21 01:02:49.028 [INFO][4989] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.122.69/26] block=192.168.122.64/26 handle="k8s-pod-network.bb07c3a7102c0e6fe1c0fe0f957fa34f8b7a88e82359c2bfb2aa753071788d2a" host="ip-172-31-28-215" Jan 21 01:02:49.155663 containerd[1962]: 2026-01-21 01:02:49.031 [INFO][4989] 
ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.122.69/26] handle="k8s-pod-network.bb07c3a7102c0e6fe1c0fe0f957fa34f8b7a88e82359c2bfb2aa753071788d2a" host="ip-172-31-28-215" Jan 21 01:02:49.155663 containerd[1962]: 2026-01-21 01:02:49.032 [INFO][4989] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 21 01:02:49.155663 containerd[1962]: 2026-01-21 01:02:49.032 [INFO][4989] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.122.69/26] IPv6=[] ContainerID="bb07c3a7102c0e6fe1c0fe0f957fa34f8b7a88e82359c2bfb2aa753071788d2a" HandleID="k8s-pod-network.bb07c3a7102c0e6fe1c0fe0f957fa34f8b7a88e82359c2bfb2aa753071788d2a" Workload="ip--172--31--28--215-k8s-whisker--66b4f466fd--6gfxt-eth0" Jan 21 01:02:49.155949 containerd[1962]: 2026-01-21 01:02:49.056 [INFO][4854] cni-plugin/k8s.go 418: Populated endpoint ContainerID="bb07c3a7102c0e6fe1c0fe0f957fa34f8b7a88e82359c2bfb2aa753071788d2a" Namespace="calico-system" Pod="whisker-66b4f466fd-6gfxt" WorkloadEndpoint="ip--172--31--28--215-k8s-whisker--66b4f466fd--6gfxt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--215-k8s-whisker--66b4f466fd--6gfxt-eth0", GenerateName:"whisker-66b4f466fd-", Namespace:"calico-system", SelfLink:"", UID:"a6a90027-9f22-4c0c-9ffd-5b7564ee55c8", ResourceVersion:"965", Generation:0, CreationTimestamp:time.Date(2026, time.January, 21, 1, 2, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"66b4f466fd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-215", ContainerID:"", Pod:"whisker-66b4f466fd-6gfxt", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.122.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali8bbdd3edcdb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 21 01:02:49.155949 containerd[1962]: 2026-01-21 01:02:49.056 [INFO][4854] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.122.69/32] ContainerID="bb07c3a7102c0e6fe1c0fe0f957fa34f8b7a88e82359c2bfb2aa753071788d2a" Namespace="calico-system" Pod="whisker-66b4f466fd-6gfxt" WorkloadEndpoint="ip--172--31--28--215-k8s-whisker--66b4f466fd--6gfxt-eth0" Jan 21 01:02:49.156095 containerd[1962]: 2026-01-21 01:02:49.056 [INFO][4854] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8bbdd3edcdb ContainerID="bb07c3a7102c0e6fe1c0fe0f957fa34f8b7a88e82359c2bfb2aa753071788d2a" Namespace="calico-system" Pod="whisker-66b4f466fd-6gfxt" WorkloadEndpoint="ip--172--31--28--215-k8s-whisker--66b4f466fd--6gfxt-eth0" Jan 21 01:02:49.156095 containerd[1962]: 2026-01-21 01:02:49.090 [INFO][4854] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bb07c3a7102c0e6fe1c0fe0f957fa34f8b7a88e82359c2bfb2aa753071788d2a" Namespace="calico-system" Pod="whisker-66b4f466fd-6gfxt" WorkloadEndpoint="ip--172--31--28--215-k8s-whisker--66b4f466fd--6gfxt-eth0" Jan 21 
01:02:49.156180 containerd[1962]: 2026-01-21 01:02:49.094 [INFO][4854] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="bb07c3a7102c0e6fe1c0fe0f957fa34f8b7a88e82359c2bfb2aa753071788d2a" Namespace="calico-system" Pod="whisker-66b4f466fd-6gfxt" WorkloadEndpoint="ip--172--31--28--215-k8s-whisker--66b4f466fd--6gfxt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--215-k8s-whisker--66b4f466fd--6gfxt-eth0", GenerateName:"whisker-66b4f466fd-", Namespace:"calico-system", SelfLink:"", UID:"a6a90027-9f22-4c0c-9ffd-5b7564ee55c8", ResourceVersion:"965", Generation:0, CreationTimestamp:time.Date(2026, time.January, 21, 1, 2, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"66b4f466fd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-215", ContainerID:"bb07c3a7102c0e6fe1c0fe0f957fa34f8b7a88e82359c2bfb2aa753071788d2a", Pod:"whisker-66b4f466fd-6gfxt", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.122.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali8bbdd3edcdb", MAC:"b6:77:b8:e5:5d:5b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 21 01:02:49.156573 containerd[1962]: 2026-01-21 01:02:49.131 [INFO][4854] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="bb07c3a7102c0e6fe1c0fe0f957fa34f8b7a88e82359c2bfb2aa753071788d2a" Namespace="calico-system" Pod="whisker-66b4f466fd-6gfxt" WorkloadEndpoint="ip--172--31--28--215-k8s-whisker--66b4f466fd--6gfxt-eth0" Jan 21 01:02:49.299269 containerd[1962]: time="2026-01-21T01:02:49.298373305Z" level=info msg="connecting to shim bb07c3a7102c0e6fe1c0fe0f957fa34f8b7a88e82359c2bfb2aa753071788d2a" address="unix:///run/containerd/s/40e3372b5e5be5a16918d4fcd4589a0f4311fb92ae1559f10b29892e0d726817" namespace=k8s.io protocol=ttrpc version=3 Jan 21 01:02:49.303434 containerd[1962]: time="2026-01-21T01:02:49.303272832Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-764d9fb657-cw7sh,Uid:5e4fb692-d42b-4062-b09a-d89e5a5a14cf,Namespace:calico-system,Attempt:0,} returns sandbox id \"002cbbc4dd1f6cb51f78a1f848b7d2c33db99323c41f7302303c0674e86d00f0\"" Jan 21 01:02:49.388762 containerd[1962]: time="2026-01-21T01:02:49.388635634Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 21 01:02:49.394040 containerd[1962]: time="2026-01-21T01:02:49.393863324Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 21 01:02:49.394314 containerd[1962]: time="2026-01-21T01:02:49.394027208Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, 
bytes read=0" Jan 21 01:02:49.395874 kubelet[3468]: E0121 01:02:49.395491 3468 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 21 01:02:49.395874 kubelet[3468]: E0121 01:02:49.395801 3468 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 21 01:02:49.396527 kubelet[3468]: E0121 01:02:49.396445 3468 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-98vll,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6b9ccb8fff-8h9ht_calico-apiserver(bec76da0-2399-4e6c-952b-9dc7ad88a302): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 21 01:02:49.396782 systemd[1]: Started cri-containerd-bb07c3a7102c0e6fe1c0fe0f957fa34f8b7a88e82359c2bfb2aa753071788d2a.scope - libcontainer container bb07c3a7102c0e6fe1c0fe0f957fa34f8b7a88e82359c2bfb2aa753071788d2a. 
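The repeated ErrImagePull above comes from containerd's resolver receiving a 404 from ghcr.io for ghcr.io/flatcar/calico/apiserver:v3.30.4 ("fetch failed after status: 404 Not Found"). One way to reproduce that check outside the kubelet is to query the registry's v2 manifest endpoint directly. The sketch below assumes ghcr.io issues anonymous pull tokens from its /token endpoint; the token URL, query parameters, and JSON field name follow the common Docker Registry token flow and are assumptions, not something taken from this log. A 404 on the manifest request matches the "not found" errors logged here:

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

func main() {
	repo, tag := "flatcar/calico/apiserver", "v3.30.4"

	// Assumed anonymous token flow for ghcr.io (standard registry token dance).
	tokURL := fmt.Sprintf("https://ghcr.io/token?service=ghcr.io&scope=repository:%s:pull", repo)
	resp, err := http.Get(tokURL)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	var tok struct {
		Token string `json:"token"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&tok); err != nil {
		panic(err)
	}

	// HEAD the manifest: 200 means the tag exists; 404 matches the
	// "failed to resolve image ... not found" errors in the log above.
	req, _ := http.NewRequest(http.MethodHead,
		fmt.Sprintf("https://ghcr.io/v2/%s/manifests/%s", repo, tag), nil)
	req.Header.Set("Authorization", "Bearer "+tok.Token)
	req.Header.Set("Accept", "application/vnd.oci.image.index.v1+json")
	req.Header.Add("Accept", "application/vnd.docker.distribution.manifest.list.v2+json")

	res, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer res.Body.Close()
	fmt.Println(res.Status) // expect "404 Not Found" for a missing tag
}
```

Running the equivalent pull on the node itself (for example with crictl pull or ctr -n k8s.io images pull) would surface the same 404 from ghcr.io that containerd reports in these entries.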
Jan 21 01:02:49.397851 containerd[1962]: time="2026-01-21T01:02:49.397409927Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 21 01:02:49.399399 kubelet[3468]: E0121 01:02:49.399338 3468 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b9ccb8fff-8h9ht" podUID="bec76da0-2399-4e6c-952b-9dc7ad88a302" Jan 21 01:02:49.495000 audit: BPF prog-id=202 op=LOAD Jan 21 01:02:49.497000 audit: BPF prog-id=203 op=LOAD Jan 21 01:02:49.497000 audit[5100]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0238 a2=98 a3=0 items=0 ppid=5089 pid=5100 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:49.497000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6262303763336137313032633065366665316330666530663935376661 Jan 21 01:02:49.497000 audit: BPF prog-id=203 op=UNLOAD Jan 21 01:02:49.497000 audit[5100]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5089 pid=5100 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:49.497000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6262303763336137313032633065366665316330666530663935376661 Jan 21 01:02:49.498000 audit: BPF prog-id=204 op=LOAD Jan 21 01:02:49.498000 audit[5100]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0488 a2=98 a3=0 items=0 ppid=5089 pid=5100 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:49.498000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6262303763336137313032633065366665316330666530663935376661 Jan 21 01:02:49.498000 audit: BPF prog-id=205 op=LOAD Jan 21 01:02:49.498000 audit[5100]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001b0218 a2=98 a3=0 items=0 ppid=5089 pid=5100 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:49.498000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6262303763336137313032633065366665316330666530663935376661 Jan 21 01:02:49.498000 audit: BPF prog-id=205 op=UNLOAD Jan 21 01:02:49.498000 audit[5100]: SYSCALL arch=c000003e syscall=3 
success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5089 pid=5100 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:49.498000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6262303763336137313032633065366665316330666530663935376661 Jan 21 01:02:49.498000 audit: BPF prog-id=204 op=UNLOAD Jan 21 01:02:49.498000 audit[5100]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5089 pid=5100 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:49.498000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6262303763336137313032633065366665316330666530663935376661 Jan 21 01:02:49.498000 audit: BPF prog-id=206 op=LOAD Jan 21 01:02:49.498000 audit[5100]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b06e8 a2=98 a3=0 items=0 ppid=5089 pid=5100 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:49.498000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6262303763336137313032633065366665316330666530663935376661 Jan 21 01:02:49.538417 systemd-networkd[1555]: caliccafe319875: Gained IPv6LL Jan 21 01:02:49.575417 systemd-networkd[1555]: calif3d68c2ddc3: Link UP Jan 21 01:02:49.577497 systemd-networkd[1555]: calif3d68c2ddc3: Gained carrier Jan 21 01:02:49.646547 containerd[1962]: 2026-01-21 01:02:49.351 [INFO][5064] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 21 01:02:49.646547 containerd[1962]: 2026-01-21 01:02:49.398 [INFO][5064] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--28--215-k8s-coredns--668d6bf9bc--m6nqt-eth0 coredns-668d6bf9bc- kube-system 8dde797d-8111-4b2e-a7ed-5cc610d0e8e0 869 0 2026-01-21 01:01:31 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-28-215 coredns-668d6bf9bc-m6nqt eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calif3d68c2ddc3 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="165abae88ca49708b01858fd70e14a8b25d6b19db4788074d7adea04972bfbfa" Namespace="kube-system" Pod="coredns-668d6bf9bc-m6nqt" WorkloadEndpoint="ip--172--31--28--215-k8s-coredns--668d6bf9bc--m6nqt-" Jan 21 01:02:49.646547 containerd[1962]: 2026-01-21 01:02:49.398 [INFO][5064] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="165abae88ca49708b01858fd70e14a8b25d6b19db4788074d7adea04972bfbfa" Namespace="kube-system" Pod="coredns-668d6bf9bc-m6nqt" WorkloadEndpoint="ip--172--31--28--215-k8s-coredns--668d6bf9bc--m6nqt-eth0" Jan 
21 01:02:49.646547 containerd[1962]: 2026-01-21 01:02:49.479 [INFO][5122] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="165abae88ca49708b01858fd70e14a8b25d6b19db4788074d7adea04972bfbfa" HandleID="k8s-pod-network.165abae88ca49708b01858fd70e14a8b25d6b19db4788074d7adea04972bfbfa" Workload="ip--172--31--28--215-k8s-coredns--668d6bf9bc--m6nqt-eth0" Jan 21 01:02:49.648870 containerd[1962]: 2026-01-21 01:02:49.480 [INFO][5122] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="165abae88ca49708b01858fd70e14a8b25d6b19db4788074d7adea04972bfbfa" HandleID="k8s-pod-network.165abae88ca49708b01858fd70e14a8b25d6b19db4788074d7adea04972bfbfa" Workload="ip--172--31--28--215-k8s-coredns--668d6bf9bc--m6nqt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000316c80), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-28-215", "pod":"coredns-668d6bf9bc-m6nqt", "timestamp":"2026-01-21 01:02:49.479825993 +0000 UTC"}, Hostname:"ip-172-31-28-215", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 21 01:02:49.648870 containerd[1962]: 2026-01-21 01:02:49.480 [INFO][5122] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 21 01:02:49.648870 containerd[1962]: 2026-01-21 01:02:49.480 [INFO][5122] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 21 01:02:49.648870 containerd[1962]: 2026-01-21 01:02:49.480 [INFO][5122] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-28-215' Jan 21 01:02:49.648870 containerd[1962]: 2026-01-21 01:02:49.504 [INFO][5122] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.165abae88ca49708b01858fd70e14a8b25d6b19db4788074d7adea04972bfbfa" host="ip-172-31-28-215" Jan 21 01:02:49.648870 containerd[1962]: 2026-01-21 01:02:49.514 [INFO][5122] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-28-215" Jan 21 01:02:49.648870 containerd[1962]: 2026-01-21 01:02:49.523 [INFO][5122] ipam/ipam.go 511: Trying affinity for 192.168.122.64/26 host="ip-172-31-28-215" Jan 21 01:02:49.648870 containerd[1962]: 2026-01-21 01:02:49.526 [INFO][5122] ipam/ipam.go 158: Attempting to load block cidr=192.168.122.64/26 host="ip-172-31-28-215" Jan 21 01:02:49.648870 containerd[1962]: 2026-01-21 01:02:49.531 [INFO][5122] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.122.64/26 host="ip-172-31-28-215" Jan 21 01:02:49.649276 containerd[1962]: 2026-01-21 01:02:49.532 [INFO][5122] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.122.64/26 handle="k8s-pod-network.165abae88ca49708b01858fd70e14a8b25d6b19db4788074d7adea04972bfbfa" host="ip-172-31-28-215" Jan 21 01:02:49.649276 containerd[1962]: 2026-01-21 01:02:49.535 [INFO][5122] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.165abae88ca49708b01858fd70e14a8b25d6b19db4788074d7adea04972bfbfa Jan 21 01:02:49.649276 containerd[1962]: 2026-01-21 01:02:49.552 [INFO][5122] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.122.64/26 handle="k8s-pod-network.165abae88ca49708b01858fd70e14a8b25d6b19db4788074d7adea04972bfbfa" host="ip-172-31-28-215" Jan 21 01:02:49.649276 containerd[1962]: 2026-01-21 01:02:49.562 [INFO][5122] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.122.70/26] block=192.168.122.64/26 
handle="k8s-pod-network.165abae88ca49708b01858fd70e14a8b25d6b19db4788074d7adea04972bfbfa" host="ip-172-31-28-215" Jan 21 01:02:49.649276 containerd[1962]: 2026-01-21 01:02:49.563 [INFO][5122] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.122.70/26] handle="k8s-pod-network.165abae88ca49708b01858fd70e14a8b25d6b19db4788074d7adea04972bfbfa" host="ip-172-31-28-215" Jan 21 01:02:49.649276 containerd[1962]: 2026-01-21 01:02:49.563 [INFO][5122] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 21 01:02:49.649276 containerd[1962]: 2026-01-21 01:02:49.563 [INFO][5122] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.122.70/26] IPv6=[] ContainerID="165abae88ca49708b01858fd70e14a8b25d6b19db4788074d7adea04972bfbfa" HandleID="k8s-pod-network.165abae88ca49708b01858fd70e14a8b25d6b19db4788074d7adea04972bfbfa" Workload="ip--172--31--28--215-k8s-coredns--668d6bf9bc--m6nqt-eth0" Jan 21 01:02:49.649556 containerd[1962]: 2026-01-21 01:02:49.567 [INFO][5064] cni-plugin/k8s.go 418: Populated endpoint ContainerID="165abae88ca49708b01858fd70e14a8b25d6b19db4788074d7adea04972bfbfa" Namespace="kube-system" Pod="coredns-668d6bf9bc-m6nqt" WorkloadEndpoint="ip--172--31--28--215-k8s-coredns--668d6bf9bc--m6nqt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--215-k8s-coredns--668d6bf9bc--m6nqt-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"8dde797d-8111-4b2e-a7ed-5cc610d0e8e0", ResourceVersion:"869", Generation:0, CreationTimestamp:time.Date(2026, time.January, 21, 1, 1, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-215", ContainerID:"", Pod:"coredns-668d6bf9bc-m6nqt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.122.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif3d68c2ddc3", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 21 01:02:49.649556 containerd[1962]: 2026-01-21 01:02:49.568 [INFO][5064] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.122.70/32] ContainerID="165abae88ca49708b01858fd70e14a8b25d6b19db4788074d7adea04972bfbfa" Namespace="kube-system" Pod="coredns-668d6bf9bc-m6nqt" WorkloadEndpoint="ip--172--31--28--215-k8s-coredns--668d6bf9bc--m6nqt-eth0" Jan 21 01:02:49.649556 containerd[1962]: 2026-01-21 01:02:49.568 [INFO][5064] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif3d68c2ddc3 
ContainerID="165abae88ca49708b01858fd70e14a8b25d6b19db4788074d7adea04972bfbfa" Namespace="kube-system" Pod="coredns-668d6bf9bc-m6nqt" WorkloadEndpoint="ip--172--31--28--215-k8s-coredns--668d6bf9bc--m6nqt-eth0" Jan 21 01:02:49.649556 containerd[1962]: 2026-01-21 01:02:49.582 [INFO][5064] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="165abae88ca49708b01858fd70e14a8b25d6b19db4788074d7adea04972bfbfa" Namespace="kube-system" Pod="coredns-668d6bf9bc-m6nqt" WorkloadEndpoint="ip--172--31--28--215-k8s-coredns--668d6bf9bc--m6nqt-eth0" Jan 21 01:02:49.649556 containerd[1962]: 2026-01-21 01:02:49.586 [INFO][5064] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="165abae88ca49708b01858fd70e14a8b25d6b19db4788074d7adea04972bfbfa" Namespace="kube-system" Pod="coredns-668d6bf9bc-m6nqt" WorkloadEndpoint="ip--172--31--28--215-k8s-coredns--668d6bf9bc--m6nqt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--215-k8s-coredns--668d6bf9bc--m6nqt-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"8dde797d-8111-4b2e-a7ed-5cc610d0e8e0", ResourceVersion:"869", Generation:0, CreationTimestamp:time.Date(2026, time.January, 21, 1, 1, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-215", ContainerID:"165abae88ca49708b01858fd70e14a8b25d6b19db4788074d7adea04972bfbfa", Pod:"coredns-668d6bf9bc-m6nqt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.122.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif3d68c2ddc3", MAC:"02:88:1a:c3:24:32", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 21 01:02:49.649556 containerd[1962]: 2026-01-21 01:02:49.636 [INFO][5064] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="165abae88ca49708b01858fd70e14a8b25d6b19db4788074d7adea04972bfbfa" Namespace="kube-system" Pod="coredns-668d6bf9bc-m6nqt" WorkloadEndpoint="ip--172--31--28--215-k8s-coredns--668d6bf9bc--m6nqt-eth0" Jan 21 01:02:49.659350 containerd[1962]: time="2026-01-21T01:02:49.659307161Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-66b4f466fd-6gfxt,Uid:a6a90027-9f22-4c0c-9ffd-5b7564ee55c8,Namespace:calico-system,Attempt:0,} returns sandbox id \"bb07c3a7102c0e6fe1c0fe0f957fa34f8b7a88e82359c2bfb2aa753071788d2a\"" Jan 21 01:02:49.698797 containerd[1962]: time="2026-01-21T01:02:49.698744855Z" 
level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 21 01:02:49.701378 containerd[1962]: time="2026-01-21T01:02:49.701316413Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 21 01:02:49.701565 containerd[1962]: time="2026-01-21T01:02:49.701343497Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 21 01:02:49.701619 kubelet[3468]: E0121 01:02:49.701565 3468 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 21 01:02:49.701709 kubelet[3468]: E0121 01:02:49.701628 3468 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 21 01:02:49.704562 kubelet[3468]: E0121 01:02:49.701916 3468 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pqxsf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-vhxxc_calico-system(b0fd67ff-2b5d-470f-b242-daa718038f97): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 21 01:02:49.704562 kubelet[3468]: E0121 01:02:49.704141 3468 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-vhxxc" podUID="b0fd67ff-2b5d-470f-b242-daa718038f97" Jan 21 01:02:49.704818 containerd[1962]: time="2026-01-21T01:02:49.702430953Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 21 01:02:49.723690 systemd-networkd[1555]: cali4b5de6b026b: Gained IPv6LL Jan 21 01:02:49.737083 containerd[1962]: time="2026-01-21T01:02:49.735376362Z" level=info msg="connecting to shim 165abae88ca49708b01858fd70e14a8b25d6b19db4788074d7adea04972bfbfa" address="unix:///run/containerd/s/c106edb59d78db0509168da6f55270213caa8f4db8a249c970dada82c2178ef2" namespace=k8s.io protocol=ttrpc version=3 Jan 21 01:02:49.815503 kubelet[3468]: E0121 01:02:49.815462 3468 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-vhxxc" podUID="b0fd67ff-2b5d-470f-b242-daa718038f97" Jan 21 01:02:49.822198 kubelet[3468]: E0121 01:02:49.822155 3468 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b9ccb8fff-8h9ht" podUID="bec76da0-2399-4e6c-952b-9dc7ad88a302" Jan 21 01:02:49.840839 systemd[1]: Started cri-containerd-165abae88ca49708b01858fd70e14a8b25d6b19db4788074d7adea04972bfbfa.scope - libcontainer container 165abae88ca49708b01858fd70e14a8b25d6b19db4788074d7adea04972bfbfa. 
Jan 21 01:02:49.844241 kubelet[3468]: E0121 01:02:49.844088 3468 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b9ccb8fff-hdchl" podUID="0aa53f42-4d9d-4bec-8857-7799d5876dce" Jan 21 01:02:49.906000 audit: BPF prog-id=207 op=LOAD Jan 21 01:02:49.910000 audit: BPF prog-id=208 op=LOAD Jan 21 01:02:49.910000 audit[5172]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=5159 pid=5172 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:49.910000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3136356162616538386361343937303862303138353866643730653134 Jan 21 01:02:49.910000 audit: BPF prog-id=208 op=UNLOAD Jan 21 01:02:49.910000 audit[5172]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5159 pid=5172 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:49.910000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3136356162616538386361343937303862303138353866643730653134 Jan 21 01:02:49.911000 audit: BPF prog-id=209 op=LOAD Jan 21 01:02:49.911000 audit[5172]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=5159 pid=5172 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:49.911000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3136356162616538386361343937303862303138353866643730653134 Jan 21 01:02:49.911000 audit: BPF prog-id=210 op=LOAD Jan 21 01:02:49.911000 audit[5172]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=5159 pid=5172 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:49.911000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3136356162616538386361343937303862303138353866643730653134 Jan 21 01:02:49.911000 audit: BPF prog-id=210 op=UNLOAD Jan 21 01:02:49.911000 audit[5172]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5159 
pid=5172 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:49.911000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3136356162616538386361343937303862303138353866643730653134 Jan 21 01:02:49.914000 audit: BPF prog-id=209 op=UNLOAD Jan 21 01:02:49.914000 audit[5172]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5159 pid=5172 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:49.914000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3136356162616538386361343937303862303138353866643730653134 Jan 21 01:02:49.916760 systemd-networkd[1555]: cali2a8f735ee4f: Gained IPv6LL Jan 21 01:02:49.914000 audit: BPF prog-id=211 op=LOAD Jan 21 01:02:49.914000 audit[5172]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=5159 pid=5172 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:49.914000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3136356162616538386361343937303862303138353866643730653134 Jan 21 01:02:49.997163 containerd[1962]: time="2026-01-21T01:02:49.996909211Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 21 01:02:50.000882 containerd[1962]: time="2026-01-21T01:02:50.000708873Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 21 01:02:50.001938 containerd[1962]: time="2026-01-21T01:02:50.001828198Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 21 01:02:50.003233 kubelet[3468]: E0121 01:02:50.002246 3468 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 21 01:02:50.003233 kubelet[3468]: E0121 01:02:50.002304 3468 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 21 01:02:50.003233 kubelet[3468]: E0121 01:02:50.002579 3468 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vxm28,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-764d9fb657-cw7sh_calico-system(5e4fb692-d42b-4062-b09a-d89e5a5a14cf): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 21 01:02:50.005094 containerd[1962]: time="2026-01-21T01:02:50.004285092Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-m6nqt,Uid:8dde797d-8111-4b2e-a7ed-5cc610d0e8e0,Namespace:kube-system,Attempt:0,} returns sandbox id \"165abae88ca49708b01858fd70e14a8b25d6b19db4788074d7adea04972bfbfa\"" Jan 21 01:02:50.006011 kubelet[3468]: E0121 01:02:50.005604 3468 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" 
pod="calico-system/calico-kube-controllers-764d9fb657-cw7sh" podUID="5e4fb692-d42b-4062-b09a-d89e5a5a14cf" Jan 21 01:02:50.007572 containerd[1962]: time="2026-01-21T01:02:50.007040738Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 21 01:02:50.011494 containerd[1962]: time="2026-01-21T01:02:50.011158987Z" level=info msg="CreateContainer within sandbox \"165abae88ca49708b01858fd70e14a8b25d6b19db4788074d7adea04972bfbfa\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 21 01:02:50.022000 audit[5214]: NETFILTER_CFG table=filter:121 family=2 entries=20 op=nft_register_rule pid=5214 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 01:02:50.022000 audit[5214]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffc363e1160 a2=0 a3=7ffc363e114c items=0 ppid=3578 pid=5214 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:50.022000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 01:02:50.054000 audit[5214]: NETFILTER_CFG table=nat:122 family=2 entries=14 op=nft_register_rule pid=5214 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 01:02:50.054000 audit[5214]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffc363e1160 a2=0 a3=0 items=0 ppid=3578 pid=5214 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:50.054000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 01:02:50.069000 audit[5224]: NETFILTER_CFG table=filter:123 family=2 entries=20 op=nft_register_rule pid=5224 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 01:02:50.069000 audit[5224]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffef48cdf70 a2=0 a3=7ffef48cdf5c items=0 ppid=3578 pid=5224 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:50.069000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 01:02:50.072000 audit[5224]: NETFILTER_CFG table=nat:124 family=2 entries=14 op=nft_register_rule pid=5224 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 01:02:50.072000 audit[5224]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffef48cdf70 a2=0 a3=0 items=0 ppid=3578 pid=5224 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:50.072000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 01:02:50.106421 systemd-networkd[1555]: cali5711bd92e25: Gained IPv6LL Jan 21 01:02:50.186183 containerd[1962]: time="2026-01-21T01:02:50.185342995Z" level=info msg="Container b42ca03d996955c2caf24dcbb394b0befd9f44368daa4a597e25b2cac0447406: CDI devices from CRI Config.CDIDevices: []" Jan 21 
01:02:50.210046 containerd[1962]: time="2026-01-21T01:02:50.209993887Z" level=info msg="CreateContainer within sandbox \"165abae88ca49708b01858fd70e14a8b25d6b19db4788074d7adea04972bfbfa\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b42ca03d996955c2caf24dcbb394b0befd9f44368daa4a597e25b2cac0447406\"" Jan 21 01:02:50.210843 containerd[1962]: time="2026-01-21T01:02:50.210816145Z" level=info msg="StartContainer for \"b42ca03d996955c2caf24dcbb394b0befd9f44368daa4a597e25b2cac0447406\"" Jan 21 01:02:50.211892 containerd[1962]: time="2026-01-21T01:02:50.211852373Z" level=info msg="connecting to shim b42ca03d996955c2caf24dcbb394b0befd9f44368daa4a597e25b2cac0447406" address="unix:///run/containerd/s/c106edb59d78db0509168da6f55270213caa8f4db8a249c970dada82c2178ef2" protocol=ttrpc version=3 Jan 21 01:02:50.236498 systemd[1]: Started cri-containerd-b42ca03d996955c2caf24dcbb394b0befd9f44368daa4a597e25b2cac0447406.scope - libcontainer container b42ca03d996955c2caf24dcbb394b0befd9f44368daa4a597e25b2cac0447406. Jan 21 01:02:50.240691 containerd[1962]: time="2026-01-21T01:02:50.240646880Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 21 01:02:50.242931 containerd[1962]: time="2026-01-21T01:02:50.242890958Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 21 01:02:50.243058 containerd[1962]: time="2026-01-21T01:02:50.243029505Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 21 01:02:50.243326 kubelet[3468]: E0121 01:02:50.243289 3468 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 21 01:02:50.243646 kubelet[3468]: E0121 01:02:50.243337 3468 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 21 01:02:50.243646 kubelet[3468]: E0121 01:02:50.243438 3468 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:70d67053a2a44bfca2c3c6f84dd42b97,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ssm6m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-66b4f466fd-6gfxt_calico-system(a6a90027-9f22-4c0c-9ffd-5b7564ee55c8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 21 01:02:50.246968 containerd[1962]: time="2026-01-21T01:02:50.246835125Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 21 01:02:50.304000 audit: BPF prog-id=212 op=LOAD Jan 21 01:02:50.304000 audit[5250]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc4b908780 a2=98 a3=1fffffffffffffff items=0 ppid=4848 pid=5250 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:50.304000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 21 01:02:50.304000 audit: BPF prog-id=212 op=UNLOAD Jan 21 01:02:50.304000 audit[5250]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffc4b908750 a3=0 items=0 ppid=4848 pid=5250 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:50.304000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 21 01:02:50.310000 audit: BPF prog-id=213 op=LOAD Jan 21 01:02:50.310000 audit[5250]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc4b908660 a2=94 a3=3 items=0 ppid=4848 pid=5250 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:50.310000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 21 01:02:50.310000 audit: BPF prog-id=213 op=UNLOAD Jan 21 01:02:50.310000 audit[5250]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffc4b908660 a2=94 a3=3 items=0 ppid=4848 pid=5250 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:50.310000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 21 01:02:50.311000 audit: BPF prog-id=214 op=LOAD Jan 21 01:02:50.311000 audit[5250]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc4b9086a0 a2=94 a3=7ffc4b908880 items=0 ppid=4848 pid=5250 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:50.311000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 21 01:02:50.311000 audit: BPF prog-id=215 op=LOAD Jan 21 01:02:50.311000 audit: BPF prog-id=214 op=UNLOAD Jan 21 01:02:50.311000 audit[5250]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffc4b9086a0 a2=94 a3=7ffc4b908880 items=0 ppid=4848 pid=5250 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:50.311000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 21 01:02:50.312000 audit: BPF prog-id=216 op=LOAD Jan 21 01:02:50.312000 audit[5225]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8238 a2=98 a3=0 items=0 ppid=5159 pid=5225 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:50.312000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6234326361303364393936393535633263616632346463626233393462 Jan 21 01:02:50.313000 audit: BPF prog-id=216 op=UNLOAD Jan 21 01:02:50.313000 audit[5225]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5159 pid=5225 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:50.313000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6234326361303364393936393535633263616632346463626233393462 Jan 21 01:02:50.313000 audit: BPF prog-id=217 op=LOAD Jan 21 01:02:50.313000 audit[5225]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=5159 pid=5225 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:50.313000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6234326361303364393936393535633263616632346463626233393462 Jan 21 01:02:50.313000 audit: BPF prog-id=218 op=LOAD Jan 21 01:02:50.313000 audit[5225]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=5159 pid=5225 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:50.313000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6234326361303364393936393535633263616632346463626233393462 Jan 21 01:02:50.313000 audit: BPF prog-id=218 op=UNLOAD Jan 21 01:02:50.313000 audit[5225]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5159 pid=5225 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:50.313000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6234326361303364393936393535633263616632346463626233393462 Jan 21 01:02:50.313000 audit: BPF prog-id=217 op=UNLOAD Jan 21 01:02:50.313000 audit[5225]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5159 pid=5225 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:50.313000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6234326361303364393936393535633263616632346463626233393462 Jan 21 01:02:50.313000 audit: BPF prog-id=219 op=LOAD Jan 21 01:02:50.313000 audit[5225]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 ppid=5159 pid=5225 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:50.313000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6234326361303364393936393535633263616632346463626233393462 Jan 21 01:02:50.317000 audit: BPF prog-id=220 op=LOAD Jan 21 01:02:50.317000 audit[5251]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff345b1a90 a2=98 a3=3 items=0 ppid=4848 pid=5251 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:50.317000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 21 01:02:50.318000 audit: BPF prog-id=220 op=UNLOAD Jan 21 01:02:50.318000 audit[5251]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7fff345b1a60 a3=0 items=0 ppid=4848 pid=5251 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:50.318000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 21 01:02:50.320000 audit: BPF prog-id=221 op=LOAD Jan 21 01:02:50.320000 audit[5251]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff345b1880 a2=94 a3=54428f items=0 ppid=4848 pid=5251 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:50.320000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 21 01:02:50.320000 audit: BPF prog-id=221 op=UNLOAD Jan 21 01:02:50.320000 audit[5251]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7fff345b1880 a2=94 a3=54428f items=0 ppid=4848 pid=5251 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:50.320000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 21 01:02:50.320000 audit: BPF prog-id=222 op=LOAD Jan 21 01:02:50.320000 audit[5251]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff345b18b0 a2=94 a3=2 items=0 ppid=4848 pid=5251 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:50.320000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 21 01:02:50.320000 audit: BPF prog-id=222 op=UNLOAD Jan 21 01:02:50.320000 audit[5251]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7fff345b18b0 a2=0 a3=2 items=0 ppid=4848 pid=5251 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:50.320000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 21 01:02:50.360709 containerd[1962]: time="2026-01-21T01:02:50.360670121Z" level=info msg="StartContainer for \"b42ca03d996955c2caf24dcbb394b0befd9f44368daa4a597e25b2cac0447406\" returns successfully" Jan 21 01:02:50.491888 containerd[1962]: time="2026-01-21T01:02:50.491748619Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 21 01:02:50.494491 
containerd[1962]: time="2026-01-21T01:02:50.494331614Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 21 01:02:50.494856 containerd[1962]: time="2026-01-21T01:02:50.494816767Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 21 01:02:50.495303 kubelet[3468]: E0121 01:02:50.495042 3468 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 21 01:02:50.495303 kubelet[3468]: E0121 01:02:50.495097 3468 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 21 01:02:50.497346 kubelet[3468]: E0121 01:02:50.497258 3468 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ssm6m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-66b4f466fd-6gfxt_calico-system(a6a90027-9f22-4c0c-9ffd-5b7564ee55c8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" 
logger="UnhandledError" Jan 21 01:02:50.498943 kubelet[3468]: E0121 01:02:50.498872 3468 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-66b4f466fd-6gfxt" podUID="a6a90027-9f22-4c0c-9ffd-5b7564ee55c8" Jan 21 01:02:50.522000 audit: BPF prog-id=223 op=LOAD Jan 21 01:02:50.522000 audit[5251]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff345b1770 a2=94 a3=1 items=0 ppid=4848 pid=5251 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:50.522000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 21 01:02:50.522000 audit: BPF prog-id=223 op=UNLOAD Jan 21 01:02:50.522000 audit[5251]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7fff345b1770 a2=94 a3=1 items=0 ppid=4848 pid=5251 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:50.522000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 21 01:02:50.536000 audit: BPF prog-id=224 op=LOAD Jan 21 01:02:50.536000 audit[5251]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fff345b1760 a2=94 a3=4 items=0 ppid=4848 pid=5251 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:50.536000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 21 01:02:50.537000 audit: BPF prog-id=224 op=UNLOAD Jan 21 01:02:50.537000 audit[5251]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7fff345b1760 a2=0 a3=4 items=0 ppid=4848 pid=5251 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:50.537000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 21 01:02:50.537000 audit: BPF prog-id=225 op=LOAD Jan 21 01:02:50.537000 audit[5251]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fff345b15c0 a2=94 a3=5 items=0 ppid=4848 pid=5251 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:50.537000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 21 01:02:50.537000 audit: BPF prog-id=225 op=UNLOAD Jan 21 01:02:50.537000 audit[5251]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7fff345b15c0 a2=0 a3=5 items=0 ppid=4848 pid=5251 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 21 01:02:50.537000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 21 01:02:50.537000 audit: BPF prog-id=226 op=LOAD Jan 21 01:02:50.537000 audit[5251]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fff345b17e0 a2=94 a3=6 items=0 ppid=4848 pid=5251 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:50.537000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 21 01:02:50.537000 audit: BPF prog-id=226 op=UNLOAD Jan 21 01:02:50.537000 audit[5251]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7fff345b17e0 a2=0 a3=6 items=0 ppid=4848 pid=5251 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:50.537000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 21 01:02:50.537000 audit: BPF prog-id=227 op=LOAD Jan 21 01:02:50.537000 audit[5251]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fff345b0f90 a2=94 a3=88 items=0 ppid=4848 pid=5251 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:50.537000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 21 01:02:50.538000 audit: BPF prog-id=228 op=LOAD Jan 21 01:02:50.538000 audit[5251]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7fff345b0e10 a2=94 a3=2 items=0 ppid=4848 pid=5251 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:50.538000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 21 01:02:50.538000 audit: BPF prog-id=228 op=UNLOAD Jan 21 01:02:50.538000 audit[5251]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7fff345b0e40 a2=0 a3=7fff345b0f40 items=0 ppid=4848 pid=5251 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:50.538000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 21 01:02:50.538000 audit: BPF prog-id=227 op=UNLOAD Jan 21 01:02:50.538000 audit[5251]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=21266d10 a2=0 a3=7c25128c8912dc6c items=0 ppid=4848 pid=5251 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:50.538000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 21 01:02:50.619450 systemd-networkd[1555]: cali8bbdd3edcdb: Gained IPv6LL Jan 21 01:02:50.900820 kubelet[3468]: E0121 01:02:50.900775 3468 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b9ccb8fff-8h9ht" podUID="bec76da0-2399-4e6c-952b-9dc7ad88a302" Jan 21 01:02:50.913481 kubelet[3468]: E0121 01:02:50.913428 3468 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b9ccb8fff-hdchl" podUID="0aa53f42-4d9d-4bec-8857-7799d5876dce" Jan 21 01:02:50.913699 kubelet[3468]: E0121 01:02:50.913534 3468 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-vhxxc" podUID="b0fd67ff-2b5d-470f-b242-daa718038f97" Jan 21 01:02:50.913699 kubelet[3468]: E0121 01:02:50.913590 3468 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-764d9fb657-cw7sh" podUID="5e4fb692-d42b-4062-b09a-d89e5a5a14cf" Jan 21 01:02:50.915472 kubelet[3468]: E0121 01:02:50.915418 3468 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-66b4f466fd-6gfxt" podUID="a6a90027-9f22-4c0c-9ffd-5b7564ee55c8" Jan 21 01:02:50.947000 audit[5268]: NETFILTER_CFG table=filter:125 family=2 entries=20 op=nft_register_rule pid=5268 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 01:02:50.947000 audit[5268]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffd3c4b5f60 a2=0 a3=7ffd3c4b5f4c items=0 ppid=3578 pid=5268 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:50.947000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 01:02:50.953000 audit[5268]: NETFILTER_CFG table=nat:126 family=2 entries=14 op=nft_register_rule pid=5268 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 01:02:50.953000 audit[5268]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffd3c4b5f60 a2=0 a3=0 items=0 ppid=3578 pid=5268 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:50.953000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 01:02:50.990427 kubelet[3468]: I0121 01:02:50.981750 3468 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-m6nqt" podStartSLOduration=79.979137719 podStartE2EDuration="1m19.979137719s" podCreationTimestamp="2026-01-21 01:01:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 01:02:50.978733607 +0000 UTC m=+85.034137831" watchObservedRunningTime="2026-01-21 01:02:50.979137719 +0000 UTC m=+85.034541944" Jan 21 01:02:51.039000 audit: BPF prog-id=229 op=LOAD Jan 21 01:02:51.039000 audit[5269]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd88333cf0 a2=98 a3=1999999999999999 items=0 ppid=4848 pid=5269 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:51.039000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 21 01:02:51.040000 audit: BPF prog-id=229 op=UNLOAD Jan 21 01:02:51.040000 audit[5269]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffd88333cc0 a3=0 items=0 ppid=4848 pid=5269 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:51.040000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 21 01:02:51.040000 audit: BPF prog-id=230 op=LOAD Jan 21 01:02:51.040000 audit[5269]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd88333bd0 a2=94 a3=ffff items=0 ppid=4848 pid=5269 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:51.040000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 21 01:02:51.040000 audit: BPF prog-id=230 op=UNLOAD Jan 21 01:02:51.040000 audit[5269]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffd88333bd0 a2=94 a3=ffff items=0 ppid=4848 pid=5269 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:51.040000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 21 01:02:51.040000 audit: BPF prog-id=231 op=LOAD Jan 21 01:02:51.040000 audit[5269]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd88333c10 a2=94 a3=7ffd88333df0 items=0 ppid=4848 pid=5269 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:51.040000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 21 01:02:51.040000 audit: BPF prog-id=231 op=UNLOAD Jan 21 01:02:51.040000 audit[5269]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffd88333c10 a2=94 a3=7ffd88333df0 items=0 ppid=4848 pid=5269 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:51.040000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 21 01:02:51.221377 systemd-networkd[1555]: vxlan.calico: Link UP Jan 21 01:02:51.221387 systemd-networkd[1555]: vxlan.calico: Gained carrier Jan 21 01:02:51.320000 audit: BPF prog-id=232 op=LOAD Jan 21 01:02:51.320000 audit[5296]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe8cdc99f0 a2=98 a3=0 items=0 ppid=4848 pid=5296 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:51.320000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 21 01:02:51.320000 audit: BPF prog-id=232 op=UNLOAD Jan 21 01:02:51.320000 audit[5296]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffe8cdc99c0 a3=0 items=0 ppid=4848 pid=5296 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:51.320000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 21 01:02:51.321000 audit: BPF prog-id=233 op=LOAD Jan 21 01:02:51.321000 audit[5296]: SYSCALL arch=c000003e syscall=321 
success=yes exit=3 a0=5 a1=7ffe8cdc9800 a2=94 a3=54428f items=0 ppid=4848 pid=5296 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:51.321000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 21 01:02:51.321000 audit: BPF prog-id=233 op=UNLOAD Jan 21 01:02:51.321000 audit[5296]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffe8cdc9800 a2=94 a3=54428f items=0 ppid=4848 pid=5296 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:51.321000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 21 01:02:51.321000 audit: BPF prog-id=234 op=LOAD Jan 21 01:02:51.321000 audit[5296]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe8cdc9830 a2=94 a3=2 items=0 ppid=4848 pid=5296 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:51.321000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 21 01:02:51.321000 audit: BPF prog-id=234 op=UNLOAD Jan 21 01:02:51.321000 audit[5296]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffe8cdc9830 a2=0 a3=2 items=0 ppid=4848 pid=5296 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:51.321000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 21 01:02:51.322000 audit: BPF prog-id=235 op=LOAD Jan 21 01:02:51.322000 audit[5296]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffe8cdc95e0 a2=94 a3=4 items=0 ppid=4848 pid=5296 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:51.322000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 21 01:02:51.322000 audit: BPF prog-id=235 op=UNLOAD Jan 21 01:02:51.322000 audit[5296]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffe8cdc95e0 a2=94 a3=4 items=0 ppid=4848 pid=5296 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:51.322000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 21 01:02:51.322000 audit: BPF prog-id=236 op=LOAD Jan 21 01:02:51.322000 audit[5296]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffe8cdc96e0 a2=94 a3=7ffe8cdc9860 items=0 ppid=4848 pid=5296 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:51.322000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 21 01:02:51.322000 audit: BPF prog-id=236 op=UNLOAD Jan 21 01:02:51.322000 audit[5296]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffe8cdc96e0 a2=0 a3=7ffe8cdc9860 items=0 ppid=4848 pid=5296 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:51.322000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 21 01:02:51.323000 audit: BPF prog-id=237 op=LOAD Jan 21 01:02:51.323000 audit[5296]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffe8cdc8e10 a2=94 a3=2 items=0 ppid=4848 pid=5296 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:51.323000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 21 01:02:51.323000 audit: BPF prog-id=237 op=UNLOAD Jan 21 01:02:51.323000 audit[5296]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffe8cdc8e10 a2=0 a3=2 items=0 ppid=4848 pid=5296 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:51.323000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 21 01:02:51.323000 audit: BPF prog-id=238 op=LOAD Jan 21 01:02:51.323000 audit[5296]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffe8cdc8f10 a2=94 a3=30 items=0 ppid=4848 pid=5296 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:51.323000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 21 01:02:51.341000 audit: BPF prog-id=239 op=LOAD Jan 21 01:02:51.341000 audit[5302]: SYSCALL 
arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffcc7851e90 a2=98 a3=0 items=0 ppid=4848 pid=5302 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:51.341000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 21 01:02:51.342000 audit: BPF prog-id=239 op=UNLOAD Jan 21 01:02:51.342000 audit[5302]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffcc7851e60 a3=0 items=0 ppid=4848 pid=5302 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:51.342000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 21 01:02:51.342000 audit: BPF prog-id=240 op=LOAD Jan 21 01:02:51.342000 audit[5302]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffcc7851c80 a2=94 a3=54428f items=0 ppid=4848 pid=5302 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:51.342000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 21 01:02:51.342000 audit: BPF prog-id=240 op=UNLOAD Jan 21 01:02:51.342000 audit[5302]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffcc7851c80 a2=94 a3=54428f items=0 ppid=4848 pid=5302 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:51.342000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 21 01:02:51.342000 audit: BPF prog-id=241 op=LOAD Jan 21 01:02:51.342000 audit[5302]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffcc7851cb0 a2=94 a3=2 items=0 ppid=4848 pid=5302 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:51.342000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 21 01:02:51.342000 audit: BPF prog-id=241 op=UNLOAD Jan 21 01:02:51.342000 audit[5302]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffcc7851cb0 a2=0 a3=2 items=0 ppid=4848 pid=5302 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:51.342000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 21 01:02:51.450390 systemd-networkd[1555]: calif3d68c2ddc3: Gained IPv6LL Jan 21 01:02:51.567000 audit: BPF prog-id=242 op=LOAD Jan 21 01:02:51.567000 audit[5302]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffcc7851b70 a2=94 a3=1 items=0 ppid=4848 pid=5302 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:51.567000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 21 01:02:51.567000 audit: BPF prog-id=242 op=UNLOAD Jan 21 01:02:51.567000 audit[5302]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffcc7851b70 a2=94 a3=1 items=0 ppid=4848 pid=5302 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:51.567000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 21 01:02:51.579000 audit: BPF prog-id=243 op=LOAD Jan 21 01:02:51.579000 audit[5302]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffcc7851b60 a2=94 a3=4 items=0 ppid=4848 pid=5302 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:51.579000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 21 01:02:51.580000 audit: BPF prog-id=243 op=UNLOAD Jan 21 01:02:51.580000 audit[5302]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffcc7851b60 a2=0 a3=4 items=0 ppid=4848 pid=5302 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:51.580000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 21 01:02:51.580000 audit: BPF prog-id=244 op=LOAD Jan 21 01:02:51.580000 audit[5302]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffcc78519c0 a2=94 a3=5 items=0 ppid=4848 pid=5302 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:51.580000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 21 01:02:51.580000 audit: BPF prog-id=244 op=UNLOAD Jan 21 01:02:51.580000 audit[5302]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffcc78519c0 a2=0 a3=5 items=0 ppid=4848 pid=5302 auid=4294967295 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:51.580000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 21 01:02:51.580000 audit: BPF prog-id=245 op=LOAD Jan 21 01:02:51.580000 audit[5302]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffcc7851be0 a2=94 a3=6 items=0 ppid=4848 pid=5302 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:51.580000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 21 01:02:51.580000 audit: BPF prog-id=245 op=UNLOAD Jan 21 01:02:51.580000 audit[5302]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffcc7851be0 a2=0 a3=6 items=0 ppid=4848 pid=5302 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:51.580000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 21 01:02:51.580000 audit: BPF prog-id=246 op=LOAD Jan 21 01:02:51.580000 audit[5302]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffcc7851390 a2=94 a3=88 items=0 ppid=4848 pid=5302 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:51.580000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 21 01:02:51.581000 audit: BPF prog-id=247 op=LOAD Jan 21 01:02:51.581000 audit[5302]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7ffcc7851210 a2=94 a3=2 items=0 ppid=4848 pid=5302 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:51.581000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 21 01:02:51.581000 audit: BPF prog-id=247 op=UNLOAD Jan 21 01:02:51.581000 audit[5302]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7ffcc7851240 a2=0 a3=7ffcc7851340 items=0 ppid=4848 pid=5302 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:51.581000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 21 01:02:51.581000 audit: BPF prog-id=246 op=UNLOAD Jan 21 01:02:51.581000 
audit[5302]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=2fe11d10 a2=0 a3=bec7f5803a6a6e86 items=0 ppid=4848 pid=5302 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:51.581000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 21 01:02:51.587000 audit: BPF prog-id=238 op=UNLOAD Jan 21 01:02:51.587000 audit[4848]: SYSCALL arch=c000003e syscall=263 success=yes exit=0 a0=ffffffffffffff9c a1=c001029300 a2=0 a3=0 items=0 ppid=4831 pid=4848 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="calico-node" exe="/usr/bin/calico-node" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:51.587000 audit: PROCTITLE proctitle=63616C69636F2D6E6F6465002D66656C6978 Jan 21 01:02:51.781000 audit[5330]: NETFILTER_CFG table=mangle:127 family=2 entries=16 op=nft_register_chain pid=5330 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 21 01:02:51.781000 audit[5330]: SYSCALL arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7ffc26a24de0 a2=0 a3=7ffc26a24dcc items=0 ppid=4848 pid=5330 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:51.781000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 21 01:02:51.797000 audit[5328]: NETFILTER_CFG table=raw:128 family=2 entries=21 op=nft_register_chain pid=5328 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 21 01:02:51.797000 audit[5328]: SYSCALL arch=c000003e syscall=46 success=yes exit=8452 a0=3 a1=7ffea9a28880 a2=0 a3=7ffea9a2886c items=0 ppid=4848 pid=5328 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:51.797000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 21 01:02:51.802000 audit[5336]: NETFILTER_CFG table=nat:129 family=2 entries=15 op=nft_register_chain pid=5336 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 21 01:02:51.802000 audit[5336]: SYSCALL arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7ffd56e56cd0 a2=0 a3=7ffd56e56cbc items=0 ppid=4848 pid=5336 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:51.802000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 21 01:02:51.811000 audit[5334]: NETFILTER_CFG table=filter:130 family=2 entries=263 op=nft_register_chain pid=5334 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 21 01:02:51.811000 audit[5334]: SYSCALL arch=c000003e syscall=46 success=yes exit=156020 a0=3 a1=7ffe4c55bd00 a2=0 a3=7ffe4c55bcec items=0 
ppid=4848 pid=5334 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:51.811000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 21 01:02:51.979000 audit[5347]: NETFILTER_CFG table=filter:131 family=2 entries=20 op=nft_register_rule pid=5347 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 01:02:51.979000 audit[5347]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffd3b57fab0 a2=0 a3=7ffd3b57fa9c items=0 ppid=3578 pid=5347 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:51.979000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 01:02:51.984000 audit[5347]: NETFILTER_CFG table=nat:132 family=2 entries=14 op=nft_register_rule pid=5347 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 01:02:51.984000 audit[5347]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffd3b57fab0 a2=0 a3=0 items=0 ppid=3578 pid=5347 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:51.984000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 01:02:52.411633 systemd-networkd[1555]: vxlan.calico: Gained IPv6LL Jan 21 01:02:55.135059 ntpd[1923]: Listen normally on 6 vxlan.calico 192.168.122.64:123 Jan 21 01:02:55.135116 ntpd[1923]: Listen normally on 7 cali4b5de6b026b [fe80::ecee:eeff:feee:eeee%4]:123 Jan 21 01:02:55.135996 ntpd[1923]: 21 Jan 01:02:55 ntpd[1923]: Listen normally on 6 vxlan.calico 192.168.122.64:123 Jan 21 01:02:55.135996 ntpd[1923]: 21 Jan 01:02:55 ntpd[1923]: Listen normally on 7 cali4b5de6b026b [fe80::ecee:eeff:feee:eeee%4]:123 Jan 21 01:02:55.135996 ntpd[1923]: 21 Jan 01:02:55 ntpd[1923]: Listen normally on 8 caliccafe319875 [fe80::ecee:eeff:feee:eeee%5]:123 Jan 21 01:02:55.135996 ntpd[1923]: 21 Jan 01:02:55 ntpd[1923]: Listen normally on 9 cali2a8f735ee4f [fe80::ecee:eeff:feee:eeee%6]:123 Jan 21 01:02:55.135996 ntpd[1923]: 21 Jan 01:02:55 ntpd[1923]: Listen normally on 10 cali5711bd92e25 [fe80::ecee:eeff:feee:eeee%7]:123 Jan 21 01:02:55.135996 ntpd[1923]: 21 Jan 01:02:55 ntpd[1923]: Listen normally on 11 cali8bbdd3edcdb [fe80::ecee:eeff:feee:eeee%8]:123 Jan 21 01:02:55.135996 ntpd[1923]: 21 Jan 01:02:55 ntpd[1923]: Listen normally on 12 calif3d68c2ddc3 [fe80::ecee:eeff:feee:eeee%9]:123 Jan 21 01:02:55.135996 ntpd[1923]: 21 Jan 01:02:55 ntpd[1923]: Listen normally on 13 vxlan.calico [fe80::6475:ebff:fe5a:588b%10]:123 Jan 21 01:02:55.135139 ntpd[1923]: Listen normally on 8 caliccafe319875 [fe80::ecee:eeff:feee:eeee%5]:123 Jan 21 01:02:55.135159 ntpd[1923]: Listen normally on 9 cali2a8f735ee4f [fe80::ecee:eeff:feee:eeee%6]:123 Jan 21 01:02:55.135179 ntpd[1923]: Listen normally on 10 cali5711bd92e25 [fe80::ecee:eeff:feee:eeee%7]:123 Jan 21 01:02:55.135200 ntpd[1923]: Listen normally on 11 cali8bbdd3edcdb [fe80::ecee:eeff:feee:eeee%8]:123 Jan 21 
01:02:55.135247 ntpd[1923]: Listen normally on 12 calif3d68c2ddc3 [fe80::ecee:eeff:feee:eeee%9]:123 Jan 21 01:02:55.135268 ntpd[1923]: Listen normally on 13 vxlan.calico [fe80::6475:ebff:fe5a:588b%10]:123 Jan 21 01:02:58.131420 containerd[1962]: time="2026-01-21T01:02:58.130972829Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-95fb8fc69-kstsq,Uid:35190377-3d75-48f7-8c27-98f24edff14f,Namespace:calico-apiserver,Attempt:0,}" Jan 21 01:02:58.325548 systemd-networkd[1555]: caliadd37fcdaa3: Link UP Jan 21 01:02:58.327261 systemd-networkd[1555]: caliadd37fcdaa3: Gained carrier Jan 21 01:02:58.336008 (udev-worker)[5376]: Network interface NamePolicy= disabled on kernel command line. Jan 21 01:02:58.373467 containerd[1962]: 2026-01-21 01:02:58.217 [INFO][5357] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--28--215-k8s-calico--apiserver--95fb8fc69--kstsq-eth0 calico-apiserver-95fb8fc69- calico-apiserver 35190377-3d75-48f7-8c27-98f24edff14f 874 0 2026-01-21 01:02:13 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:95fb8fc69 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-28-215 calico-apiserver-95fb8fc69-kstsq eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] caliadd37fcdaa3 [] [] }} ContainerID="acc78088e14daa0abce11dccdca384aa40f5e182cb07bb8397fda904bdf6c8f7" Namespace="calico-apiserver" Pod="calico-apiserver-95fb8fc69-kstsq" WorkloadEndpoint="ip--172--31--28--215-k8s-calico--apiserver--95fb8fc69--kstsq-" Jan 21 01:02:58.373467 containerd[1962]: 2026-01-21 01:02:58.218 [INFO][5357] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="acc78088e14daa0abce11dccdca384aa40f5e182cb07bb8397fda904bdf6c8f7" Namespace="calico-apiserver" Pod="calico-apiserver-95fb8fc69-kstsq" WorkloadEndpoint="ip--172--31--28--215-k8s-calico--apiserver--95fb8fc69--kstsq-eth0" Jan 21 01:02:58.373467 containerd[1962]: 2026-01-21 01:02:58.244 [INFO][5368] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="acc78088e14daa0abce11dccdca384aa40f5e182cb07bb8397fda904bdf6c8f7" HandleID="k8s-pod-network.acc78088e14daa0abce11dccdca384aa40f5e182cb07bb8397fda904bdf6c8f7" Workload="ip--172--31--28--215-k8s-calico--apiserver--95fb8fc69--kstsq-eth0" Jan 21 01:02:58.373467 containerd[1962]: 2026-01-21 01:02:58.244 [INFO][5368] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="acc78088e14daa0abce11dccdca384aa40f5e182cb07bb8397fda904bdf6c8f7" HandleID="k8s-pod-network.acc78088e14daa0abce11dccdca384aa40f5e182cb07bb8397fda904bdf6c8f7" Workload="ip--172--31--28--215-k8s-calico--apiserver--95fb8fc69--kstsq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f590), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-28-215", "pod":"calico-apiserver-95fb8fc69-kstsq", "timestamp":"2026-01-21 01:02:58.244484381 +0000 UTC"}, Hostname:"ip-172-31-28-215", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 21 01:02:58.373467 containerd[1962]: 2026-01-21 01:02:58.244 [INFO][5368] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Jan 21 01:02:58.373467 containerd[1962]: 2026-01-21 01:02:58.244 [INFO][5368] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 21 01:02:58.373467 containerd[1962]: 2026-01-21 01:02:58.244 [INFO][5368] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-28-215' Jan 21 01:02:58.373467 containerd[1962]: 2026-01-21 01:02:58.252 [INFO][5368] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.acc78088e14daa0abce11dccdca384aa40f5e182cb07bb8397fda904bdf6c8f7" host="ip-172-31-28-215" Jan 21 01:02:58.373467 containerd[1962]: 2026-01-21 01:02:58.258 [INFO][5368] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-28-215" Jan 21 01:02:58.373467 containerd[1962]: 2026-01-21 01:02:58.264 [INFO][5368] ipam/ipam.go 511: Trying affinity for 192.168.122.64/26 host="ip-172-31-28-215" Jan 21 01:02:58.373467 containerd[1962]: 2026-01-21 01:02:58.267 [INFO][5368] ipam/ipam.go 158: Attempting to load block cidr=192.168.122.64/26 host="ip-172-31-28-215" Jan 21 01:02:58.373467 containerd[1962]: 2026-01-21 01:02:58.269 [INFO][5368] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.122.64/26 host="ip-172-31-28-215" Jan 21 01:02:58.373467 containerd[1962]: 2026-01-21 01:02:58.269 [INFO][5368] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.122.64/26 handle="k8s-pod-network.acc78088e14daa0abce11dccdca384aa40f5e182cb07bb8397fda904bdf6c8f7" host="ip-172-31-28-215" Jan 21 01:02:58.373467 containerd[1962]: 2026-01-21 01:02:58.274 [INFO][5368] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.acc78088e14daa0abce11dccdca384aa40f5e182cb07bb8397fda904bdf6c8f7 Jan 21 01:02:58.373467 containerd[1962]: 2026-01-21 01:02:58.286 [INFO][5368] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.122.64/26 handle="k8s-pod-network.acc78088e14daa0abce11dccdca384aa40f5e182cb07bb8397fda904bdf6c8f7" host="ip-172-31-28-215" Jan 21 01:02:58.373467 containerd[1962]: 2026-01-21 01:02:58.308 [INFO][5368] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.122.71/26] block=192.168.122.64/26 handle="k8s-pod-network.acc78088e14daa0abce11dccdca384aa40f5e182cb07bb8397fda904bdf6c8f7" host="ip-172-31-28-215" Jan 21 01:02:58.373467 containerd[1962]: 2026-01-21 01:02:58.309 [INFO][5368] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.122.71/26] handle="k8s-pod-network.acc78088e14daa0abce11dccdca384aa40f5e182cb07bb8397fda904bdf6c8f7" host="ip-172-31-28-215" Jan 21 01:02:58.373467 containerd[1962]: 2026-01-21 01:02:58.309 [INFO][5368] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 21 01:02:58.373467 containerd[1962]: 2026-01-21 01:02:58.309 [INFO][5368] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.122.71/26] IPv6=[] ContainerID="acc78088e14daa0abce11dccdca384aa40f5e182cb07bb8397fda904bdf6c8f7" HandleID="k8s-pod-network.acc78088e14daa0abce11dccdca384aa40f5e182cb07bb8397fda904bdf6c8f7" Workload="ip--172--31--28--215-k8s-calico--apiserver--95fb8fc69--kstsq-eth0" Jan 21 01:02:58.375143 containerd[1962]: 2026-01-21 01:02:58.316 [INFO][5357] cni-plugin/k8s.go 418: Populated endpoint ContainerID="acc78088e14daa0abce11dccdca384aa40f5e182cb07bb8397fda904bdf6c8f7" Namespace="calico-apiserver" Pod="calico-apiserver-95fb8fc69-kstsq" WorkloadEndpoint="ip--172--31--28--215-k8s-calico--apiserver--95fb8fc69--kstsq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--215-k8s-calico--apiserver--95fb8fc69--kstsq-eth0", GenerateName:"calico-apiserver-95fb8fc69-", Namespace:"calico-apiserver", SelfLink:"", UID:"35190377-3d75-48f7-8c27-98f24edff14f", ResourceVersion:"874", Generation:0, CreationTimestamp:time.Date(2026, time.January, 21, 1, 2, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"95fb8fc69", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-215", ContainerID:"", Pod:"calico-apiserver-95fb8fc69-kstsq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.122.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliadd37fcdaa3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 21 01:02:58.375143 containerd[1962]: 2026-01-21 01:02:58.316 [INFO][5357] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.122.71/32] ContainerID="acc78088e14daa0abce11dccdca384aa40f5e182cb07bb8397fda904bdf6c8f7" Namespace="calico-apiserver" Pod="calico-apiserver-95fb8fc69-kstsq" WorkloadEndpoint="ip--172--31--28--215-k8s-calico--apiserver--95fb8fc69--kstsq-eth0" Jan 21 01:02:58.375143 containerd[1962]: 2026-01-21 01:02:58.316 [INFO][5357] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliadd37fcdaa3 ContainerID="acc78088e14daa0abce11dccdca384aa40f5e182cb07bb8397fda904bdf6c8f7" Namespace="calico-apiserver" Pod="calico-apiserver-95fb8fc69-kstsq" WorkloadEndpoint="ip--172--31--28--215-k8s-calico--apiserver--95fb8fc69--kstsq-eth0" Jan 21 01:02:58.375143 containerd[1962]: 2026-01-21 01:02:58.330 [INFO][5357] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="acc78088e14daa0abce11dccdca384aa40f5e182cb07bb8397fda904bdf6c8f7" Namespace="calico-apiserver" Pod="calico-apiserver-95fb8fc69-kstsq" WorkloadEndpoint="ip--172--31--28--215-k8s-calico--apiserver--95fb8fc69--kstsq-eth0" Jan 21 01:02:58.375143 containerd[1962]: 2026-01-21 01:02:58.333 [INFO][5357] cni-plugin/k8s.go 446: Added Mac, interface name, and 
active container ID to endpoint ContainerID="acc78088e14daa0abce11dccdca384aa40f5e182cb07bb8397fda904bdf6c8f7" Namespace="calico-apiserver" Pod="calico-apiserver-95fb8fc69-kstsq" WorkloadEndpoint="ip--172--31--28--215-k8s-calico--apiserver--95fb8fc69--kstsq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--215-k8s-calico--apiserver--95fb8fc69--kstsq-eth0", GenerateName:"calico-apiserver-95fb8fc69-", Namespace:"calico-apiserver", SelfLink:"", UID:"35190377-3d75-48f7-8c27-98f24edff14f", ResourceVersion:"874", Generation:0, CreationTimestamp:time.Date(2026, time.January, 21, 1, 2, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"95fb8fc69", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-215", ContainerID:"acc78088e14daa0abce11dccdca384aa40f5e182cb07bb8397fda904bdf6c8f7", Pod:"calico-apiserver-95fb8fc69-kstsq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.122.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliadd37fcdaa3", MAC:"3e:13:bc:6a:31:9f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 21 01:02:58.375143 containerd[1962]: 2026-01-21 01:02:58.368 [INFO][5357] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="acc78088e14daa0abce11dccdca384aa40f5e182cb07bb8397fda904bdf6c8f7" Namespace="calico-apiserver" Pod="calico-apiserver-95fb8fc69-kstsq" WorkloadEndpoint="ip--172--31--28--215-k8s-calico--apiserver--95fb8fc69--kstsq-eth0" Jan 21 01:02:58.454341 containerd[1962]: time="2026-01-21T01:02:58.452630480Z" level=info msg="connecting to shim acc78088e14daa0abce11dccdca384aa40f5e182cb07bb8397fda904bdf6c8f7" address="unix:///run/containerd/s/b7059ad9872791e2cb05851a437c881f937dc96df0ceb60f3e4956dab8884a2c" namespace=k8s.io protocol=ttrpc version=3 Jan 21 01:02:58.521265 kernel: kauditd_printk_skb: 366 callbacks suppressed Jan 21 01:02:58.521404 kernel: audit: type=1325 audit(1768957378.517:727): table=filter:133 family=2 entries=57 op=nft_register_chain pid=5419 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 21 01:02:58.517000 audit[5419]: NETFILTER_CFG table=filter:133 family=2 entries=57 op=nft_register_chain pid=5419 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 21 01:02:58.517000 audit[5419]: SYSCALL arch=c000003e syscall=46 success=yes exit=27828 a0=3 a1=7ffc8e181850 a2=0 a3=7ffc8e18183c items=0 ppid=4848 pid=5419 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:58.524899 kernel: audit: type=1300 audit(1768957378.517:727): arch=c000003e syscall=46 success=yes exit=27828 a0=3 a1=7ffc8e181850 a2=0 a3=7ffc8e18183c items=0 ppid=4848 pid=5419 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:58.530689 kernel: audit: type=1327 audit(1768957378.517:727): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 21 01:02:58.517000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 21 01:02:58.535513 systemd[1]: Started cri-containerd-acc78088e14daa0abce11dccdca384aa40f5e182cb07bb8397fda904bdf6c8f7.scope - libcontainer container acc78088e14daa0abce11dccdca384aa40f5e182cb07bb8397fda904bdf6c8f7. Jan 21 01:02:58.556000 audit: BPF prog-id=248 op=LOAD Jan 21 01:02:58.560248 kernel: audit: type=1334 audit(1768957378.556:728): prog-id=248 op=LOAD Jan 21 01:02:58.559000 audit: BPF prog-id=249 op=LOAD Jan 21 01:02:58.559000 audit[5404]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=5393 pid=5404 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:58.567356 kernel: audit: type=1334 audit(1768957378.559:729): prog-id=249 op=LOAD Jan 21 01:02:58.567491 kernel: audit: type=1300 audit(1768957378.559:729): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=5393 pid=5404 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:58.559000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6163633738303838653134646161306162636531316463636463613338 Jan 21 01:02:58.572618 kernel: audit: type=1327 audit(1768957378.559:729): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6163633738303838653134646161306162636531316463636463613338 Jan 21 01:02:58.559000 audit: BPF prog-id=249 op=UNLOAD Jan 21 01:02:58.574042 kernel: audit: type=1334 audit(1768957378.559:730): prog-id=249 op=UNLOAD Jan 21 01:02:58.576301 kernel: audit: type=1300 audit(1768957378.559:730): arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=5393 pid=5404 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:58.559000 audit[5404]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=5393 pid=5404 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:58.584502 kernel: audit: type=1327 audit(1768957378.559:730): 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6163633738303838653134646161306162636531316463636463613338 Jan 21 01:02:58.559000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6163633738303838653134646161306162636531316463636463613338 Jan 21 01:02:58.559000 audit: BPF prog-id=250 op=LOAD Jan 21 01:02:58.559000 audit[5404]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=5393 pid=5404 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:58.559000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6163633738303838653134646161306162636531316463636463613338 Jan 21 01:02:58.559000 audit: BPF prog-id=251 op=LOAD Jan 21 01:02:58.559000 audit[5404]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=5393 pid=5404 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:58.559000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6163633738303838653134646161306162636531316463636463613338 Jan 21 01:02:58.559000 audit: BPF prog-id=251 op=UNLOAD Jan 21 01:02:58.559000 audit[5404]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=5393 pid=5404 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:58.559000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6163633738303838653134646161306162636531316463636463613338 Jan 21 01:02:58.559000 audit: BPF prog-id=250 op=UNLOAD Jan 21 01:02:58.559000 audit[5404]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=5393 pid=5404 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:58.559000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6163633738303838653134646161306162636531316463636463613338 Jan 21 01:02:58.559000 audit: BPF prog-id=252 op=LOAD Jan 21 01:02:58.559000 audit[5404]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=5393 pid=5404 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:58.559000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6163633738303838653134646161306162636531316463636463613338 Jan 21 01:02:58.623708 containerd[1962]: time="2026-01-21T01:02:58.623658014Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-95fb8fc69-kstsq,Uid:35190377-3d75-48f7-8c27-98f24edff14f,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"acc78088e14daa0abce11dccdca384aa40f5e182cb07bb8397fda904bdf6c8f7\"" Jan 21 01:02:58.625957 containerd[1962]: time="2026-01-21T01:02:58.625806017Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 21 01:02:58.897714 containerd[1962]: time="2026-01-21T01:02:58.897663189Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 21 01:02:58.899944 containerd[1962]: time="2026-01-21T01:02:58.899842707Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 21 01:02:58.899944 containerd[1962]: time="2026-01-21T01:02:58.899882896Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 21 01:02:58.900240 kubelet[3468]: E0121 01:02:58.900154 3468 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 21 01:02:58.900240 kubelet[3468]: E0121 01:02:58.900231 3468 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 21 01:02:58.900597 kubelet[3468]: E0121 01:02:58.900356 3468 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c6nw7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-95fb8fc69-kstsq_calico-apiserver(35190377-3d75-48f7-8c27-98f24edff14f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 21 01:02:58.901792 kubelet[3468]: E0121 01:02:58.901732 3468 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-95fb8fc69-kstsq" podUID="35190377-3d75-48f7-8c27-98f24edff14f" Jan 21 01:02:59.131715 containerd[1962]: time="2026-01-21T01:02:59.131669827Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9xsfz,Uid:76d94f9b-071e-45b7-9881-314d22adc37f,Namespace:calico-system,Attempt:0,}" Jan 21 01:02:59.272637 systemd-networkd[1555]: cali06ad78817f6: Link UP Jan 21 01:02:59.273787 systemd-networkd[1555]: cali06ad78817f6: Gained carrier Jan 21 01:02:59.302817 containerd[1962]: 2026-01-21 01:02:59.179 [INFO][5434] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--28--215-k8s-csi--node--driver--9xsfz-eth0 csi-node-driver- calico-system 76d94f9b-071e-45b7-9881-314d22adc37f 744 0 2026-01-21 01:02:18 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 
projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-28-215 csi-node-driver-9xsfz eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali06ad78817f6 [] [] }} ContainerID="4ce799b9b02124826f58fb8caad628cb5f6ad6097279a224176995fef53039d4" Namespace="calico-system" Pod="csi-node-driver-9xsfz" WorkloadEndpoint="ip--172--31--28--215-k8s-csi--node--driver--9xsfz-" Jan 21 01:02:59.302817 containerd[1962]: 2026-01-21 01:02:59.179 [INFO][5434] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4ce799b9b02124826f58fb8caad628cb5f6ad6097279a224176995fef53039d4" Namespace="calico-system" Pod="csi-node-driver-9xsfz" WorkloadEndpoint="ip--172--31--28--215-k8s-csi--node--driver--9xsfz-eth0" Jan 21 01:02:59.302817 containerd[1962]: 2026-01-21 01:02:59.226 [INFO][5445] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4ce799b9b02124826f58fb8caad628cb5f6ad6097279a224176995fef53039d4" HandleID="k8s-pod-network.4ce799b9b02124826f58fb8caad628cb5f6ad6097279a224176995fef53039d4" Workload="ip--172--31--28--215-k8s-csi--node--driver--9xsfz-eth0" Jan 21 01:02:59.302817 containerd[1962]: 2026-01-21 01:02:59.226 [INFO][5445] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="4ce799b9b02124826f58fb8caad628cb5f6ad6097279a224176995fef53039d4" HandleID="k8s-pod-network.4ce799b9b02124826f58fb8caad628cb5f6ad6097279a224176995fef53039d4" Workload="ip--172--31--28--215-k8s-csi--node--driver--9xsfz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d4fe0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-28-215", "pod":"csi-node-driver-9xsfz", "timestamp":"2026-01-21 01:02:59.226390018 +0000 UTC"}, Hostname:"ip-172-31-28-215", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 21 01:02:59.302817 containerd[1962]: 2026-01-21 01:02:59.226 [INFO][5445] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 21 01:02:59.302817 containerd[1962]: 2026-01-21 01:02:59.226 [INFO][5445] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 21 01:02:59.302817 containerd[1962]: 2026-01-21 01:02:59.226 [INFO][5445] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-28-215' Jan 21 01:02:59.302817 containerd[1962]: 2026-01-21 01:02:59.233 [INFO][5445] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4ce799b9b02124826f58fb8caad628cb5f6ad6097279a224176995fef53039d4" host="ip-172-31-28-215" Jan 21 01:02:59.302817 containerd[1962]: 2026-01-21 01:02:59.239 [INFO][5445] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-28-215" Jan 21 01:02:59.302817 containerd[1962]: 2026-01-21 01:02:59.244 [INFO][5445] ipam/ipam.go 511: Trying affinity for 192.168.122.64/26 host="ip-172-31-28-215" Jan 21 01:02:59.302817 containerd[1962]: 2026-01-21 01:02:59.247 [INFO][5445] ipam/ipam.go 158: Attempting to load block cidr=192.168.122.64/26 host="ip-172-31-28-215" Jan 21 01:02:59.302817 containerd[1962]: 2026-01-21 01:02:59.250 [INFO][5445] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.122.64/26 host="ip-172-31-28-215" Jan 21 01:02:59.302817 containerd[1962]: 2026-01-21 01:02:59.250 [INFO][5445] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.122.64/26 handle="k8s-pod-network.4ce799b9b02124826f58fb8caad628cb5f6ad6097279a224176995fef53039d4" host="ip-172-31-28-215" Jan 21 01:02:59.302817 containerd[1962]: 2026-01-21 01:02:59.252 [INFO][5445] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.4ce799b9b02124826f58fb8caad628cb5f6ad6097279a224176995fef53039d4 Jan 21 01:02:59.302817 containerd[1962]: 2026-01-21 01:02:59.257 [INFO][5445] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.122.64/26 handle="k8s-pod-network.4ce799b9b02124826f58fb8caad628cb5f6ad6097279a224176995fef53039d4" host="ip-172-31-28-215" Jan 21 01:02:59.302817 containerd[1962]: 2026-01-21 01:02:59.265 [INFO][5445] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.122.72/26] block=192.168.122.64/26 handle="k8s-pod-network.4ce799b9b02124826f58fb8caad628cb5f6ad6097279a224176995fef53039d4" host="ip-172-31-28-215" Jan 21 01:02:59.302817 containerd[1962]: 2026-01-21 01:02:59.265 [INFO][5445] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.122.72/26] handle="k8s-pod-network.4ce799b9b02124826f58fb8caad628cb5f6ad6097279a224176995fef53039d4" host="ip-172-31-28-215" Jan 21 01:02:59.302817 containerd[1962]: 2026-01-21 01:02:59.265 [INFO][5445] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 21 01:02:59.302817 containerd[1962]: 2026-01-21 01:02:59.265 [INFO][5445] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.122.72/26] IPv6=[] ContainerID="4ce799b9b02124826f58fb8caad628cb5f6ad6097279a224176995fef53039d4" HandleID="k8s-pod-network.4ce799b9b02124826f58fb8caad628cb5f6ad6097279a224176995fef53039d4" Workload="ip--172--31--28--215-k8s-csi--node--driver--9xsfz-eth0" Jan 21 01:02:59.305517 containerd[1962]: 2026-01-21 01:02:59.268 [INFO][5434] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4ce799b9b02124826f58fb8caad628cb5f6ad6097279a224176995fef53039d4" Namespace="calico-system" Pod="csi-node-driver-9xsfz" WorkloadEndpoint="ip--172--31--28--215-k8s-csi--node--driver--9xsfz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--215-k8s-csi--node--driver--9xsfz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"76d94f9b-071e-45b7-9881-314d22adc37f", ResourceVersion:"744", Generation:0, CreationTimestamp:time.Date(2026, time.January, 21, 1, 2, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-215", ContainerID:"", Pod:"csi-node-driver-9xsfz", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.122.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali06ad78817f6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 21 01:02:59.305517 containerd[1962]: 2026-01-21 01:02:59.268 [INFO][5434] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.122.72/32] ContainerID="4ce799b9b02124826f58fb8caad628cb5f6ad6097279a224176995fef53039d4" Namespace="calico-system" Pod="csi-node-driver-9xsfz" WorkloadEndpoint="ip--172--31--28--215-k8s-csi--node--driver--9xsfz-eth0" Jan 21 01:02:59.305517 containerd[1962]: 2026-01-21 01:02:59.268 [INFO][5434] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali06ad78817f6 ContainerID="4ce799b9b02124826f58fb8caad628cb5f6ad6097279a224176995fef53039d4" Namespace="calico-system" Pod="csi-node-driver-9xsfz" WorkloadEndpoint="ip--172--31--28--215-k8s-csi--node--driver--9xsfz-eth0" Jan 21 01:02:59.305517 containerd[1962]: 2026-01-21 01:02:59.274 [INFO][5434] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4ce799b9b02124826f58fb8caad628cb5f6ad6097279a224176995fef53039d4" Namespace="calico-system" Pod="csi-node-driver-9xsfz" WorkloadEndpoint="ip--172--31--28--215-k8s-csi--node--driver--9xsfz-eth0" Jan 21 01:02:59.305517 containerd[1962]: 2026-01-21 01:02:59.274 [INFO][5434] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4ce799b9b02124826f58fb8caad628cb5f6ad6097279a224176995fef53039d4" 
Namespace="calico-system" Pod="csi-node-driver-9xsfz" WorkloadEndpoint="ip--172--31--28--215-k8s-csi--node--driver--9xsfz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--215-k8s-csi--node--driver--9xsfz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"76d94f9b-071e-45b7-9881-314d22adc37f", ResourceVersion:"744", Generation:0, CreationTimestamp:time.Date(2026, time.January, 21, 1, 2, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-215", ContainerID:"4ce799b9b02124826f58fb8caad628cb5f6ad6097279a224176995fef53039d4", Pod:"csi-node-driver-9xsfz", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.122.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali06ad78817f6", MAC:"42:3e:fc:bc:24:12", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 21 01:02:59.305517 containerd[1962]: 2026-01-21 01:02:59.297 [INFO][5434] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4ce799b9b02124826f58fb8caad628cb5f6ad6097279a224176995fef53039d4" Namespace="calico-system" Pod="csi-node-driver-9xsfz" WorkloadEndpoint="ip--172--31--28--215-k8s-csi--node--driver--9xsfz-eth0" Jan 21 01:02:59.323000 audit[5459]: NETFILTER_CFG table=filter:134 family=2 entries=60 op=nft_register_chain pid=5459 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 21 01:02:59.323000 audit[5459]: SYSCALL arch=c000003e syscall=46 success=yes exit=26704 a0=3 a1=7ffd6e1c6060 a2=0 a3=7ffd6e1c604c items=0 ppid=4848 pid=5459 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:59.323000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 21 01:02:59.340425 containerd[1962]: time="2026-01-21T01:02:59.340379852Z" level=info msg="connecting to shim 4ce799b9b02124826f58fb8caad628cb5f6ad6097279a224176995fef53039d4" address="unix:///run/containerd/s/0d6a3bab6b5e3cea4edf09e9cf44cf2483e738562b7dbbddf49b1221a038d10f" namespace=k8s.io protocol=ttrpc version=3 Jan 21 01:02:59.368608 systemd[1]: Started cri-containerd-4ce799b9b02124826f58fb8caad628cb5f6ad6097279a224176995fef53039d4.scope - libcontainer container 4ce799b9b02124826f58fb8caad628cb5f6ad6097279a224176995fef53039d4. 
Jan 21 01:02:59.381000 audit: BPF prog-id=253 op=LOAD Jan 21 01:02:59.381000 audit: BPF prog-id=254 op=LOAD Jan 21 01:02:59.381000 audit[5480]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000138238 a2=98 a3=0 items=0 ppid=5469 pid=5480 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:59.381000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3463653739396239623032313234383236663538666238636161643632 Jan 21 01:02:59.381000 audit: BPF prog-id=254 op=UNLOAD Jan 21 01:02:59.381000 audit[5480]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5469 pid=5480 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:59.381000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3463653739396239623032313234383236663538666238636161643632 Jan 21 01:02:59.381000 audit: BPF prog-id=255 op=LOAD Jan 21 01:02:59.381000 audit[5480]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000138488 a2=98 a3=0 items=0 ppid=5469 pid=5480 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:59.381000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3463653739396239623032313234383236663538666238636161643632 Jan 21 01:02:59.381000 audit: BPF prog-id=256 op=LOAD Jan 21 01:02:59.381000 audit[5480]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000138218 a2=98 a3=0 items=0 ppid=5469 pid=5480 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:59.381000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3463653739396239623032313234383236663538666238636161643632 Jan 21 01:02:59.381000 audit: BPF prog-id=256 op=UNLOAD Jan 21 01:02:59.381000 audit[5480]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5469 pid=5480 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:59.381000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3463653739396239623032313234383236663538666238636161643632 Jan 21 01:02:59.381000 audit: BPF prog-id=255 op=UNLOAD Jan 21 01:02:59.381000 audit[5480]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5469 pid=5480 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:59.381000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3463653739396239623032313234383236663538666238636161643632 Jan 21 01:02:59.381000 audit: BPF prog-id=257 op=LOAD Jan 21 01:02:59.381000 audit[5480]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001386e8 a2=98 a3=0 items=0 ppid=5469 pid=5480 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:59.381000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3463653739396239623032313234383236663538666238636161643632 Jan 21 01:02:59.403033 containerd[1962]: time="2026-01-21T01:02:59.402995854Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9xsfz,Uid:76d94f9b-071e-45b7-9881-314d22adc37f,Namespace:calico-system,Attempt:0,} returns sandbox id \"4ce799b9b02124826f58fb8caad628cb5f6ad6097279a224176995fef53039d4\"" Jan 21 01:02:59.406339 containerd[1962]: time="2026-01-21T01:02:59.405119698Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 21 01:02:59.662271 containerd[1962]: time="2026-01-21T01:02:59.661728175Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 21 01:02:59.663885 containerd[1962]: time="2026-01-21T01:02:59.663838681Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 21 01:02:59.663985 containerd[1962]: time="2026-01-21T01:02:59.663944285Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 21 01:02:59.664577 kubelet[3468]: E0121 01:02:59.664174 3468 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 21 01:02:59.664577 kubelet[3468]: E0121 01:02:59.664237 3468 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 21 01:02:59.664577 kubelet[3468]: E0121 01:02:59.664351 3468 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z7shk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-9xsfz_calico-system(76d94f9b-071e-45b7-9881-314d22adc37f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 21 01:02:59.666633 containerd[1962]: time="2026-01-21T01:02:59.666603611Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 21 01:02:59.899040 kubelet[3468]: E0121 01:02:59.898995 3468 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-95fb8fc69-kstsq" podUID="35190377-3d75-48f7-8c27-98f24edff14f" Jan 21 01:02:59.936000 audit[5506]: NETFILTER_CFG table=filter:135 family=2 entries=20 op=nft_register_rule pid=5506 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 01:02:59.936000 audit[5506]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffc883ebd10 a2=0 a3=7ffc883ebcfc items=0 ppid=3578 pid=5506 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:59.936000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 01:02:59.939000 audit[5506]: NETFILTER_CFG table=nat:136 family=2 entries=14 op=nft_register_rule 
pid=5506 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 01:02:59.939000 audit[5506]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffc883ebd10 a2=0 a3=0 items=0 ppid=3578 pid=5506 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:59.939000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 01:02:59.951641 containerd[1962]: time="2026-01-21T01:02:59.951589385Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 21 01:02:59.953825 containerd[1962]: time="2026-01-21T01:02:59.953778316Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 21 01:02:59.954017 containerd[1962]: time="2026-01-21T01:02:59.953839642Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 21 01:02:59.954190 kubelet[3468]: E0121 01:02:59.954138 3468 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 21 01:02:59.954776 kubelet[3468]: E0121 01:02:59.954190 3468 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 21 01:02:59.954776 kubelet[3468]: E0121 01:02:59.954435 3468 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z7shk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-9xsfz_calico-system(76d94f9b-071e-45b7-9881-314d22adc37f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 21 01:02:59.956316 kubelet[3468]: E0121 01:02:59.955615 3468 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9xsfz" podUID="76d94f9b-071e-45b7-9881-314d22adc37f" Jan 21 01:03:00.282380 systemd-networkd[1555]: caliadd37fcdaa3: Gained IPv6LL Jan 21 01:03:00.474758 systemd-networkd[1555]: cali06ad78817f6: Gained IPv6LL Jan 21 01:03:00.903847 kubelet[3468]: E0121 01:03:00.903806 3468 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9xsfz" podUID="76d94f9b-071e-45b7-9881-314d22adc37f" Jan 21 01:03:01.133535 containerd[1962]: time="2026-01-21T01:03:01.133465030Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 21 01:03:01.418441 containerd[1962]: time="2026-01-21T01:03:01.418387171Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 21 01:03:01.421972 containerd[1962]: time="2026-01-21T01:03:01.421897186Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 21 01:03:01.422133 containerd[1962]: time="2026-01-21T01:03:01.422023196Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 21 01:03:01.422338 kubelet[3468]: E0121 01:03:01.422288 3468 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 21 01:03:01.422768 kubelet[3468]: E0121 01:03:01.422355 3468 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 21 01:03:01.422768 kubelet[3468]: E0121 01:03:01.422519 3468 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-98vll,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6b9ccb8fff-8h9ht_calico-apiserver(bec76da0-2399-4e6c-952b-9dc7ad88a302): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 21 01:03:01.428158 kubelet[3468]: E0121 01:03:01.426131 3468 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b9ccb8fff-8h9ht" podUID="bec76da0-2399-4e6c-952b-9dc7ad88a302" Jan 21 01:03:01.956000 audit[5515]: NETFILTER_CFG table=filter:137 family=2 entries=17 op=nft_register_rule pid=5515 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 01:03:01.956000 audit[5515]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fffdfc21140 a2=0 a3=7fffdfc2112c items=0 ppid=3578 pid=5515 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:03:01.956000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 01:03:01.961000 audit[5515]: NETFILTER_CFG table=nat:138 family=2 entries=35 op=nft_register_chain pid=5515 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 01:03:01.961000 audit[5515]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7fffdfc21140 a2=0 a3=7fffdfc2112c items=0 ppid=3578 pid=5515 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:03:01.961000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 01:03:02.135783 containerd[1962]: time="2026-01-21T01:03:02.135728377Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-hfnnv,Uid:50c0e75e-1727-4fb1-ab60-7bbb5f8cfadf,Namespace:kube-system,Attempt:0,}" Jan 21 01:03:02.136505 containerd[1962]: time="2026-01-21T01:03:02.136406305Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 21 01:03:02.355018 systemd-networkd[1555]: cali7a5ab852aa4: Link UP Jan 21 01:03:02.357096 
systemd-networkd[1555]: cali7a5ab852aa4: Gained carrier Jan 21 01:03:02.366053 (udev-worker)[5536]: Network interface NamePolicy= disabled on kernel command line. Jan 21 01:03:02.390670 containerd[1962]: 2026-01-21 01:03:02.209 [INFO][5516] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--28--215-k8s-coredns--668d6bf9bc--hfnnv-eth0 coredns-668d6bf9bc- kube-system 50c0e75e-1727-4fb1-ab60-7bbb5f8cfadf 861 0 2026-01-21 01:01:31 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-28-215 coredns-668d6bf9bc-hfnnv eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali7a5ab852aa4 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="de89e8b1326e5f69a0ee86cd71f8a8add02b62e8c750c005b9cfd46332c38a29" Namespace="kube-system" Pod="coredns-668d6bf9bc-hfnnv" WorkloadEndpoint="ip--172--31--28--215-k8s-coredns--668d6bf9bc--hfnnv-" Jan 21 01:03:02.390670 containerd[1962]: 2026-01-21 01:03:02.209 [INFO][5516] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="de89e8b1326e5f69a0ee86cd71f8a8add02b62e8c750c005b9cfd46332c38a29" Namespace="kube-system" Pod="coredns-668d6bf9bc-hfnnv" WorkloadEndpoint="ip--172--31--28--215-k8s-coredns--668d6bf9bc--hfnnv-eth0" Jan 21 01:03:02.390670 containerd[1962]: 2026-01-21 01:03:02.251 [INFO][5528] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="de89e8b1326e5f69a0ee86cd71f8a8add02b62e8c750c005b9cfd46332c38a29" HandleID="k8s-pod-network.de89e8b1326e5f69a0ee86cd71f8a8add02b62e8c750c005b9cfd46332c38a29" Workload="ip--172--31--28--215-k8s-coredns--668d6bf9bc--hfnnv-eth0" Jan 21 01:03:02.390670 containerd[1962]: 2026-01-21 01:03:02.251 [INFO][5528] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="de89e8b1326e5f69a0ee86cd71f8a8add02b62e8c750c005b9cfd46332c38a29" HandleID="k8s-pod-network.de89e8b1326e5f69a0ee86cd71f8a8add02b62e8c750c005b9cfd46332c38a29" Workload="ip--172--31--28--215-k8s-coredns--668d6bf9bc--hfnnv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f2a0), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-28-215", "pod":"coredns-668d6bf9bc-hfnnv", "timestamp":"2026-01-21 01:03:02.251653193 +0000 UTC"}, Hostname:"ip-172-31-28-215", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 21 01:03:02.390670 containerd[1962]: 2026-01-21 01:03:02.251 [INFO][5528] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 21 01:03:02.390670 containerd[1962]: 2026-01-21 01:03:02.251 [INFO][5528] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 21 01:03:02.390670 containerd[1962]: 2026-01-21 01:03:02.251 [INFO][5528] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-28-215' Jan 21 01:03:02.390670 containerd[1962]: 2026-01-21 01:03:02.260 [INFO][5528] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.de89e8b1326e5f69a0ee86cd71f8a8add02b62e8c750c005b9cfd46332c38a29" host="ip-172-31-28-215" Jan 21 01:03:02.390670 containerd[1962]: 2026-01-21 01:03:02.267 [INFO][5528] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-28-215" Jan 21 01:03:02.390670 containerd[1962]: 2026-01-21 01:03:02.280 [INFO][5528] ipam/ipam.go 511: Trying affinity for 192.168.122.64/26 host="ip-172-31-28-215" Jan 21 01:03:02.390670 containerd[1962]: 2026-01-21 01:03:02.303 [INFO][5528] ipam/ipam.go 158: Attempting to load block cidr=192.168.122.64/26 host="ip-172-31-28-215" Jan 21 01:03:02.390670 containerd[1962]: 2026-01-21 01:03:02.325 [INFO][5528] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.122.64/26 host="ip-172-31-28-215" Jan 21 01:03:02.390670 containerd[1962]: 2026-01-21 01:03:02.325 [INFO][5528] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.122.64/26 handle="k8s-pod-network.de89e8b1326e5f69a0ee86cd71f8a8add02b62e8c750c005b9cfd46332c38a29" host="ip-172-31-28-215" Jan 21 01:03:02.390670 containerd[1962]: 2026-01-21 01:03:02.329 [INFO][5528] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.de89e8b1326e5f69a0ee86cd71f8a8add02b62e8c750c005b9cfd46332c38a29 Jan 21 01:03:02.390670 containerd[1962]: 2026-01-21 01:03:02.334 [INFO][5528] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.122.64/26 handle="k8s-pod-network.de89e8b1326e5f69a0ee86cd71f8a8add02b62e8c750c005b9cfd46332c38a29" host="ip-172-31-28-215" Jan 21 01:03:02.390670 containerd[1962]: 2026-01-21 01:03:02.347 [INFO][5528] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.122.73/26] block=192.168.122.64/26 handle="k8s-pod-network.de89e8b1326e5f69a0ee86cd71f8a8add02b62e8c750c005b9cfd46332c38a29" host="ip-172-31-28-215" Jan 21 01:03:02.390670 containerd[1962]: 2026-01-21 01:03:02.347 [INFO][5528] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.122.73/26] handle="k8s-pod-network.de89e8b1326e5f69a0ee86cd71f8a8add02b62e8c750c005b9cfd46332c38a29" host="ip-172-31-28-215" Jan 21 01:03:02.390670 containerd[1962]: 2026-01-21 01:03:02.347 [INFO][5528] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 21 01:03:02.390670 containerd[1962]: 2026-01-21 01:03:02.348 [INFO][5528] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.122.73/26] IPv6=[] ContainerID="de89e8b1326e5f69a0ee86cd71f8a8add02b62e8c750c005b9cfd46332c38a29" HandleID="k8s-pod-network.de89e8b1326e5f69a0ee86cd71f8a8add02b62e8c750c005b9cfd46332c38a29" Workload="ip--172--31--28--215-k8s-coredns--668d6bf9bc--hfnnv-eth0" Jan 21 01:03:02.392153 containerd[1962]: 2026-01-21 01:03:02.350 [INFO][5516] cni-plugin/k8s.go 418: Populated endpoint ContainerID="de89e8b1326e5f69a0ee86cd71f8a8add02b62e8c750c005b9cfd46332c38a29" Namespace="kube-system" Pod="coredns-668d6bf9bc-hfnnv" WorkloadEndpoint="ip--172--31--28--215-k8s-coredns--668d6bf9bc--hfnnv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--215-k8s-coredns--668d6bf9bc--hfnnv-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"50c0e75e-1727-4fb1-ab60-7bbb5f8cfadf", ResourceVersion:"861", Generation:0, CreationTimestamp:time.Date(2026, time.January, 21, 1, 1, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-215", ContainerID:"", Pod:"coredns-668d6bf9bc-hfnnv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.122.73/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7a5ab852aa4", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 21 01:03:02.392153 containerd[1962]: 2026-01-21 01:03:02.350 [INFO][5516] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.122.73/32] ContainerID="de89e8b1326e5f69a0ee86cd71f8a8add02b62e8c750c005b9cfd46332c38a29" Namespace="kube-system" Pod="coredns-668d6bf9bc-hfnnv" WorkloadEndpoint="ip--172--31--28--215-k8s-coredns--668d6bf9bc--hfnnv-eth0" Jan 21 01:03:02.392153 containerd[1962]: 2026-01-21 01:03:02.350 [INFO][5516] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7a5ab852aa4 ContainerID="de89e8b1326e5f69a0ee86cd71f8a8add02b62e8c750c005b9cfd46332c38a29" Namespace="kube-system" Pod="coredns-668d6bf9bc-hfnnv" WorkloadEndpoint="ip--172--31--28--215-k8s-coredns--668d6bf9bc--hfnnv-eth0" Jan 21 01:03:02.392153 containerd[1962]: 2026-01-21 01:03:02.353 [INFO][5516] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="de89e8b1326e5f69a0ee86cd71f8a8add02b62e8c750c005b9cfd46332c38a29" Namespace="kube-system" Pod="coredns-668d6bf9bc-hfnnv" 
WorkloadEndpoint="ip--172--31--28--215-k8s-coredns--668d6bf9bc--hfnnv-eth0" Jan 21 01:03:02.392153 containerd[1962]: 2026-01-21 01:03:02.354 [INFO][5516] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="de89e8b1326e5f69a0ee86cd71f8a8add02b62e8c750c005b9cfd46332c38a29" Namespace="kube-system" Pod="coredns-668d6bf9bc-hfnnv" WorkloadEndpoint="ip--172--31--28--215-k8s-coredns--668d6bf9bc--hfnnv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--215-k8s-coredns--668d6bf9bc--hfnnv-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"50c0e75e-1727-4fb1-ab60-7bbb5f8cfadf", ResourceVersion:"861", Generation:0, CreationTimestamp:time.Date(2026, time.January, 21, 1, 1, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-215", ContainerID:"de89e8b1326e5f69a0ee86cd71f8a8add02b62e8c750c005b9cfd46332c38a29", Pod:"coredns-668d6bf9bc-hfnnv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.122.73/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7a5ab852aa4", MAC:"0a:06:8e:56:d6:8e", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 21 01:03:02.392153 containerd[1962]: 2026-01-21 01:03:02.385 [INFO][5516] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="de89e8b1326e5f69a0ee86cd71f8a8add02b62e8c750c005b9cfd46332c38a29" Namespace="kube-system" Pod="coredns-668d6bf9bc-hfnnv" WorkloadEndpoint="ip--172--31--28--215-k8s-coredns--668d6bf9bc--hfnnv-eth0" Jan 21 01:03:02.411000 audit[5546]: NETFILTER_CFG table=filter:139 family=2 entries=60 op=nft_register_chain pid=5546 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 21 01:03:02.411000 audit[5546]: SYSCALL arch=c000003e syscall=46 success=yes exit=26284 a0=3 a1=7ffd273214d0 a2=0 a3=7ffd273214bc items=0 ppid=4848 pid=5546 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:03:02.411000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 21 01:03:02.441486 kubelet[3468]: E0121 01:03:02.435377 3468 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to 
pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 21 01:03:02.441486 kubelet[3468]: E0121 01:03:02.435434 3468 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 21 01:03:02.441486 kubelet[3468]: E0121 01:03:02.436027 3468 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pqxsf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-vhxxc_calico-system(b0fd67ff-2b5d-470f-b242-daa718038f97): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 21 01:03:02.441486 kubelet[3468]: E0121 01:03:02.438012 3468 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-vhxxc" podUID="b0fd67ff-2b5d-470f-b242-daa718038f97" Jan 21 01:03:02.442494 containerd[1962]: time="2026-01-21T01:03:02.431689620Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 21 01:03:02.442494 containerd[1962]: time="2026-01-21T01:03:02.433948249Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 21 01:03:02.442494 containerd[1962]: time="2026-01-21T01:03:02.434067271Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 21 01:03:02.451883 containerd[1962]: time="2026-01-21T01:03:02.451024745Z" level=info msg="connecting to shim de89e8b1326e5f69a0ee86cd71f8a8add02b62e8c750c005b9cfd46332c38a29" address="unix:///run/containerd/s/e5cffddf3a6cbd3b06f377ce7984f8ae608d1dadc3c75e9d01b6b54dbd85bdbb" namespace=k8s.io protocol=ttrpc version=3 Jan 21 01:03:02.526698 systemd[1]: Started cri-containerd-de89e8b1326e5f69a0ee86cd71f8a8add02b62e8c750c005b9cfd46332c38a29.scope - libcontainer container de89e8b1326e5f69a0ee86cd71f8a8add02b62e8c750c005b9cfd46332c38a29. 
Jan 21 01:03:02.558000 audit: BPF prog-id=258 op=LOAD Jan 21 01:03:02.559000 audit: BPF prog-id=259 op=LOAD Jan 21 01:03:02.559000 audit[5568]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000230238 a2=98 a3=0 items=0 ppid=5556 pid=5568 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:03:02.559000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6465383965386231333236653566363961306565383663643731663861 Jan 21 01:03:02.559000 audit: BPF prog-id=259 op=UNLOAD Jan 21 01:03:02.559000 audit[5568]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5556 pid=5568 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:03:02.559000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6465383965386231333236653566363961306565383663643731663861 Jan 21 01:03:02.560000 audit: BPF prog-id=260 op=LOAD Jan 21 01:03:02.560000 audit[5568]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000230488 a2=98 a3=0 items=0 ppid=5556 pid=5568 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:03:02.560000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6465383965386231333236653566363961306565383663643731663861 Jan 21 01:03:02.560000 audit: BPF prog-id=261 op=LOAD Jan 21 01:03:02.560000 audit[5568]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000230218 a2=98 a3=0 items=0 ppid=5556 pid=5568 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:03:02.560000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6465383965386231333236653566363961306565383663643731663861 Jan 21 01:03:02.561000 audit: BPF prog-id=261 op=UNLOAD Jan 21 01:03:02.561000 audit[5568]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5556 pid=5568 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:03:02.561000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6465383965386231333236653566363961306565383663643731663861 Jan 21 01:03:02.561000 audit: BPF prog-id=260 op=UNLOAD Jan 21 01:03:02.561000 audit[5568]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5556 pid=5568 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:03:02.561000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6465383965386231333236653566363961306565383663643731663861 Jan 21 01:03:02.562000 audit: BPF prog-id=262 op=LOAD Jan 21 01:03:02.562000 audit[5568]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0002306e8 a2=98 a3=0 items=0 ppid=5556 pid=5568 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:03:02.562000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6465383965386231333236653566363961306565383663643731663861 Jan 21 01:03:02.647843 containerd[1962]: time="2026-01-21T01:03:02.647652059Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-hfnnv,Uid:50c0e75e-1727-4fb1-ab60-7bbb5f8cfadf,Namespace:kube-system,Attempt:0,} returns sandbox id \"de89e8b1326e5f69a0ee86cd71f8a8add02b62e8c750c005b9cfd46332c38a29\"" Jan 21 01:03:02.656672 containerd[1962]: time="2026-01-21T01:03:02.656625945Z" level=info msg="CreateContainer within sandbox \"de89e8b1326e5f69a0ee86cd71f8a8add02b62e8c750c005b9cfd46332c38a29\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 21 01:03:02.682424 containerd[1962]: time="2026-01-21T01:03:02.682368243Z" level=info msg="Container 70d4304e84bf05b65239cd3b5ed99b5a0a71773eb805893ef5b6db9b42eddfa7: CDI devices from CRI Config.CDIDevices: []" Jan 21 01:03:02.695946 containerd[1962]: time="2026-01-21T01:03:02.695883705Z" level=info msg="CreateContainer within sandbox \"de89e8b1326e5f69a0ee86cd71f8a8add02b62e8c750c005b9cfd46332c38a29\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"70d4304e84bf05b65239cd3b5ed99b5a0a71773eb805893ef5b6db9b42eddfa7\"" Jan 21 01:03:02.696886 containerd[1962]: time="2026-01-21T01:03:02.696847362Z" level=info msg="StartContainer for \"70d4304e84bf05b65239cd3b5ed99b5a0a71773eb805893ef5b6db9b42eddfa7\"" Jan 21 01:03:02.699100 containerd[1962]: time="2026-01-21T01:03:02.698773486Z" level=info msg="connecting to shim 70d4304e84bf05b65239cd3b5ed99b5a0a71773eb805893ef5b6db9b42eddfa7" address="unix:///run/containerd/s/e5cffddf3a6cbd3b06f377ce7984f8ae608d1dadc3c75e9d01b6b54dbd85bdbb" protocol=ttrpc version=3 Jan 21 01:03:02.724561 systemd[1]: Started cri-containerd-70d4304e84bf05b65239cd3b5ed99b5a0a71773eb805893ef5b6db9b42eddfa7.scope - libcontainer container 70d4304e84bf05b65239cd3b5ed99b5a0a71773eb805893ef5b6db9b42eddfa7. 
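Annotation: the runc audit records above carry the process command line as a hex-encoded, NUL-separated PROCTITLE blob. A minimal standard-library sketch for reading such values back; the hex string used here is only a short prefix of one entry above, not a full value:

def decode_proctitle(hex_value):
    # audit emits argv as one hex blob with NUL bytes between arguments
    raw = bytes.fromhex(hex_value)
    return " ".join(arg.decode() for arg in raw.split(b"\x00") if arg)

print(decode_proctitle("72756E63002D2D726F6F74"))   # -> "runc --root"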
Jan 21 01:03:02.739000 audit: BPF prog-id=263 op=LOAD Jan 21 01:03:02.740000 audit: BPF prog-id=264 op=LOAD Jan 21 01:03:02.740000 audit[5597]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=5556 pid=5597 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:03:02.740000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3730643433303465383462663035623635323339636433623565643939 Jan 21 01:03:02.740000 audit: BPF prog-id=264 op=UNLOAD Jan 21 01:03:02.740000 audit[5597]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5556 pid=5597 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:03:02.740000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3730643433303465383462663035623635323339636433623565643939 Jan 21 01:03:02.740000 audit: BPF prog-id=265 op=LOAD Jan 21 01:03:02.740000 audit[5597]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=5556 pid=5597 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:03:02.740000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3730643433303465383462663035623635323339636433623565643939 Jan 21 01:03:02.740000 audit: BPF prog-id=266 op=LOAD Jan 21 01:03:02.740000 audit[5597]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=5556 pid=5597 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:03:02.740000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3730643433303465383462663035623635323339636433623565643939 Jan 21 01:03:02.741000 audit: BPF prog-id=266 op=UNLOAD Jan 21 01:03:02.741000 audit[5597]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5556 pid=5597 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:03:02.741000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3730643433303465383462663035623635323339636433623565643939 Jan 21 01:03:02.741000 audit: BPF prog-id=265 op=UNLOAD Jan 21 01:03:02.741000 audit[5597]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5556 pid=5597 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:03:02.741000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3730643433303465383462663035623635323339636433623565643939 Jan 21 01:03:02.741000 audit: BPF prog-id=267 op=LOAD Jan 21 01:03:02.741000 audit[5597]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=5556 pid=5597 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:03:02.741000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3730643433303465383462663035623635323339636433623565643939 Jan 21 01:03:02.774780 containerd[1962]: time="2026-01-21T01:03:02.774742688Z" level=info msg="StartContainer for \"70d4304e84bf05b65239cd3b5ed99b5a0a71773eb805893ef5b6db9b42eddfa7\" returns successfully" Jan 21 01:03:02.935625 kubelet[3468]: I0121 01:03:02.934473 3468 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-hfnnv" podStartSLOduration=91.934454899 podStartE2EDuration="1m31.934454899s" podCreationTimestamp="2026-01-21 01:01:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 01:03:02.93376567 +0000 UTC m=+96.989169894" watchObservedRunningTime="2026-01-21 01:03:02.934454899 +0000 UTC m=+96.989859124" Jan 21 01:03:02.952000 audit[5632]: NETFILTER_CFG table=filter:140 family=2 entries=14 op=nft_register_rule pid=5632 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 01:03:02.952000 audit[5632]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffc81b62110 a2=0 a3=7ffc81b620fc items=0 ppid=3578 pid=5632 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:03:02.952000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 01:03:02.959000 audit[5632]: NETFILTER_CFG table=nat:141 family=2 entries=44 op=nft_register_rule pid=5632 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 01:03:02.959000 audit[5632]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffc81b62110 a2=0 a3=7ffc81b620fc items=0 ppid=3578 pid=5632 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:03:02.959000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 01:03:02.986000 audit[5634]: NETFILTER_CFG table=filter:142 family=2 entries=14 op=nft_register_rule pid=5634 
subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 01:03:02.986000 audit[5634]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fff547e3fa0 a2=0 a3=7fff547e3f8c items=0 ppid=3578 pid=5634 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:03:02.986000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 01:03:03.033000 audit[5634]: NETFILTER_CFG table=nat:143 family=2 entries=56 op=nft_register_chain pid=5634 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 01:03:03.033000 audit[5634]: SYSCALL arch=c000003e syscall=46 success=yes exit=19860 a0=3 a1=7fff547e3fa0 a2=0 a3=7fff547e3f8c items=0 ppid=3578 pid=5634 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:03:03.033000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 01:03:03.133556 containerd[1962]: time="2026-01-21T01:03:03.133514903Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 21 01:03:03.199000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-172.31.28.215:22-68.220.241.50:52144 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:03:03.200275 systemd[1]: Started sshd@7-172.31.28.215:22-68.220.241.50:52144.service - OpenSSH per-connection server daemon (68.220.241.50:52144). 
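Annotation on the pod_startup_latency_tracker line above: with both pull timestamps zeroed, podStartSLOduration appears to be simply watchObservedRunningTime minus podCreationTimestamp. A quick check of that reading, with the two timestamps copied from the log and truncated to microseconds:

from datetime import datetime, timezone

created = datetime(2026, 1, 21, 1, 1, 31, tzinfo=timezone.utc)         # podCreationTimestamp
running = datetime(2026, 1, 21, 1, 3, 2, 934455, tzinfo=timezone.utc)  # watchObservedRunningTime
print((running - created).total_seconds())   # ~91.934455 s, matching the logged 91.934454899s up to rounding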
Jan 21 01:03:03.378461 containerd[1962]: time="2026-01-21T01:03:03.378413457Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 21 01:03:03.380666 containerd[1962]: time="2026-01-21T01:03:03.380611601Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 21 01:03:03.380966 containerd[1962]: time="2026-01-21T01:03:03.380705790Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 21 01:03:03.381007 kubelet[3468]: E0121 01:03:03.380887 3468 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 21 01:03:03.381007 kubelet[3468]: E0121 01:03:03.380929 3468 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 21 01:03:03.381140 kubelet[3468]: E0121 01:03:03.381075 3468 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vxm28,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-764d9fb657-cw7sh_calico-system(5e4fb692-d42b-4062-b09a-d89e5a5a14cf): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 21 01:03:03.382387 kubelet[3468]: E0121 01:03:03.382313 3468 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-764d9fb657-cw7sh" podUID="5e4fb692-d42b-4062-b09a-d89e5a5a14cf" Jan 21 01:03:03.612404 systemd-networkd[1555]: cali7a5ab852aa4: Gained IPv6LL Jan 21 01:03:03.690000 audit[5637]: USER_ACCT pid=5637 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 01:03:03.692463 kernel: kauditd_printk_skb: 112 callbacks suppressed Jan 21 01:03:03.692570 kernel: audit: type=1101 audit(1768957383.690:771): pid=5637 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 01:03:03.692617 sshd[5637]: Accepted publickey for core from 68.220.241.50 port 52144 ssh2: RSA SHA256:ynuLn8tJCPqgpXkJmbCRq4xTnR0LSutdg0yVYFUgOn4 Jan 21 01:03:03.695315 sshd-session[5637]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 21 01:03:03.692000 audit[5637]: CRED_ACQ pid=5637 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 01:03:03.699117 kernel: audit: type=1103 audit(1768957383.692:772): pid=5637 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 01:03:03.703579 kernel: audit: type=1006 audit(1768957383.692:773): pid=5637 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 
tty=(none) old-ses=4294967295 ses=9 res=1 Jan 21 01:03:03.711477 kernel: audit: type=1300 audit(1768957383.692:773): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffeb65c5d20 a2=3 a3=0 items=0 ppid=1 pid=5637 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:03:03.692000 audit[5637]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffeb65c5d20 a2=3 a3=0 items=0 ppid=1 pid=5637 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:03:03.708867 systemd-logind[1938]: New session 9 of user core. Jan 21 01:03:03.692000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 21 01:03:03.715260 kernel: audit: type=1327 audit(1768957383.692:773): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 21 01:03:03.721495 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 21 01:03:03.725000 audit[5637]: USER_START pid=5637 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 01:03:03.733306 kernel: audit: type=1105 audit(1768957383.725:774): pid=5637 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 01:03:03.728000 audit[5641]: CRED_ACQ pid=5641 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 01:03:03.738598 kernel: audit: type=1103 audit(1768957383.728:775): pid=5641 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 01:03:04.133331 containerd[1962]: time="2026-01-21T01:03:04.133006592Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 21 01:03:04.411671 containerd[1962]: time="2026-01-21T01:03:04.411445287Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 21 01:03:04.414978 containerd[1962]: time="2026-01-21T01:03:04.414852547Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 21 01:03:04.414978 containerd[1962]: time="2026-01-21T01:03:04.414864940Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 21 01:03:04.415359 kubelet[3468]: E0121 01:03:04.415306 3468 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 21 01:03:04.415359 kubelet[3468]: E0121 01:03:04.415356 3468 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 21 01:03:04.416746 kubelet[3468]: E0121 01:03:04.416475 3468 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-57wl7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6b9ccb8fff-hdchl_calico-apiserver(0aa53f42-4d9d-4bec-8857-7799d5876dce): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 21 01:03:04.417927 kubelet[3468]: E0121 01:03:04.417865 3468 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b9ccb8fff-hdchl" podUID="0aa53f42-4d9d-4bec-8857-7799d5876dce" Jan 21 01:03:04.788466 sshd[5641]: Connection closed 
by 68.220.241.50 port 52144 Jan 21 01:03:04.789433 sshd-session[5637]: pam_unix(sshd:session): session closed for user core Jan 21 01:03:04.793000 audit[5637]: USER_END pid=5637 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 01:03:04.802446 kernel: audit: type=1106 audit(1768957384.793:776): pid=5637 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 01:03:04.798780 systemd[1]: sshd@7-172.31.28.215:22-68.220.241.50:52144.service: Deactivated successfully. Jan 21 01:03:04.812810 kernel: audit: type=1104 audit(1768957384.793:777): pid=5637 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 01:03:04.812857 kernel: audit: type=1131 audit(1768957384.796:778): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-172.31.28.215:22-68.220.241.50:52144 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:03:04.793000 audit[5637]: CRED_DISP pid=5637 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 01:03:04.796000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-172.31.28.215:22-68.220.241.50:52144 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:03:04.801405 systemd[1]: session-9.scope: Deactivated successfully. Jan 21 01:03:04.812128 systemd-logind[1938]: Session 9 logged out. Waiting for processes to exit. Jan 21 01:03:04.813798 systemd-logind[1938]: Removed session 9. 
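Annotation: the repeated "fetch failed after status: 404 Not Found" / NotFound errors above indicate the ghcr.io/flatcar/calico/*:v3.30.4 tags do not resolve at the registry. A standalone sketch for reproducing that check against the OCI distribution API; the ghcr.io token endpoint and the Accept header are assumptions based on the usual anonymous-pull flow, not taken from this log:

import json, urllib.request, urllib.error

repo, tag = "flatcar/calico/kube-controllers", "v3.30.4"
# fetch an anonymous pull token, then HEAD the manifest for the tag
tok = json.load(urllib.request.urlopen(
    "https://ghcr.io/token?scope=repository:%s:pull" % repo))["token"]
req = urllib.request.Request(
    "https://ghcr.io/v2/%s/manifests/%s" % (repo, tag), method="HEAD",
    headers={"Authorization": "Bearer " + tok,
             "Accept": "application/vnd.oci.image.index.v1+json"})
try:
    print(urllib.request.urlopen(req).status)    # 200 would mean the tag resolves
except urllib.error.HTTPError as err:
    print(err.code)                              # 404 corresponds to the NotFound containerd reports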
Jan 21 01:03:05.132538 containerd[1962]: time="2026-01-21T01:03:05.132422198Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 21 01:03:05.418898 containerd[1962]: time="2026-01-21T01:03:05.418767425Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 21 01:03:05.421177 containerd[1962]: time="2026-01-21T01:03:05.421094268Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 21 01:03:05.421317 containerd[1962]: time="2026-01-21T01:03:05.421193693Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 21 01:03:05.421455 kubelet[3468]: E0121 01:03:05.421396 3468 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 21 01:03:05.421455 kubelet[3468]: E0121 01:03:05.421451 3468 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 21 01:03:05.421806 kubelet[3468]: E0121 01:03:05.421558 3468 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:70d67053a2a44bfca2c3c6f84dd42b97,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ssm6m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-66b4f466fd-6gfxt_calico-system(a6a90027-9f22-4c0c-9ffd-5b7564ee55c8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 21 01:03:05.425124 containerd[1962]: time="2026-01-21T01:03:05.424941336Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 21 01:03:05.693579 containerd[1962]: time="2026-01-21T01:03:05.693100226Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 21 01:03:05.695543 containerd[1962]: time="2026-01-21T01:03:05.695446678Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 21 01:03:05.696069 containerd[1962]: time="2026-01-21T01:03:05.695529842Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 21 01:03:05.696696 kubelet[3468]: E0121 01:03:05.696544 3468 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 21 01:03:05.696696 kubelet[3468]: E0121 01:03:05.696619 3468 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 21 01:03:05.697024 kubelet[3468]: E0121 01:03:05.696977 3468 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ssm6m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-66b4f466fd-6gfxt_calico-system(a6a90027-9f22-4c0c-9ffd-5b7564ee55c8): 
ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 21 01:03:05.698491 kubelet[3468]: E0121 01:03:05.698441 3468 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-66b4f466fd-6gfxt" podUID="a6a90027-9f22-4c0c-9ffd-5b7564ee55c8" Jan 21 01:03:06.134592 ntpd[1923]: Listen normally on 14 caliadd37fcdaa3 [fe80::ecee:eeff:feee:eeee%13]:123 Jan 21 01:03:06.134979 ntpd[1923]: 21 Jan 01:03:06 ntpd[1923]: Listen normally on 14 caliadd37fcdaa3 [fe80::ecee:eeff:feee:eeee%13]:123 Jan 21 01:03:06.134979 ntpd[1923]: 21 Jan 01:03:06 ntpd[1923]: Listen normally on 15 cali06ad78817f6 [fe80::ecee:eeff:feee:eeee%14]:123 Jan 21 01:03:06.134979 ntpd[1923]: 21 Jan 01:03:06 ntpd[1923]: Listen normally on 16 cali7a5ab852aa4 [fe80::ecee:eeff:feee:eeee%15]:123 Jan 21 01:03:06.134657 ntpd[1923]: Listen normally on 15 cali06ad78817f6 [fe80::ecee:eeff:feee:eeee%14]:123 Jan 21 01:03:06.134682 ntpd[1923]: Listen normally on 16 cali7a5ab852aa4 [fe80::ecee:eeff:feee:eeee%15]:123 Jan 21 01:03:09.884000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-172.31.28.215:22-68.220.241.50:52154 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:03:09.885923 systemd[1]: Started sshd@8-172.31.28.215:22-68.220.241.50:52154.service - OpenSSH per-connection server daemon (68.220.241.50:52154). Jan 21 01:03:09.907295 kernel: audit: type=1130 audit(1768957389.884:779): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-172.31.28.215:22-68.220.241.50:52154 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 21 01:03:10.345000 audit[5660]: USER_ACCT pid=5660 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 01:03:10.349315 sshd-session[5660]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 21 01:03:10.352751 sshd[5660]: Accepted publickey for core from 68.220.241.50 port 52154 ssh2: RSA SHA256:ynuLn8tJCPqgpXkJmbCRq4xTnR0LSutdg0yVYFUgOn4 Jan 21 01:03:10.345000 audit[5660]: CRED_ACQ pid=5660 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 01:03:10.354412 kernel: audit: type=1101 audit(1768957390.345:780): pid=5660 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 01:03:10.354489 kernel: audit: type=1103 audit(1768957390.345:781): pid=5660 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 01:03:10.358790 kernel: audit: type=1006 audit(1768957390.348:782): pid=5660 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1 Jan 21 01:03:10.362368 kernel: audit: type=1300 audit(1768957390.348:782): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffce72905f0 a2=3 a3=0 items=0 ppid=1 pid=5660 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:03:10.348000 audit[5660]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffce72905f0 a2=3 a3=0 items=0 ppid=1 pid=5660 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:03:10.363277 systemd-logind[1938]: New session 10 of user core. Jan 21 01:03:10.348000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 21 01:03:10.368350 kernel: audit: type=1327 audit(1768957390.348:782): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 21 01:03:10.369460 systemd[1]: Started session-10.scope - Session 10 of User core. 
Jan 21 01:03:10.371000 audit[5660]: USER_START pid=5660 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 01:03:10.373000 audit[5664]: CRED_ACQ pid=5664 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 01:03:10.379864 kernel: audit: type=1105 audit(1768957390.371:783): pid=5660 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 01:03:10.379938 kernel: audit: type=1103 audit(1768957390.373:784): pid=5664 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 01:03:10.676492 sshd[5664]: Connection closed by 68.220.241.50 port 52154 Jan 21 01:03:10.677116 sshd-session[5660]: pam_unix(sshd:session): session closed for user core Jan 21 01:03:10.678000 audit[5660]: USER_END pid=5660 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 01:03:10.683430 systemd[1]: sshd@8-172.31.28.215:22-68.220.241.50:52154.service: Deactivated successfully. Jan 21 01:03:10.686256 kernel: audit: type=1106 audit(1768957390.678:785): pid=5660 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 01:03:10.686714 kernel: audit: type=1104 audit(1768957390.678:786): pid=5660 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 01:03:10.678000 audit[5660]: CRED_DISP pid=5660 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 01:03:10.687167 systemd[1]: session-10.scope: Deactivated successfully. Jan 21 01:03:10.690904 systemd-logind[1938]: Session 10 logged out. Waiting for processes to exit. Jan 21 01:03:10.681000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-172.31.28.215:22-68.220.241.50:52154 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:03:10.691884 systemd-logind[1938]: Removed session 10. 
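Annotation: for picking apart the sshd/PAM audit records above (USER_ACCT, USER_START, USER_END and related types), a rough field splitter; the sample record is abbreviated from session 10 above, and quoted msg='...' payloads are kept as a single value rather than parsed further:

import re

def audit_fields(record):
    # naive split into key=value pairs; single-quoted values stay as one token
    return dict(re.findall(r"(\w+)=('[^']*'|\S+)", record))

sample = "audit[5660]: USER_END pid=5660 uid=0 auid=500 ses=10 res=success"
print(audit_fields(sample))   # {'pid': '5660', 'uid': '0', 'auid': '500', 'ses': '10', 'res': 'success'}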
Jan 21 01:03:11.132249 containerd[1962]: time="2026-01-21T01:03:11.131715194Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 21 01:03:11.442965 containerd[1962]: time="2026-01-21T01:03:11.442816397Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 21 01:03:11.445093 containerd[1962]: time="2026-01-21T01:03:11.445031879Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 21 01:03:11.445276 containerd[1962]: time="2026-01-21T01:03:11.445144651Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 21 01:03:11.445532 kubelet[3468]: E0121 01:03:11.445487 3468 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 21 01:03:11.445933 kubelet[3468]: E0121 01:03:11.445553 3468 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 21 01:03:11.461860 kubelet[3468]: E0121 01:03:11.445716 3468 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c6nw7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-95fb8fc69-kstsq_calico-apiserver(35190377-3d75-48f7-8c27-98f24edff14f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 21 01:03:11.463321 kubelet[3468]: E0121 01:03:11.463251 3468 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-95fb8fc69-kstsq" podUID="35190377-3d75-48f7-8c27-98f24edff14f" Jan 21 01:03:13.131971 kubelet[3468]: E0121 01:03:13.131923 3468 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b9ccb8fff-8h9ht" podUID="bec76da0-2399-4e6c-952b-9dc7ad88a302" Jan 21 01:03:13.133351 containerd[1962]: time="2026-01-21T01:03:13.132458448Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 21 01:03:13.551091 containerd[1962]: time="2026-01-21T01:03:13.551044553Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 21 01:03:13.553311 containerd[1962]: time="2026-01-21T01:03:13.553258251Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 21 01:03:13.553479 containerd[1962]: time="2026-01-21T01:03:13.553347059Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 21 01:03:13.553543 kubelet[3468]: E0121 01:03:13.553508 3468 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 21 01:03:13.554011 kubelet[3468]: E0121 01:03:13.553553 3468 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: 
code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 21 01:03:13.554011 kubelet[3468]: E0121 01:03:13.553664 3468 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z7shk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-9xsfz_calico-system(76d94f9b-071e-45b7-9881-314d22adc37f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 21 01:03:13.556058 containerd[1962]: time="2026-01-21T01:03:13.556028224Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 21 01:03:13.794974 containerd[1962]: time="2026-01-21T01:03:13.794769640Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 21 01:03:13.797140 containerd[1962]: time="2026-01-21T01:03:13.797009977Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 21 01:03:13.797140 containerd[1962]: time="2026-01-21T01:03:13.797051792Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 21 01:03:13.797569 kubelet[3468]: E0121 01:03:13.797528 3468 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 21 01:03:13.797743 kubelet[3468]: E0121 01:03:13.797578 3468 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 21 01:03:13.797996 kubelet[3468]: E0121 01:03:13.797879 3468 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z7shk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-9xsfz_calico-system(76d94f9b-071e-45b7-9881-314d22adc37f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 21 01:03:13.799517 kubelet[3468]: E0121 01:03:13.799444 3468 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9xsfz" podUID="76d94f9b-071e-45b7-9881-314d22adc37f" Jan 21 01:03:15.764000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-172.31.28.215:22-68.220.241.50:33174 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:03:15.765185 systemd[1]: Started sshd@9-172.31.28.215:22-68.220.241.50:33174.service - OpenSSH per-connection server daemon (68.220.241.50:33174). Jan 21 01:03:15.766454 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 21 01:03:15.766488 kernel: audit: type=1130 audit(1768957395.764:788): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-172.31.28.215:22-68.220.241.50:33174 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:03:16.232000 audit[5685]: USER_ACCT pid=5685 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 01:03:16.236201 sshd-session[5685]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 21 01:03:16.239812 sshd[5685]: Accepted publickey for core from 68.220.241.50 port 33174 ssh2: RSA SHA256:ynuLn8tJCPqgpXkJmbCRq4xTnR0LSutdg0yVYFUgOn4 Jan 21 01:03:16.240292 kernel: audit: type=1101 audit(1768957396.232:789): pid=5685 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 01:03:16.232000 audit[5685]: CRED_ACQ pid=5685 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 01:03:16.248305 kernel: audit: type=1103 audit(1768957396.232:790): pid=5685 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 01:03:16.248392 kernel: audit: type=1006 audit(1768957396.232:791): pid=5685 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1 Jan 21 01:03:16.248571 kernel: audit: type=1300 audit(1768957396.232:791): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe80965e70 a2=3 a3=0 items=0 ppid=1 pid=5685 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:03:16.232000 audit[5685]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe80965e70 a2=3 a3=0 items=0 ppid=1 pid=5685 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:03:16.232000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 21 01:03:16.254786 kernel: audit: type=1327 audit(1768957396.232:791): 
proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 21 01:03:16.257133 systemd-logind[1938]: New session 11 of user core. Jan 21 01:03:16.262442 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 21 01:03:16.264000 audit[5685]: USER_START pid=5685 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 01:03:16.272334 kernel: audit: type=1105 audit(1768957396.264:792): pid=5685 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 01:03:16.272000 audit[5689]: CRED_ACQ pid=5689 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 01:03:16.278268 kernel: audit: type=1103 audit(1768957396.272:793): pid=5689 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 01:03:16.596071 sshd[5689]: Connection closed by 68.220.241.50 port 33174 Jan 21 01:03:16.598090 sshd-session[5685]: pam_unix(sshd:session): session closed for user core Jan 21 01:03:16.602000 audit[5685]: USER_END pid=5685 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 01:03:16.607241 systemd[1]: sshd@9-172.31.28.215:22-68.220.241.50:33174.service: Deactivated successfully. Jan 21 01:03:16.610236 kernel: audit: type=1106 audit(1768957396.602:794): pid=5685 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 01:03:16.602000 audit[5685]: CRED_DISP pid=5685 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 01:03:16.611185 systemd[1]: session-11.scope: Deactivated successfully. Jan 21 01:03:16.606000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-172.31.28.215:22-68.220.241.50:33174 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 21 01:03:16.616236 kernel: audit: type=1104 audit(1768957396.602:795): pid=5685 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 01:03:16.616708 systemd-logind[1938]: Session 11 logged out. Waiting for processes to exit. Jan 21 01:03:16.618280 systemd-logind[1938]: Removed session 11. Jan 21 01:03:16.684011 systemd[1]: Started sshd@10-172.31.28.215:22-68.220.241.50:33180.service - OpenSSH per-connection server daemon (68.220.241.50:33180). Jan 21 01:03:16.683000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-172.31.28.215:22-68.220.241.50:33180 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:03:17.132627 kubelet[3468]: E0121 01:03:17.132423 3468 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-vhxxc" podUID="b0fd67ff-2b5d-470f-b242-daa718038f97" Jan 21 01:03:17.148000 audit[5726]: USER_ACCT pid=5726 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 01:03:17.150366 sshd[5726]: Accepted publickey for core from 68.220.241.50 port 33180 ssh2: RSA SHA256:ynuLn8tJCPqgpXkJmbCRq4xTnR0LSutdg0yVYFUgOn4 Jan 21 01:03:17.150000 audit[5726]: CRED_ACQ pid=5726 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 01:03:17.150000 audit[5726]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd6877c290 a2=3 a3=0 items=0 ppid=1 pid=5726 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:03:17.150000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 21 01:03:17.152653 sshd-session[5726]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 21 01:03:17.158281 systemd-logind[1938]: New session 12 of user core. Jan 21 01:03:17.170508 systemd[1]: Started session-12.scope - Session 12 of User core. 
Jan 21 01:03:17.173000 audit[5726]: USER_START pid=5726 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 01:03:17.175000 audit[5730]: CRED_ACQ pid=5730 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 01:03:17.534330 sshd[5730]: Connection closed by 68.220.241.50 port 33180 Jan 21 01:03:17.536879 sshd-session[5726]: pam_unix(sshd:session): session closed for user core Jan 21 01:03:17.543000 audit[5726]: USER_END pid=5726 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 01:03:17.545000 audit[5726]: CRED_DISP pid=5726 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 01:03:17.549556 systemd-logind[1938]: Session 12 logged out. Waiting for processes to exit. Jan 21 01:03:17.550068 systemd[1]: sshd@10-172.31.28.215:22-68.220.241.50:33180.service: Deactivated successfully. Jan 21 01:03:17.549000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-172.31.28.215:22-68.220.241.50:33180 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:03:17.552377 systemd[1]: session-12.scope: Deactivated successfully. Jan 21 01:03:17.554990 systemd-logind[1938]: Removed session 12. Jan 21 01:03:17.623000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-172.31.28.215:22-68.220.241.50:33182 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:03:17.624584 systemd[1]: Started sshd@11-172.31.28.215:22-68.220.241.50:33182.service - OpenSSH per-connection server daemon (68.220.241.50:33182). 
Jan 21 01:03:18.084000 audit[5740]: USER_ACCT pid=5740 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 01:03:18.085532 sshd[5740]: Accepted publickey for core from 68.220.241.50 port 33182 ssh2: RSA SHA256:ynuLn8tJCPqgpXkJmbCRq4xTnR0LSutdg0yVYFUgOn4 Jan 21 01:03:18.085000 audit[5740]: CRED_ACQ pid=5740 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 01:03:18.085000 audit[5740]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcb3651ea0 a2=3 a3=0 items=0 ppid=1 pid=5740 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:03:18.085000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 21 01:03:18.087614 sshd-session[5740]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 21 01:03:18.093285 systemd-logind[1938]: New session 13 of user core. Jan 21 01:03:18.098485 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 21 01:03:18.100000 audit[5740]: USER_START pid=5740 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 01:03:18.102000 audit[5744]: CRED_ACQ pid=5744 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 01:03:18.134289 kubelet[3468]: E0121 01:03:18.134146 3468 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-764d9fb657-cw7sh" podUID="5e4fb692-d42b-4062-b09a-d89e5a5a14cf" Jan 21 01:03:18.138316 kubelet[3468]: E0121 01:03:18.138187 3468 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b9ccb8fff-hdchl" podUID="0aa53f42-4d9d-4bec-8857-7799d5876dce" Jan 21 01:03:18.406323 sshd[5744]: Connection closed by 68.220.241.50 port 33182 Jan 21 01:03:18.406789 sshd-session[5740]: pam_unix(sshd:session): session closed for user core Jan 21 01:03:18.407000 audit[5740]: USER_END 
pid=5740 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 01:03:18.407000 audit[5740]: CRED_DISP pid=5740 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 01:03:18.412696 systemd[1]: sshd@11-172.31.28.215:22-68.220.241.50:33182.service: Deactivated successfully. Jan 21 01:03:18.412000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-172.31.28.215:22-68.220.241.50:33182 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:03:18.415907 systemd[1]: session-13.scope: Deactivated successfully. Jan 21 01:03:18.420195 systemd-logind[1938]: Session 13 logged out. Waiting for processes to exit. Jan 21 01:03:18.422321 systemd-logind[1938]: Removed session 13. Jan 21 01:03:20.136953 kubelet[3468]: E0121 01:03:20.136900 3468 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-66b4f466fd-6gfxt" podUID="a6a90027-9f22-4c0c-9ffd-5b7564ee55c8" Jan 21 01:03:23.492646 systemd[1]: Started sshd@12-172.31.28.215:22-68.220.241.50:59506.service - OpenSSH per-connection server daemon (68.220.241.50:59506). Jan 21 01:03:23.494661 kernel: kauditd_printk_skb: 23 callbacks suppressed Jan 21 01:03:23.494722 kernel: audit: type=1130 audit(1768957403.491:815): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-172.31.28.215:22-68.220.241.50:59506 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:03:23.491000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-172.31.28.215:22-68.220.241.50:59506 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 21 01:03:23.971000 audit[5766]: USER_ACCT pid=5766 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 01:03:23.976833 sshd[5766]: Accepted publickey for core from 68.220.241.50 port 59506 ssh2: RSA SHA256:ynuLn8tJCPqgpXkJmbCRq4xTnR0LSutdg0yVYFUgOn4 Jan 21 01:03:23.976000 audit[5766]: CRED_ACQ pid=5766 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 01:03:23.981260 kernel: audit: type=1101 audit(1768957403.971:816): pid=5766 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 01:03:23.981799 kernel: audit: type=1103 audit(1768957403.976:817): pid=5766 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 01:03:23.982970 sshd-session[5766]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 21 01:03:23.976000 audit[5766]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc96f7bf50 a2=3 a3=0 items=0 ppid=1 pid=5766 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:03:23.993384 kernel: audit: type=1006 audit(1768957403.976:818): pid=5766 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=14 res=1 Jan 21 01:03:23.993483 kernel: audit: type=1300 audit(1768957403.976:818): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc96f7bf50 a2=3 a3=0 items=0 ppid=1 pid=5766 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:03:23.996385 kernel: audit: type=1327 audit(1768957403.976:818): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 21 01:03:23.976000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 21 01:03:23.993877 systemd-logind[1938]: New session 14 of user core. Jan 21 01:03:24.001536 systemd[1]: Started session-14.scope - Session 14 of User core. 
Jan 21 01:03:24.006000 audit[5766]: USER_START pid=5766 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 01:03:24.010000 audit[5770]: CRED_ACQ pid=5770 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 01:03:24.015991 kernel: audit: type=1105 audit(1768957404.006:819): pid=5766 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 01:03:24.016102 kernel: audit: type=1103 audit(1768957404.010:820): pid=5770 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 01:03:24.137147 kubelet[3468]: E0121 01:03:24.135029 3468 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-95fb8fc69-kstsq" podUID="35190377-3d75-48f7-8c27-98f24edff14f" Jan 21 01:03:24.302037 sshd[5770]: Connection closed by 68.220.241.50 port 59506 Jan 21 01:03:24.303434 sshd-session[5766]: pam_unix(sshd:session): session closed for user core Jan 21 01:03:24.303000 audit[5766]: USER_END pid=5766 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 01:03:24.309090 systemd[1]: sshd@12-172.31.28.215:22-68.220.241.50:59506.service: Deactivated successfully. Jan 21 01:03:24.311791 kernel: audit: type=1106 audit(1768957404.303:821): pid=5766 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 01:03:24.311448 systemd[1]: session-14.scope: Deactivated successfully. 
Jan 21 01:03:24.303000 audit[5766]: CRED_DISP pid=5766 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 01:03:24.306000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-172.31.28.215:22-68.220.241.50:59506 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:03:24.313719 systemd-logind[1938]: Session 14 logged out. Waiting for processes to exit. Jan 21 01:03:24.314885 systemd-logind[1938]: Removed session 14. Jan 21 01:03:24.317840 kernel: audit: type=1104 audit(1768957404.303:822): pid=5766 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 01:03:26.135112 containerd[1962]: time="2026-01-21T01:03:26.134349116Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 21 01:03:26.404683 containerd[1962]: time="2026-01-21T01:03:26.404539313Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 21 01:03:26.406836 containerd[1962]: time="2026-01-21T01:03:26.406715486Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 21 01:03:26.406836 containerd[1962]: time="2026-01-21T01:03:26.406721009Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 21 01:03:26.407041 kubelet[3468]: E0121 01:03:26.406994 3468 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 21 01:03:26.407555 kubelet[3468]: E0121 01:03:26.407047 3468 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 21 01:03:26.407555 kubelet[3468]: E0121 01:03:26.407169 3468 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-98vll,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6b9ccb8fff-8h9ht_calico-apiserver(bec76da0-2399-4e6c-952b-9dc7ad88a302): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 21 01:03:26.408708 kubelet[3468]: E0121 01:03:26.408666 3468 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b9ccb8fff-8h9ht" podUID="bec76da0-2399-4e6c-952b-9dc7ad88a302" Jan 21 01:03:29.132730 kubelet[3468]: E0121 01:03:29.132677 3468 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" 
pod="calico-system/csi-node-driver-9xsfz" podUID="76d94f9b-071e-45b7-9881-314d22adc37f" Jan 21 01:03:29.405694 systemd[1]: Started sshd@13-172.31.28.215:22-68.220.241.50:59522.service - OpenSSH per-connection server daemon (68.220.241.50:59522). Jan 21 01:03:29.412236 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 21 01:03:29.412486 kernel: audit: type=1130 audit(1768957409.404:824): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-172.31.28.215:22-68.220.241.50:59522 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:03:29.404000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-172.31.28.215:22-68.220.241.50:59522 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:03:29.889000 audit[5785]: USER_ACCT pid=5785 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 01:03:29.891250 sshd[5785]: Accepted publickey for core from 68.220.241.50 port 59522 ssh2: RSA SHA256:ynuLn8tJCPqgpXkJmbCRq4xTnR0LSutdg0yVYFUgOn4 Jan 21 01:03:29.894021 sshd-session[5785]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 21 01:03:29.892000 audit[5785]: CRED_ACQ pid=5785 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 01:03:29.898639 kernel: audit: type=1101 audit(1768957409.889:825): pid=5785 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 01:03:29.898720 kernel: audit: type=1103 audit(1768957409.892:826): pid=5785 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 01:03:29.902661 systemd-logind[1938]: New session 15 of user core. 
Jan 21 01:03:29.892000 audit[5785]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff8a2c4880 a2=3 a3=0 items=0 ppid=1 pid=5785 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:03:29.907850 kernel: audit: type=1006 audit(1768957409.892:827): pid=5785 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=15 res=1 Jan 21 01:03:29.907974 kernel: audit: type=1300 audit(1768957409.892:827): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff8a2c4880 a2=3 a3=0 items=0 ppid=1 pid=5785 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:03:29.892000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 21 01:03:29.912737 kernel: audit: type=1327 audit(1768957409.892:827): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 21 01:03:29.914495 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 21 01:03:29.917000 audit[5785]: USER_START pid=5785 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 01:03:29.921000 audit[5789]: CRED_ACQ pid=5789 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 01:03:29.926991 kernel: audit: type=1105 audit(1768957409.917:828): pid=5785 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 01:03:29.927070 kernel: audit: type=1103 audit(1768957409.921:829): pid=5789 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 01:03:30.138274 containerd[1962]: time="2026-01-21T01:03:30.137747576Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 21 01:03:30.226545 sshd[5789]: Connection closed by 68.220.241.50 port 59522 Jan 21 01:03:30.226400 sshd-session[5785]: pam_unix(sshd:session): session closed for user core Jan 21 01:03:30.227000 audit[5785]: USER_END pid=5785 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 01:03:30.236280 kernel: audit: type=1106 audit(1768957410.227:830): pid=5785 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close 
grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 01:03:30.227000 audit[5785]: CRED_DISP pid=5785 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 01:03:30.246187 systemd[1]: sshd@13-172.31.28.215:22-68.220.241.50:59522.service: Deactivated successfully. Jan 21 01:03:30.246532 systemd-logind[1938]: Session 15 logged out. Waiting for processes to exit. Jan 21 01:03:30.250191 systemd[1]: session-15.scope: Deactivated successfully. Jan 21 01:03:30.251487 kernel: audit: type=1104 audit(1768957410.227:831): pid=5785 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 01:03:30.245000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-172.31.28.215:22-68.220.241.50:59522 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:03:30.255657 systemd-logind[1938]: Removed session 15. Jan 21 01:03:30.413283 containerd[1962]: time="2026-01-21T01:03:30.413233364Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 21 01:03:30.415483 containerd[1962]: time="2026-01-21T01:03:30.415435993Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 21 01:03:30.415613 containerd[1962]: time="2026-01-21T01:03:30.415457243Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 21 01:03:30.415741 kubelet[3468]: E0121 01:03:30.415696 3468 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 21 01:03:30.416388 kubelet[3468]: E0121 01:03:30.415761 3468 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 21 01:03:30.416388 kubelet[3468]: E0121 01:03:30.416073 3468 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-57wl7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6b9ccb8fff-hdchl_calico-apiserver(0aa53f42-4d9d-4bec-8857-7799d5876dce): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 21 01:03:30.417013 containerd[1962]: time="2026-01-21T01:03:30.416372624Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 21 01:03:30.417979 kubelet[3468]: E0121 01:03:30.417923 3468 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b9ccb8fff-hdchl" podUID="0aa53f42-4d9d-4bec-8857-7799d5876dce" Jan 21 01:03:30.689540 containerd[1962]: time="2026-01-21T01:03:30.689495059Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 21 01:03:30.691635 containerd[1962]: time="2026-01-21T01:03:30.691593310Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 21 01:03:30.691738 containerd[1962]: time="2026-01-21T01:03:30.691683089Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" 
Jan 21 01:03:30.691915 kubelet[3468]: E0121 01:03:30.691879 3468 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 21 01:03:30.691971 kubelet[3468]: E0121 01:03:30.691928 3468 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 21 01:03:30.692269 kubelet[3468]: E0121 01:03:30.692154 3468 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vxm28,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-764d9fb657-cw7sh_calico-system(5e4fb692-d42b-4062-b09a-d89e5a5a14cf): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 21 01:03:30.692414 containerd[1962]: time="2026-01-21T01:03:30.692394459Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 21 01:03:30.694152 kubelet[3468]: E0121 01:03:30.694113 3468 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-764d9fb657-cw7sh" podUID="5e4fb692-d42b-4062-b09a-d89e5a5a14cf" Jan 21 01:03:30.950574 containerd[1962]: time="2026-01-21T01:03:30.950362130Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 21 01:03:30.952553 containerd[1962]: time="2026-01-21T01:03:30.952466816Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 21 01:03:30.952776 containerd[1962]: time="2026-01-21T01:03:30.952672003Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 21 01:03:30.952994 kubelet[3468]: E0121 01:03:30.952943 3468 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 21 01:03:30.952994 kubelet[3468]: E0121 01:03:30.952996 3468 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 21 01:03:30.953168 kubelet[3468]: E0121 01:03:30.953117 3468 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pqxsf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-vhxxc_calico-system(b0fd67ff-2b5d-470f-b242-daa718038f97): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 21 01:03:30.955280 kubelet[3468]: E0121 01:03:30.955242 3468 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-vhxxc" podUID="b0fd67ff-2b5d-470f-b242-daa718038f97" Jan 21 01:03:31.134196 containerd[1962]: time="2026-01-21T01:03:31.134152124Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 
21 01:03:31.411230 containerd[1962]: time="2026-01-21T01:03:31.411167529Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 21 01:03:31.413428 containerd[1962]: time="2026-01-21T01:03:31.413367292Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 21 01:03:31.413570 containerd[1962]: time="2026-01-21T01:03:31.413489250Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 21 01:03:31.413995 kubelet[3468]: E0121 01:03:31.413674 3468 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 21 01:03:31.413995 kubelet[3468]: E0121 01:03:31.413726 3468 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 21 01:03:31.413995 kubelet[3468]: E0121 01:03:31.413860 3468 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:70d67053a2a44bfca2c3c6f84dd42b97,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ssm6m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-66b4f466fd-6gfxt_calico-system(a6a90027-9f22-4c0c-9ffd-5b7564ee55c8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 21 01:03:31.416129 containerd[1962]: time="2026-01-21T01:03:31.416000457Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 21 01:03:31.681478 containerd[1962]: time="2026-01-21T01:03:31.680988381Z" level=info msg="fetch failed 
after status: 404 Not Found" host=ghcr.io Jan 21 01:03:31.684197 containerd[1962]: time="2026-01-21T01:03:31.683642302Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 21 01:03:31.684464 containerd[1962]: time="2026-01-21T01:03:31.684179386Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 21 01:03:31.684746 kubelet[3468]: E0121 01:03:31.684702 3468 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 21 01:03:31.685158 kubelet[3468]: E0121 01:03:31.684763 3468 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 21 01:03:31.685158 kubelet[3468]: E0121 01:03:31.684909 3468 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ssm6m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-66b4f466fd-6gfxt_calico-system(a6a90027-9f22-4c0c-9ffd-5b7564ee55c8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 21 01:03:31.687274 kubelet[3468]: E0121 01:03:31.686900 3468 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-66b4f466fd-6gfxt" podUID="a6a90027-9f22-4c0c-9ffd-5b7564ee55c8" Jan 21 01:03:35.132776 containerd[1962]: time="2026-01-21T01:03:35.132132016Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 21 01:03:35.319562 systemd[1]: Started sshd@14-172.31.28.215:22-68.220.241.50:54248.service - OpenSSH per-connection server daemon (68.220.241.50:54248). Jan 21 01:03:35.321142 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 21 01:03:35.321205 kernel: audit: type=1130 audit(1768957415.318:833): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-172.31.28.215:22-68.220.241.50:54248 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:03:35.318000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-172.31.28.215:22-68.220.241.50:54248 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:03:35.451789 containerd[1962]: time="2026-01-21T01:03:35.451634485Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 21 01:03:35.454308 containerd[1962]: time="2026-01-21T01:03:35.454254032Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 21 01:03:35.454499 containerd[1962]: time="2026-01-21T01:03:35.454374001Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 21 01:03:35.454603 kubelet[3468]: E0121 01:03:35.454563 3468 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 21 01:03:35.455031 kubelet[3468]: E0121 01:03:35.454617 3468 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 21 01:03:35.455031 kubelet[3468]: E0121 01:03:35.454779 3468 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c6nw7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-95fb8fc69-kstsq_calico-apiserver(35190377-3d75-48f7-8c27-98f24edff14f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 21 01:03:35.456048 kubelet[3468]: E0121 01:03:35.455988 3468 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-95fb8fc69-kstsq" podUID="35190377-3d75-48f7-8c27-98f24edff14f" Jan 21 01:03:35.860967 sshd[5807]: Accepted publickey for core from 68.220.241.50 port 54248 ssh2: RSA SHA256:ynuLn8tJCPqgpXkJmbCRq4xTnR0LSutdg0yVYFUgOn4 Jan 21 01:03:35.859000 audit[5807]: USER_ACCT pid=5807 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 01:03:35.867951 sshd-session[5807]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 21 01:03:35.868332 kernel: audit: type=1101 audit(1768957415.859:834): pid=5807 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" 
hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 01:03:35.881255 kernel: audit: type=1103 audit(1768957415.863:835): pid=5807 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 01:03:35.863000 audit[5807]: CRED_ACQ pid=5807 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 01:03:35.892070 kernel: audit: type=1006 audit(1768957415.863:836): pid=5807 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Jan 21 01:03:35.892240 kernel: audit: type=1300 audit(1768957415.863:836): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcb14458d0 a2=3 a3=0 items=0 ppid=1 pid=5807 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:03:35.863000 audit[5807]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcb14458d0 a2=3 a3=0 items=0 ppid=1 pid=5807 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:03:35.889415 systemd-logind[1938]: New session 16 of user core. Jan 21 01:03:35.863000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 21 01:03:35.896355 kernel: audit: type=1327 audit(1768957415.863:836): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 21 01:03:35.898518 systemd[1]: Started session-16.scope - Session 16 of User core. 
Jan 21 01:03:35.903000 audit[5807]: USER_START pid=5807 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 01:03:35.918350 kernel: audit: type=1105 audit(1768957415.903:837): pid=5807 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 01:03:35.918481 kernel: audit: type=1103 audit(1768957415.911:838): pid=5811 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 01:03:35.911000 audit[5811]: CRED_ACQ pid=5811 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 01:03:36.253984 sshd[5811]: Connection closed by 68.220.241.50 port 54248 Jan 21 01:03:36.252523 sshd-session[5807]: pam_unix(sshd:session): session closed for user core Jan 21 01:03:36.253000 audit[5807]: USER_END pid=5807 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 01:03:36.262257 kernel: audit: type=1106 audit(1768957416.253:839): pid=5807 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 01:03:36.263943 systemd[1]: sshd@14-172.31.28.215:22-68.220.241.50:54248.service: Deactivated successfully. Jan 21 01:03:36.253000 audit[5807]: CRED_DISP pid=5807 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 01:03:36.268118 systemd[1]: session-16.scope: Deactivated successfully. Jan 21 01:03:36.271361 kernel: audit: type=1104 audit(1768957416.253:840): pid=5807 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 01:03:36.271822 systemd-logind[1938]: Session 16 logged out. Waiting for processes to exit. Jan 21 01:03:36.264000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-172.31.28.215:22-68.220.241.50:54248 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:03:36.273675 systemd-logind[1938]: Removed session 16. 
Jan 21 01:03:36.355628 systemd[1]: Started sshd@15-172.31.28.215:22-68.220.241.50:54260.service - OpenSSH per-connection server daemon (68.220.241.50:54260). Jan 21 01:03:36.355000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-172.31.28.215:22-68.220.241.50:54260 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:03:36.829000 audit[5823]: USER_ACCT pid=5823 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 01:03:36.830467 sshd[5823]: Accepted publickey for core from 68.220.241.50 port 54260 ssh2: RSA SHA256:ynuLn8tJCPqgpXkJmbCRq4xTnR0LSutdg0yVYFUgOn4 Jan 21 01:03:36.830000 audit[5823]: CRED_ACQ pid=5823 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 01:03:36.830000 audit[5823]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdb21b5440 a2=3 a3=0 items=0 ppid=1 pid=5823 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:03:36.830000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 21 01:03:36.832433 sshd-session[5823]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 21 01:03:36.838055 systemd-logind[1938]: New session 17 of user core. Jan 21 01:03:36.847428 systemd[1]: Started session-17.scope - Session 17 of User core. 
Jan 21 01:03:36.849000 audit[5823]: USER_START pid=5823 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 01:03:36.852000 audit[5829]: CRED_ACQ pid=5829 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 01:03:38.446961 sshd[5829]: Connection closed by 68.220.241.50 port 54260 Jan 21 01:03:38.448409 sshd-session[5823]: pam_unix(sshd:session): session closed for user core Jan 21 01:03:38.449000 audit[5823]: USER_END pid=5823 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 01:03:38.449000 audit[5823]: CRED_DISP pid=5823 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 01:03:38.452000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-172.31.28.215:22-68.220.241.50:54260 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:03:38.453275 systemd-logind[1938]: Session 17 logged out. Waiting for processes to exit. Jan 21 01:03:38.453914 systemd[1]: sshd@15-172.31.28.215:22-68.220.241.50:54260.service: Deactivated successfully. Jan 21 01:03:38.456474 systemd[1]: session-17.scope: Deactivated successfully. Jan 21 01:03:38.459002 systemd-logind[1938]: Removed session 17. Jan 21 01:03:38.543356 systemd[1]: Started sshd@16-172.31.28.215:22-68.220.241.50:54272.service - OpenSSH per-connection server daemon (68.220.241.50:54272). Jan 21 01:03:38.542000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-172.31.28.215:22-68.220.241.50:54272 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 21 01:03:39.031000 audit[5838]: USER_ACCT pid=5838 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 01:03:39.032817 sshd[5838]: Accepted publickey for core from 68.220.241.50 port 54272 ssh2: RSA SHA256:ynuLn8tJCPqgpXkJmbCRq4xTnR0LSutdg0yVYFUgOn4 Jan 21 01:03:39.032000 audit[5838]: CRED_ACQ pid=5838 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 01:03:39.033000 audit[5838]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd68b1e800 a2=3 a3=0 items=0 ppid=1 pid=5838 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:03:39.033000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 21 01:03:39.035490 sshd-session[5838]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 21 01:03:39.044725 systemd-logind[1938]: New session 18 of user core. Jan 21 01:03:39.050634 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 21 01:03:39.055000 audit[5838]: USER_START pid=5838 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 01:03:39.057000 audit[5842]: CRED_ACQ pid=5842 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 01:03:40.168000 audit[5852]: NETFILTER_CFG table=filter:144 family=2 entries=26 op=nft_register_rule pid=5852 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 01:03:40.168000 audit[5852]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7fff359a5d50 a2=0 a3=7fff359a5d3c items=0 ppid=3578 pid=5852 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:03:40.168000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 01:03:40.176000 audit[5852]: NETFILTER_CFG table=nat:145 family=2 entries=20 op=nft_register_rule pid=5852 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 01:03:40.176000 audit[5852]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7fff359a5d50 a2=0 a3=0 items=0 ppid=3578 pid=5852 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:03:40.176000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 01:03:40.219000 audit[5854]: 
NETFILTER_CFG table=filter:146 family=2 entries=38 op=nft_register_rule pid=5854 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 01:03:40.219000 audit[5854]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffccce57290 a2=0 a3=7ffccce5727c items=0 ppid=3578 pid=5854 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:03:40.219000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 01:03:40.224754 sshd[5842]: Connection closed by 68.220.241.50 port 54272 Jan 21 01:03:40.225619 sshd-session[5838]: pam_unix(sshd:session): session closed for user core Jan 21 01:03:40.228000 audit[5838]: USER_END pid=5838 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 01:03:40.228000 audit[5838]: CRED_DISP pid=5838 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 01:03:40.236370 systemd[1]: sshd@16-172.31.28.215:22-68.220.241.50:54272.service: Deactivated successfully. Jan 21 01:03:40.235000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-172.31.28.215:22-68.220.241.50:54272 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:03:40.239663 systemd[1]: session-18.scope: Deactivated successfully. Jan 21 01:03:40.242012 systemd-logind[1938]: Session 18 logged out. Waiting for processes to exit. Jan 21 01:03:40.244951 systemd-logind[1938]: Removed session 18. Jan 21 01:03:40.254000 audit[5854]: NETFILTER_CFG table=nat:147 family=2 entries=20 op=nft_register_rule pid=5854 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 01:03:40.254000 audit[5854]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffccce57290 a2=0 a3=0 items=0 ppid=3578 pid=5854 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:03:40.254000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 01:03:40.314000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-172.31.28.215:22-68.220.241.50:54288 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:03:40.315054 systemd[1]: Started sshd@17-172.31.28.215:22-68.220.241.50:54288.service - OpenSSH per-connection server daemon (68.220.241.50:54288). 
Jan 21 01:03:40.801232 sshd[5859]: Accepted publickey for core from 68.220.241.50 port 54288 ssh2: RSA SHA256:ynuLn8tJCPqgpXkJmbCRq4xTnR0LSutdg0yVYFUgOn4 Jan 21 01:03:40.803576 kernel: kauditd_printk_skb: 36 callbacks suppressed Jan 21 01:03:40.803679 kernel: audit: type=1101 audit(1768957420.799:865): pid=5859 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 01:03:40.799000 audit[5859]: USER_ACCT pid=5859 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 01:03:40.803461 sshd-session[5859]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 21 01:03:40.815518 kernel: audit: type=1103 audit(1768957420.800:866): pid=5859 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 01:03:40.800000 audit[5859]: CRED_ACQ pid=5859 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 01:03:40.812336 systemd-logind[1938]: New session 19 of user core. Jan 21 01:03:40.819224 kernel: audit: type=1006 audit(1768957420.800:867): pid=5859 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=19 res=1 Jan 21 01:03:40.820371 systemd[1]: Started session-19.scope - Session 19 of User core. 
Jan 21 01:03:40.826654 kernel: audit: type=1300 audit(1768957420.800:867): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd663487d0 a2=3 a3=0 items=0 ppid=1 pid=5859 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:03:40.800000 audit[5859]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd663487d0 a2=3 a3=0 items=0 ppid=1 pid=5859 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:03:40.800000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 21 01:03:40.832261 kernel: audit: type=1327 audit(1768957420.800:867): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 21 01:03:40.830000 audit[5859]: USER_START pid=5859 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 01:03:40.839301 kernel: audit: type=1105 audit(1768957420.830:868): pid=5859 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 01:03:40.837000 audit[5863]: CRED_ACQ pid=5863 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 01:03:40.847334 kernel: audit: type=1103 audit(1768957420.837:869): pid=5863 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 01:03:41.134587 kubelet[3468]: E0121 01:03:41.134404 3468 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-vhxxc" podUID="b0fd67ff-2b5d-470f-b242-daa718038f97" Jan 21 01:03:41.135132 kubelet[3468]: E0121 01:03:41.134893 3468 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b9ccb8fff-8h9ht" podUID="bec76da0-2399-4e6c-952b-9dc7ad88a302" Jan 21 01:03:41.442375 sshd[5863]: Connection closed by 68.220.241.50 port 54288 Jan 21 01:03:41.444492 
sshd-session[5859]: pam_unix(sshd:session): session closed for user core Jan 21 01:03:41.444000 audit[5859]: USER_END pid=5859 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 01:03:41.451012 systemd[1]: sshd@17-172.31.28.215:22-68.220.241.50:54288.service: Deactivated successfully. Jan 21 01:03:41.453328 kernel: audit: type=1106 audit(1768957421.444:870): pid=5859 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 01:03:41.453444 systemd[1]: session-19.scope: Deactivated successfully. Jan 21 01:03:41.444000 audit[5859]: CRED_DISP pid=5859 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 01:03:41.454635 systemd-logind[1938]: Session 19 logged out. Waiting for processes to exit. Jan 21 01:03:41.458804 systemd-logind[1938]: Removed session 19. Jan 21 01:03:41.459774 kernel: audit: type=1104 audit(1768957421.444:871): pid=5859 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 01:03:41.450000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-172.31.28.215:22-68.220.241.50:54288 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:03:41.465291 kernel: audit: type=1131 audit(1768957421.450:872): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-172.31.28.215:22-68.220.241.50:54288 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:03:41.549343 systemd[1]: Started sshd@18-172.31.28.215:22-68.220.241.50:54302.service - OpenSSH per-connection server daemon (68.220.241.50:54302). Jan 21 01:03:41.550000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-172.31.28.215:22-68.220.241.50:54302 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 21 01:03:42.086000 audit[5873]: USER_ACCT pid=5873 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 01:03:42.088437 sshd[5873]: Accepted publickey for core from 68.220.241.50 port 54302 ssh2: RSA SHA256:ynuLn8tJCPqgpXkJmbCRq4xTnR0LSutdg0yVYFUgOn4 Jan 21 01:03:42.088000 audit[5873]: CRED_ACQ pid=5873 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 01:03:42.088000 audit[5873]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc616009f0 a2=3 a3=0 items=0 ppid=1 pid=5873 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:03:42.088000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 21 01:03:42.091150 sshd-session[5873]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 21 01:03:42.101475 systemd-logind[1938]: New session 20 of user core. Jan 21 01:03:42.108439 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 21 01:03:42.112000 audit[5873]: USER_START pid=5873 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 01:03:42.115000 audit[5877]: CRED_ACQ pid=5877 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 01:03:42.682233 sshd[5877]: Connection closed by 68.220.241.50 port 54302 Jan 21 01:03:42.682740 sshd-session[5873]: pam_unix(sshd:session): session closed for user core Jan 21 01:03:42.683000 audit[5873]: USER_END pid=5873 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 01:03:42.683000 audit[5873]: CRED_DISP pid=5873 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 01:03:42.688571 systemd-logind[1938]: Session 20 logged out. Waiting for processes to exit. Jan 21 01:03:42.689395 systemd[1]: sshd@18-172.31.28.215:22-68.220.241.50:54302.service: Deactivated successfully. Jan 21 01:03:42.688000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-172.31.28.215:22-68.220.241.50:54302 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:03:42.692864 systemd[1]: session-20.scope: Deactivated successfully. 
Jan 21 01:03:42.695080 systemd-logind[1938]: Removed session 20. Jan 21 01:03:43.133074 kubelet[3468]: E0121 01:03:43.132710 3468 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-764d9fb657-cw7sh" podUID="5e4fb692-d42b-4062-b09a-d89e5a5a14cf" Jan 21 01:03:43.133562 kubelet[3468]: E0121 01:03:43.133527 3468 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b9ccb8fff-hdchl" podUID="0aa53f42-4d9d-4bec-8857-7799d5876dce" Jan 21 01:03:43.134625 kubelet[3468]: E0121 01:03:43.134354 3468 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-66b4f466fd-6gfxt" podUID="a6a90027-9f22-4c0c-9ffd-5b7564ee55c8" Jan 21 01:03:44.137262 containerd[1962]: time="2026-01-21T01:03:44.137200125Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 21 01:03:44.418887 containerd[1962]: time="2026-01-21T01:03:44.418527546Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 21 01:03:44.420685 containerd[1962]: time="2026-01-21T01:03:44.420638218Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 21 01:03:44.420842 containerd[1962]: time="2026-01-21T01:03:44.420726341Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 21 01:03:44.420906 kubelet[3468]: E0121 01:03:44.420872 3468 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 21 01:03:44.421361 kubelet[3468]: E0121 01:03:44.420917 3468 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 21 01:03:44.421361 kubelet[3468]: E0121 01:03:44.421031 3468 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z7shk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-9xsfz_calico-system(76d94f9b-071e-45b7-9881-314d22adc37f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 21 01:03:44.423968 containerd[1962]: time="2026-01-21T01:03:44.423943698Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 21 01:03:44.716547 containerd[1962]: time="2026-01-21T01:03:44.716313817Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 21 01:03:44.718587 containerd[1962]: time="2026-01-21T01:03:44.718533915Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 21 01:03:44.718898 containerd[1962]: time="2026-01-21T01:03:44.718586330Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 21 01:03:44.719120 kubelet[3468]: E0121 01:03:44.719060 3468 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": 
failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 21 01:03:44.719120 kubelet[3468]: E0121 01:03:44.719115 3468 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 21 01:03:44.719354 kubelet[3468]: E0121 01:03:44.719252 3468 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z7shk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-9xsfz_calico-system(76d94f9b-071e-45b7-9881-314d22adc37f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 21 01:03:44.720766 kubelet[3468]: E0121 01:03:44.720694 3468 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" 
pod="calico-system/csi-node-driver-9xsfz" podUID="76d94f9b-071e-45b7-9881-314d22adc37f" Jan 21 01:03:46.710839 kernel: kauditd_printk_skb: 11 callbacks suppressed Jan 21 01:03:46.711566 kernel: audit: type=1325 audit(1768957426.704:882): table=filter:148 family=2 entries=26 op=nft_register_rule pid=5918 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 01:03:46.704000 audit[5918]: NETFILTER_CFG table=filter:148 family=2 entries=26 op=nft_register_rule pid=5918 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 01:03:46.704000 audit[5918]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffd6f8887f0 a2=0 a3=7ffd6f8887dc items=0 ppid=3578 pid=5918 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:03:46.722300 kernel: audit: type=1300 audit(1768957426.704:882): arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffd6f8887f0 a2=0 a3=7ffd6f8887dc items=0 ppid=3578 pid=5918 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:03:46.722426 kernel: audit: type=1327 audit(1768957426.704:882): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 01:03:46.704000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 01:03:46.726000 audit[5918]: NETFILTER_CFG table=nat:149 family=2 entries=104 op=nft_register_chain pid=5918 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 01:03:46.738782 kernel: audit: type=1325 audit(1768957426.726:883): table=nat:149 family=2 entries=104 op=nft_register_chain pid=5918 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 01:03:46.739288 kernel: audit: type=1300 audit(1768957426.726:883): arch=c000003e syscall=46 success=yes exit=48684 a0=3 a1=7ffd6f8887f0 a2=0 a3=7ffd6f8887dc items=0 ppid=3578 pid=5918 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:03:46.726000 audit[5918]: SYSCALL arch=c000003e syscall=46 success=yes exit=48684 a0=3 a1=7ffd6f8887f0 a2=0 a3=7ffd6f8887dc items=0 ppid=3578 pid=5918 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:03:46.726000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 01:03:46.743261 kernel: audit: type=1327 audit(1768957426.726:883): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 01:03:47.133793 kubelet[3468]: E0121 01:03:47.133743 3468 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-95fb8fc69-kstsq" podUID="35190377-3d75-48f7-8c27-98f24edff14f" Jan 21 01:03:47.782138 systemd[1]: Started sshd@19-172.31.28.215:22-68.220.241.50:44370.service - OpenSSH per-connection server daemon (68.220.241.50:44370). Jan 21 01:03:47.781000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-172.31.28.215:22-68.220.241.50:44370 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:03:47.788290 kernel: audit: type=1130 audit(1768957427.781:884): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-172.31.28.215:22-68.220.241.50:44370 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:03:48.291000 audit[5920]: USER_ACCT pid=5920 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 01:03:48.295252 sshd[5920]: Accepted publickey for core from 68.220.241.50 port 44370 ssh2: RSA SHA256:ynuLn8tJCPqgpXkJmbCRq4xTnR0LSutdg0yVYFUgOn4 Jan 21 01:03:48.299238 kernel: audit: type=1101 audit(1768957428.291:885): pid=5920 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 01:03:48.300124 sshd-session[5920]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 21 01:03:48.294000 audit[5920]: CRED_ACQ pid=5920 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 01:03:48.307237 kernel: audit: type=1103 audit(1768957428.294:886): pid=5920 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 01:03:48.312748 systemd-logind[1938]: New session 21 of user core. Jan 21 01:03:48.313248 kernel: audit: type=1006 audit(1768957428.294:887): pid=5920 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=21 res=1 Jan 21 01:03:48.294000 audit[5920]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd0ea51fa0 a2=3 a3=0 items=0 ppid=1 pid=5920 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:03:48.294000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 21 01:03:48.320474 systemd[1]: Started session-21.scope - Session 21 of User core. 
Jan 21 01:03:48.327000 audit[5920]: USER_START pid=5920 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 01:03:48.330000 audit[5924]: CRED_ACQ pid=5924 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 01:03:48.682272 sshd[5924]: Connection closed by 68.220.241.50 port 44370 Jan 21 01:03:48.685689 sshd-session[5920]: pam_unix(sshd:session): session closed for user core Jan 21 01:03:48.686000 audit[5920]: USER_END pid=5920 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 01:03:48.686000 audit[5920]: CRED_DISP pid=5920 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 01:03:48.694145 systemd[1]: sshd@19-172.31.28.215:22-68.220.241.50:44370.service: Deactivated successfully. Jan 21 01:03:48.695000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-172.31.28.215:22-68.220.241.50:44370 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:03:48.699128 systemd[1]: session-21.scope: Deactivated successfully. Jan 21 01:03:48.702055 systemd-logind[1938]: Session 21 logged out. Waiting for processes to exit. Jan 21 01:03:48.707793 systemd-logind[1938]: Removed session 21. Jan 21 01:03:53.132845 kubelet[3468]: E0121 01:03:53.132797 3468 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-vhxxc" podUID="b0fd67ff-2b5d-470f-b242-daa718038f97" Jan 21 01:03:53.781566 systemd[1]: Started sshd@20-172.31.28.215:22-68.220.241.50:56600.service - OpenSSH per-connection server daemon (68.220.241.50:56600). Jan 21 01:03:53.785006 kernel: kauditd_printk_skb: 7 callbacks suppressed Jan 21 01:03:53.785078 kernel: audit: type=1130 audit(1768957433.780:893): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-172.31.28.215:22-68.220.241.50:56600 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:03:53.780000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-172.31.28.215:22-68.220.241.50:56600 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 21 01:03:54.135978 kubelet[3468]: E0121 01:03:54.135399 3468 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b9ccb8fff-8h9ht" podUID="bec76da0-2399-4e6c-952b-9dc7ad88a302" Jan 21 01:03:54.243000 audit[5935]: USER_ACCT pid=5935 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 01:03:54.246971 sshd-session[5935]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 21 01:03:54.247800 sshd[5935]: Accepted publickey for core from 68.220.241.50 port 56600 ssh2: RSA SHA256:ynuLn8tJCPqgpXkJmbCRq4xTnR0LSutdg0yVYFUgOn4 Jan 21 01:03:54.244000 audit[5935]: CRED_ACQ pid=5935 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 01:03:54.251557 kernel: audit: type=1101 audit(1768957434.243:894): pid=5935 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 01:03:54.251616 kernel: audit: type=1103 audit(1768957434.244:895): pid=5935 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 01:03:54.254280 systemd-logind[1938]: New session 22 of user core. Jan 21 01:03:54.255840 kernel: audit: type=1006 audit(1768957434.244:896): pid=5935 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=22 res=1 Jan 21 01:03:54.244000 audit[5935]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffaa5aed70 a2=3 a3=0 items=0 ppid=1 pid=5935 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:03:54.259256 kernel: audit: type=1300 audit(1768957434.244:896): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffaa5aed70 a2=3 a3=0 items=0 ppid=1 pid=5935 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:03:54.260524 systemd[1]: Started session-22.scope - Session 22 of User core. 
Jan 21 01:03:54.244000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 21 01:03:54.278482 kernel: audit: type=1327 audit(1768957434.244:896): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 21 01:03:54.278618 kernel: audit: type=1105 audit(1768957434.265:897): pid=5935 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 01:03:54.265000 audit[5935]: USER_START pid=5935 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 01:03:54.283410 kernel: audit: type=1103 audit(1768957434.267:898): pid=5939 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 01:03:54.267000 audit[5939]: CRED_ACQ pid=5939 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 01:03:54.583231 sshd[5939]: Connection closed by 68.220.241.50 port 56600 Jan 21 01:03:54.583892 sshd-session[5935]: pam_unix(sshd:session): session closed for user core Jan 21 01:03:54.586000 audit[5935]: USER_END pid=5935 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 01:03:54.597387 kernel: audit: type=1106 audit(1768957434.586:899): pid=5935 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 01:03:54.600703 systemd[1]: sshd@20-172.31.28.215:22-68.220.241.50:56600.service: Deactivated successfully. Jan 21 01:03:54.603926 systemd[1]: session-22.scope: Deactivated successfully. Jan 21 01:03:54.586000 audit[5935]: CRED_DISP pid=5935 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 01:03:54.612355 kernel: audit: type=1104 audit(1768957434.586:900): pid=5935 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 01:03:54.613461 systemd-logind[1938]: Session 22 logged out. Waiting for processes to exit. 
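The PROCTITLE value in these records is the process command line, hex-encoded because it contains spaces (NUL bytes separate argv elements in the raw title). Decoding the value shown above takes only standard-library Python; the helper name below is purely illustrative.

    # Audit PROCTITLE values are hex-encoded whenever the title contains spaces or
    # NUL separators; NUL bytes delimit argv elements in the raw title.
    def decode_proctitle(hexstr):
        return bytes.fromhex(hexstr).replace(b"\x00", b" ").decode("utf-8", "replace")

    # The value from the PROCTITLE record above:
    print(decode_proctitle("737368642D73657373696F6E3A20636F7265205B707269765D"))
    # -> sshd-session: core [priv]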
Jan 21 01:03:54.599000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-172.31.28.215:22-68.220.241.50:56600 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:03:54.619811 systemd-logind[1938]: Removed session 22. Jan 21 01:03:56.138457 kubelet[3468]: E0121 01:03:56.138242 3468 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9xsfz" podUID="76d94f9b-071e-45b7-9881-314d22adc37f" Jan 21 01:03:56.138457 kubelet[3468]: E0121 01:03:56.138367 3468 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-66b4f466fd-6gfxt" podUID="a6a90027-9f22-4c0c-9ffd-5b7564ee55c8" Jan 21 01:03:56.139409 kubelet[3468]: E0121 01:03:56.139306 3468 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-764d9fb657-cw7sh" podUID="5e4fb692-d42b-4062-b09a-d89e5a5a14cf" Jan 21 01:03:57.131681 kubelet[3468]: E0121 01:03:57.131630 3468 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b9ccb8fff-hdchl" podUID="0aa53f42-4d9d-4bec-8857-7799d5876dce" Jan 21 01:03:59.672505 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 21 01:03:59.672613 kernel: audit: type=1130 
audit(1768957439.665:902): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-172.31.28.215:22-68.220.241.50:56616 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:03:59.665000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-172.31.28.215:22-68.220.241.50:56616 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:03:59.666151 systemd[1]: Started sshd@21-172.31.28.215:22-68.220.241.50:56616.service - OpenSSH per-connection server daemon (68.220.241.50:56616). Jan 21 01:04:00.115000 audit[5951]: USER_ACCT pid=5951 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 01:04:00.121857 sshd-session[5951]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 21 01:04:00.123575 sshd[5951]: Accepted publickey for core from 68.220.241.50 port 56616 ssh2: RSA SHA256:ynuLn8tJCPqgpXkJmbCRq4xTnR0LSutdg0yVYFUgOn4 Jan 21 01:04:00.124606 kernel: audit: type=1101 audit(1768957440.115:903): pid=5951 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 01:04:00.118000 audit[5951]: CRED_ACQ pid=5951 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 01:04:00.132391 kernel: audit: type=1103 audit(1768957440.118:904): pid=5951 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 01:04:00.142745 systemd-logind[1938]: New session 23 of user core. Jan 21 01:04:00.146557 kernel: audit: type=1006 audit(1768957440.118:905): pid=5951 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1 Jan 21 01:04:00.146625 kernel: audit: type=1300 audit(1768957440.118:905): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd350a2490 a2=3 a3=0 items=0 ppid=1 pid=5951 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:04:00.118000 audit[5951]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd350a2490 a2=3 a3=0 items=0 ppid=1 pid=5951 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:04:00.118000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 21 01:04:00.153170 systemd[1]: Started session-23.scope - Session 23 of User core. 
Jan 21 01:04:00.155415 kernel: audit: type=1327 audit(1768957440.118:905): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 21 01:04:00.161000 audit[5951]: USER_START pid=5951 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 01:04:00.164000 audit[5955]: CRED_ACQ pid=5955 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 01:04:00.171166 kernel: audit: type=1105 audit(1768957440.161:906): pid=5951 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 01:04:00.171305 kernel: audit: type=1103 audit(1768957440.164:907): pid=5955 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 01:04:00.457807 sshd[5955]: Connection closed by 68.220.241.50 port 56616 Jan 21 01:04:00.458772 sshd-session[5951]: pam_unix(sshd:session): session closed for user core Jan 21 01:04:00.462000 audit[5951]: USER_END pid=5951 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 01:04:00.468376 systemd-logind[1938]: Session 23 logged out. Waiting for processes to exit. Jan 21 01:04:00.469121 systemd[1]: sshd@21-172.31.28.215:22-68.220.241.50:56616.service: Deactivated successfully. Jan 21 01:04:00.470931 kernel: audit: type=1106 audit(1768957440.462:908): pid=5951 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 01:04:00.462000 audit[5951]: CRED_DISP pid=5951 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 01:04:00.475944 systemd[1]: session-23.scope: Deactivated successfully. 
Jan 21 01:04:00.479294 kernel: audit: type=1104 audit(1768957440.462:909): pid=5951 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 01:04:00.465000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-172.31.28.215:22-68.220.241.50:56616 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:04:00.483814 systemd-logind[1938]: Removed session 23. Jan 21 01:04:01.133161 kubelet[3468]: E0121 01:04:01.133110 3468 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-95fb8fc69-kstsq" podUID="35190377-3d75-48f7-8c27-98f24edff14f" Jan 21 01:04:05.134084 kubelet[3468]: E0121 01:04:05.134029 3468 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b9ccb8fff-8h9ht" podUID="bec76da0-2399-4e6c-952b-9dc7ad88a302" Jan 21 01:04:05.134696 kubelet[3468]: E0121 01:04:05.134155 3468 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-vhxxc" podUID="b0fd67ff-2b5d-470f-b242-daa718038f97" Jan 21 01:04:05.547807 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 21 01:04:05.547901 kernel: audit: type=1130 audit(1768957445.545:911): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-172.31.28.215:22-68.220.241.50:42828 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:04:05.545000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-172.31.28.215:22-68.220.241.50:42828 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:04:05.546993 systemd[1]: Started sshd@22-172.31.28.215:22-68.220.241.50:42828.service - OpenSSH per-connection server daemon (68.220.241.50:42828). 
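Each incoming connection gets its own templated unit, named sshd@<seq>-<local addr>:<port>-<peer addr>:<port>.service, which is why a SERVICE_START/SERVICE_STOP pair brackets every session above. A small sketch for pulling the endpoints back out of such a unit name follows; the layout is inferred from the unit names in this log, so treat the pattern as an assumption rather than a documented format.

    # Split a per-connection sshd unit instance name into its endpoints. The
    # "<seq>-<local ip>:<port>-<peer ip>:<port>" layout is inferred from names like
    # sshd@22-172.31.28.215:22-68.220.241.50:42828.service seen in this log.
    import re

    UNIT = re.compile(r"^sshd@(\d+)-(?P<local>[\d.]+:\d+)-(?P<peer>[\d.]+:\d+)\.service$")

    m = UNIT.match("sshd@22-172.31.28.215:22-68.220.241.50:42828.service")
    if m:
        print("local", m.group("local"), "peer", m.group("peer"))
        # -> local 172.31.28.215:22 peer 68.220.241.50:42828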
Jan 21 01:04:05.984000 audit[5968]: USER_ACCT pid=5968 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 01:04:05.992258 kernel: audit: type=1101 audit(1768957445.984:912): pid=5968 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 01:04:05.993686 sshd[5968]: Accepted publickey for core from 68.220.241.50 port 42828 ssh2: RSA SHA256:ynuLn8tJCPqgpXkJmbCRq4xTnR0LSutdg0yVYFUgOn4 Jan 21 01:04:05.995397 sshd-session[5968]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 21 01:04:05.993000 audit[5968]: CRED_ACQ pid=5968 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 01:04:06.006238 kernel: audit: type=1103 audit(1768957445.993:913): pid=5968 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 01:04:06.010541 systemd-logind[1938]: New session 24 of user core. Jan 21 01:04:05.993000 audit[5968]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffedfea5760 a2=3 a3=0 items=0 ppid=1 pid=5968 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:04:06.017939 kernel: audit: type=1006 audit(1768957445.993:914): pid=5968 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1 Jan 21 01:04:06.018271 kernel: audit: type=1300 audit(1768957445.993:914): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffedfea5760 a2=3 a3=0 items=0 ppid=1 pid=5968 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:04:06.022078 systemd[1]: Started session-24.scope - Session 24 of User core. 
Jan 21 01:04:05.993000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 21 01:04:06.027233 kernel: audit: type=1327 audit(1768957445.993:914): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 21 01:04:06.028000 audit[5968]: USER_START pid=5968 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 01:04:06.038231 kernel: audit: type=1105 audit(1768957446.028:915): pid=5968 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 01:04:06.037000 audit[5972]: CRED_ACQ pid=5972 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 01:04:06.044235 kernel: audit: type=1103 audit(1768957446.037:916): pid=5972 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 01:04:06.352263 sshd[5972]: Connection closed by 68.220.241.50 port 42828 Jan 21 01:04:06.352424 sshd-session[5968]: pam_unix(sshd:session): session closed for user core Jan 21 01:04:06.354000 audit[5968]: USER_END pid=5968 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 01:04:06.361671 systemd[1]: sshd@22-172.31.28.215:22-68.220.241.50:42828.service: Deactivated successfully. Jan 21 01:04:06.354000 audit[5968]: CRED_DISP pid=5968 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 01:04:06.364770 kernel: audit: type=1106 audit(1768957446.354:917): pid=5968 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 01:04:06.364855 kernel: audit: type=1104 audit(1768957446.354:918): pid=5968 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 01:04:06.357000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-172.31.28.215:22-68.220.241.50:42828 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 21 01:04:06.369737 systemd[1]: session-24.scope: Deactivated successfully. Jan 21 01:04:06.373694 systemd-logind[1938]: Session 24 logged out. Waiting for processes to exit. Jan 21 01:04:06.375790 systemd-logind[1938]: Removed session 24. Jan 21 01:04:09.134053 kubelet[3468]: E0121 01:04:09.133822 3468 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-764d9fb657-cw7sh" podUID="5e4fb692-d42b-4062-b09a-d89e5a5a14cf" Jan 21 01:04:09.136397 kubelet[3468]: E0121 01:04:09.136230 3468 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9xsfz" podUID="76d94f9b-071e-45b7-9881-314d22adc37f" Jan 21 01:04:10.131793 kubelet[3468]: E0121 01:04:10.131741 3468 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b9ccb8fff-hdchl" podUID="0aa53f42-4d9d-4bec-8857-7799d5876dce" Jan 21 01:04:10.133365 kubelet[3468]: E0121 01:04:10.133288 3468 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-66b4f466fd-6gfxt" podUID="a6a90027-9f22-4c0c-9ffd-5b7564ee55c8" Jan 21 01:04:11.463499 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 21 01:04:11.463962 kernel: audit: type=1130 audit(1768957451.455:920): pid=1 uid=0 auid=4294967295 ses=4294967295 
subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-172.31.28.215:22-68.220.241.50:42838 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:04:11.455000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-172.31.28.215:22-68.220.241.50:42838 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:04:11.456605 systemd[1]: Started sshd@23-172.31.28.215:22-68.220.241.50:42838.service - OpenSSH per-connection server daemon (68.220.241.50:42838). Jan 21 01:04:11.942000 audit[5985]: USER_ACCT pid=5985 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 01:04:11.946690 sshd-session[5985]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 21 01:04:11.949332 sshd[5985]: Accepted publickey for core from 68.220.241.50 port 42838 ssh2: RSA SHA256:ynuLn8tJCPqgpXkJmbCRq4xTnR0LSutdg0yVYFUgOn4 Jan 21 01:04:11.950317 kernel: audit: type=1101 audit(1768957451.942:921): pid=5985 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 01:04:11.950569 kernel: audit: type=1103 audit(1768957451.942:922): pid=5985 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 01:04:11.942000 audit[5985]: CRED_ACQ pid=5985 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 01:04:11.942000 audit[5985]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc04bfb200 a2=3 a3=0 items=0 ppid=1 pid=5985 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:04:11.961828 kernel: audit: type=1006 audit(1768957451.942:923): pid=5985 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=25 res=1 Jan 21 01:04:11.961918 kernel: audit: type=1300 audit(1768957451.942:923): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc04bfb200 a2=3 a3=0 items=0 ppid=1 pid=5985 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:04:11.942000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 21 01:04:11.965930 systemd-logind[1938]: New session 25 of user core. Jan 21 01:04:11.969645 kernel: audit: type=1327 audit(1768957451.942:923): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 21 01:04:11.973049 systemd[1]: Started session-25.scope - Session 25 of User core. 
Jan 21 01:04:11.978000 audit[5985]: USER_START pid=5985 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 01:04:11.987344 kernel: audit: type=1105 audit(1768957451.978:924): pid=5985 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 01:04:11.987000 audit[5989]: CRED_ACQ pid=5989 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 01:04:11.993239 kernel: audit: type=1103 audit(1768957451.987:925): pid=5989 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 01:04:12.313717 sshd[5989]: Connection closed by 68.220.241.50 port 42838 Jan 21 01:04:12.315542 sshd-session[5985]: pam_unix(sshd:session): session closed for user core Jan 21 01:04:12.319000 audit[5985]: USER_END pid=5985 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 01:04:12.328243 kernel: audit: type=1106 audit(1768957452.319:926): pid=5985 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 01:04:12.328421 systemd[1]: sshd@23-172.31.28.215:22-68.220.241.50:42838.service: Deactivated successfully. Jan 21 01:04:12.331450 systemd[1]: session-25.scope: Deactivated successfully. Jan 21 01:04:12.334848 systemd-logind[1938]: Session 25 logged out. Waiting for processes to exit. Jan 21 01:04:12.319000 audit[5985]: CRED_DISP pid=5985 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 01:04:12.340246 kernel: audit: type=1104 audit(1768957452.319:927): pid=5985 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 01:04:12.327000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-172.31.28.215:22-68.220.241.50:42838 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:04:12.340370 systemd-logind[1938]: Removed session 25. 
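Because kubelet retries with back-off, the same "Error syncing pod, skipping" message recurs above for every affected pod on every cycle. When triaging a journal like this, it can help to collapse that noise into a per-image summary of which pods are stuck. The sketch below reads journal text on stdin; its regexes are assumptions derived from the message format shown here, not an official kubelet interface.

    # Collapse repeated kubelet "Error syncing pod, skipping" entries into a summary
    # of which image is failing for which pod. Journal text is expected on stdin.
    import re
    import sys
    from collections import defaultdict

    IMAGE = re.compile(r'Back-off pulling image \\+"([^\\"]+)\\+"')
    POD = re.compile(r'pod="([^"]+)"')

    failing = defaultdict(set)
    for line in sys.stdin:
        if "Error syncing pod" not in line:
            continue
        pod = POD.search(line)
        for image in IMAGE.findall(line):
            failing[image].add(pod.group(1) if pod else "?")

    for image in sorted(failing):
        print(image, "->", ", ".join(sorted(failing[image])))

Lines that report several containers at once (for example the csi-node-driver pod above) contribute one entry per image, since findall picks up every quoted image reference in the message.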
Jan 21 01:04:15.131159 kubelet[3468]: E0121 01:04:15.131112 3468 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-95fb8fc69-kstsq" podUID="35190377-3d75-48f7-8c27-98f24edff14f" Jan 21 01:04:17.398652 systemd[1]: Started sshd@24-172.31.28.215:22-68.220.241.50:58984.service - OpenSSH per-connection server daemon (68.220.241.50:58984). Jan 21 01:04:17.401802 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 21 01:04:17.401931 kernel: audit: type=1130 audit(1768957457.397:929): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-172.31.28.215:22-68.220.241.50:58984 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:04:17.397000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-172.31.28.215:22-68.220.241.50:58984 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:04:17.918321 sshd[6029]: Accepted publickey for core from 68.220.241.50 port 58984 ssh2: RSA SHA256:ynuLn8tJCPqgpXkJmbCRq4xTnR0LSutdg0yVYFUgOn4 Jan 21 01:04:17.916000 audit[6029]: USER_ACCT pid=6029 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 01:04:17.924421 kernel: audit: type=1101 audit(1768957457.916:930): pid=6029 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 01:04:17.924000 audit[6029]: CRED_ACQ pid=6029 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 01:04:17.928603 sshd-session[6029]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 21 01:04:17.933235 kernel: audit: type=1103 audit(1768957457.924:931): pid=6029 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 01:04:17.938239 kernel: audit: type=1006 audit(1768957457.924:932): pid=6029 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=26 res=1 Jan 21 01:04:17.924000 audit[6029]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff76897370 a2=3 a3=0 items=0 ppid=1 pid=6029 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:04:17.945237 kernel: audit: type=1300 audit(1768957457.924:932): arch=c000003e syscall=1 
success=yes exit=3 a0=8 a1=7fff76897370 a2=3 a3=0 items=0 ppid=1 pid=6029 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:04:17.950992 systemd-logind[1938]: New session 26 of user core. Jan 21 01:04:17.924000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 21 01:04:17.954245 kernel: audit: type=1327 audit(1768957457.924:932): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 21 01:04:17.955511 systemd[1]: Started session-26.scope - Session 26 of User core. Jan 21 01:04:17.962000 audit[6029]: USER_START pid=6029 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 01:04:17.972244 kernel: audit: type=1105 audit(1768957457.962:933): pid=6029 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 01:04:17.971000 audit[6033]: CRED_ACQ pid=6033 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 01:04:17.978305 kernel: audit: type=1103 audit(1768957457.971:934): pid=6033 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 01:04:18.134393 containerd[1962]: time="2026-01-21T01:04:18.134331180Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 21 01:04:18.327894 sshd[6033]: Connection closed by 68.220.241.50 port 58984 Jan 21 01:04:18.328407 sshd-session[6029]: pam_unix(sshd:session): session closed for user core Jan 21 01:04:18.332000 audit[6029]: USER_END pid=6029 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 01:04:18.341233 kernel: audit: type=1106 audit(1768957458.332:935): pid=6029 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 01:04:18.339000 audit[6029]: CRED_DISP pid=6029 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 01:04:18.344190 systemd-logind[1938]: Session 26 logged out. Waiting for processes to exit. 
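Every pull attempt in this journal ends with ghcr.io answering 404 for the flatcar/calico v3.30.4 tags, which containerd surfaces as "not found" and kubelet turns into ErrImagePull/ImagePullBackOff. To confirm from outside the cluster that the tag itself is absent (rather than an auth or network problem), one could query the registry's standard OCI distribution API directly. The sketch below assumes ghcr.io's usual anonymous pull-token flow and is not derived from the log itself.

    # Ask ghcr.io directly whether a tag exists, using the OCI distribution API with
    # an anonymous pull token. The endpoint layout and token flow are assumptions
    # about ghcr.io's public registry behaviour.
    import json
    import urllib.error
    import urllib.request

    def tag_exists(repo, tag):
        token = json.load(urllib.request.urlopen(
            f"https://ghcr.io/token?scope=repository:{repo}:pull"))["token"]
        req = urllib.request.Request(
            f"https://ghcr.io/v2/{repo}/manifests/{tag}",
            method="HEAD",
            headers={
                "Authorization": f"Bearer {token}",
                "Accept": "application/vnd.oci.image.index.v1+json, "
                          "application/vnd.docker.distribution.manifest.list.v2+json",
            },
        )
        try:
            urllib.request.urlopen(req)
            return True
        except urllib.error.HTTPError as err:
            if err.code == 404:  # the same "not found" containerd keeps reporting
                return False
            raise

    # Per the 404 responses recorded in this journal, this would print False.
    print(tag_exists("flatcar/calico/goldmane", "v3.30.4"))

A HEAD request is used so that only the status code is fetched; any status other than 200 or 404 would point at a different failure mode than the one captured here.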
Jan 21 01:04:18.346904 systemd[1]: sshd@24-172.31.28.215:22-68.220.241.50:58984.service: Deactivated successfully. Jan 21 01:04:18.349261 kernel: audit: type=1104 audit(1768957458.339:936): pid=6029 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 21 01:04:18.350859 systemd[1]: session-26.scope: Deactivated successfully. Jan 21 01:04:18.345000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-172.31.28.215:22-68.220.241.50:58984 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:04:18.355148 systemd-logind[1938]: Removed session 26. Jan 21 01:04:18.600908 containerd[1962]: time="2026-01-21T01:04:18.600489429Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 21 01:04:18.602824 containerd[1962]: time="2026-01-21T01:04:18.602767192Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 21 01:04:18.604336 containerd[1962]: time="2026-01-21T01:04:18.602856055Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 21 01:04:18.604412 kubelet[3468]: E0121 01:04:18.602997 3468 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 21 01:04:18.604412 kubelet[3468]: E0121 01:04:18.603038 3468 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 21 01:04:18.604412 kubelet[3468]: E0121 01:04:18.603166 3468 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pqxsf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-vhxxc_calico-system(b0fd67ff-2b5d-470f-b242-daa718038f97): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 21 01:04:18.604935 kubelet[3468]: E0121 01:04:18.604615 3468 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-vhxxc" podUID="b0fd67ff-2b5d-470f-b242-daa718038f97" Jan 21 01:04:19.132133 containerd[1962]: time="2026-01-21T01:04:19.131651897Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 
21 01:04:19.386963 containerd[1962]: time="2026-01-21T01:04:19.386807469Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 21 01:04:19.389362 containerd[1962]: time="2026-01-21T01:04:19.389299999Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 21 01:04:19.389501 containerd[1962]: time="2026-01-21T01:04:19.389415564Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 21 01:04:19.389665 kubelet[3468]: E0121 01:04:19.389623 3468 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 21 01:04:19.389872 kubelet[3468]: E0121 01:04:19.389846 3468 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 21 01:04:19.390625 kubelet[3468]: E0121 01:04:19.390489 3468 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-98vll,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start 
failed in pod calico-apiserver-6b9ccb8fff-8h9ht_calico-apiserver(bec76da0-2399-4e6c-952b-9dc7ad88a302): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 21 01:04:19.392048 kubelet[3468]: E0121 01:04:19.391954 3468 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b9ccb8fff-8h9ht" podUID="bec76da0-2399-4e6c-952b-9dc7ad88a302" Jan 21 01:04:20.134609 kubelet[3468]: E0121 01:04:20.134560 3468 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9xsfz" podUID="76d94f9b-071e-45b7-9881-314d22adc37f" Jan 21 01:04:21.132248 containerd[1962]: time="2026-01-21T01:04:21.132119061Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 21 01:04:21.537728 containerd[1962]: time="2026-01-21T01:04:21.537683672Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 21 01:04:21.539911 containerd[1962]: time="2026-01-21T01:04:21.539852250Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 21 01:04:21.540170 containerd[1962]: time="2026-01-21T01:04:21.539939482Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 21 01:04:21.540239 kubelet[3468]: E0121 01:04:21.540077 3468 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 21 01:04:21.540239 kubelet[3468]: E0121 01:04:21.540118 3468 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 21 01:04:21.540627 kubelet[3468]: E0121 01:04:21.540492 3468 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-57wl7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6b9ccb8fff-hdchl_calico-apiserver(0aa53f42-4d9d-4bec-8857-7799d5876dce): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 21 01:04:21.541726 kubelet[3468]: E0121 01:04:21.541689 3468 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b9ccb8fff-hdchl" podUID="0aa53f42-4d9d-4bec-8857-7799d5876dce" Jan 21 01:04:24.132817 containerd[1962]: time="2026-01-21T01:04:24.132730445Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 21 01:04:24.379764 containerd[1962]: time="2026-01-21T01:04:24.379720924Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 21 01:04:24.381905 containerd[1962]: time="2026-01-21T01:04:24.381854293Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 21 01:04:24.382169 
containerd[1962]: time="2026-01-21T01:04:24.381949531Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 21 01:04:24.382236 kubelet[3468]: E0121 01:04:24.382164 3468 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 21 01:04:24.382236 kubelet[3468]: E0121 01:04:24.382227 3468 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 21 01:04:24.382626 kubelet[3468]: E0121 01:04:24.382358 3468 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vxm28,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
calico-kube-controllers-764d9fb657-cw7sh_calico-system(5e4fb692-d42b-4062-b09a-d89e5a5a14cf): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 21 01:04:24.383624 kubelet[3468]: E0121 01:04:24.383516 3468 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-764d9fb657-cw7sh" podUID="5e4fb692-d42b-4062-b09a-d89e5a5a14cf" Jan 21 01:04:25.131854 containerd[1962]: time="2026-01-21T01:04:25.131808695Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 21 01:04:25.401944 containerd[1962]: time="2026-01-21T01:04:25.401822989Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 21 01:04:25.404219 containerd[1962]: time="2026-01-21T01:04:25.404004467Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 21 01:04:25.404219 containerd[1962]: time="2026-01-21T01:04:25.404104347Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 21 01:04:25.404938 kubelet[3468]: E0121 01:04:25.404467 3468 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 21 01:04:25.404938 kubelet[3468]: E0121 01:04:25.404527 3468 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 21 01:04:25.404938 kubelet[3468]: E0121 01:04:25.404643 3468 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:70d67053a2a44bfca2c3c6f84dd42b97,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ssm6m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-66b4f466fd-6gfxt_calico-system(a6a90027-9f22-4c0c-9ffd-5b7564ee55c8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 21 01:04:25.406614 containerd[1962]: time="2026-01-21T01:04:25.406563868Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 21 01:04:25.666816 containerd[1962]: time="2026-01-21T01:04:25.666687000Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 21 01:04:25.669256 containerd[1962]: time="2026-01-21T01:04:25.669185183Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 21 01:04:25.669413 containerd[1962]: time="2026-01-21T01:04:25.669298073Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 21 01:04:25.669522 kubelet[3468]: E0121 01:04:25.669485 3468 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 21 01:04:25.669600 kubelet[3468]: E0121 01:04:25.669532 3468 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 21 01:04:25.669672 kubelet[3468]: E0121 01:04:25.669637 3468 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ssm6m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-66b4f466fd-6gfxt_calico-system(a6a90027-9f22-4c0c-9ffd-5b7564ee55c8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 21 01:04:25.670853 kubelet[3468]: E0121 01:04:25.670801 3468 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-66b4f466fd-6gfxt" podUID="a6a90027-9f22-4c0c-9ffd-5b7564ee55c8" Jan 21 01:04:29.131672 containerd[1962]: time="2026-01-21T01:04:29.131606097Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 21 01:04:29.406297 containerd[1962]: time="2026-01-21T01:04:29.406130150Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 21 01:04:29.408631 containerd[1962]: time="2026-01-21T01:04:29.408573776Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 
21 01:04:29.408775 containerd[1962]: time="2026-01-21T01:04:29.408664208Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 21 01:04:29.408873 kubelet[3468]: E0121 01:04:29.408836 3468 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 21 01:04:29.409192 kubelet[3468]: E0121 01:04:29.408880 3468 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 21 01:04:29.409192 kubelet[3468]: E0121 01:04:29.408998 3468 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c6nw7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-95fb8fc69-kstsq_calico-apiserver(35190377-3d75-48f7-8c27-98f24edff14f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 21 01:04:29.410249 kubelet[3468]: E0121 01:04:29.410177 3468 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with 
ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-95fb8fc69-kstsq" podUID="35190377-3d75-48f7-8c27-98f24edff14f" Jan 21 01:04:31.131594 kubelet[3468]: E0121 01:04:31.131532 3468 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b9ccb8fff-8h9ht" podUID="bec76da0-2399-4e6c-952b-9dc7ad88a302" Jan 21 01:04:32.132148 kubelet[3468]: E0121 01:04:32.131784 3468 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b9ccb8fff-hdchl" podUID="0aa53f42-4d9d-4bec-8857-7799d5876dce" Jan 21 01:04:32.832636 systemd[1]: cri-containerd-774a4cf037534b99988d7faa8a7762e92f3d88f36d5d5c012b7f8b498838f9dd.scope: Deactivated successfully. Jan 21 01:04:32.832975 systemd[1]: cri-containerd-774a4cf037534b99988d7faa8a7762e92f3d88f36d5d5c012b7f8b498838f9dd.scope: Consumed 4.667s CPU time, 98.8M memory peak, 88.8M read from disk. Jan 21 01:04:32.835403 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 21 01:04:32.835478 kernel: audit: type=1334 audit(1768957472.832:938): prog-id=268 op=LOAD Jan 21 01:04:32.832000 audit: BPF prog-id=268 op=LOAD Jan 21 01:04:32.832000 audit: BPF prog-id=90 op=UNLOAD Jan 21 01:04:32.838203 kernel: audit: type=1334 audit(1768957472.832:939): prog-id=90 op=UNLOAD Jan 21 01:04:32.837000 audit: BPF prog-id=115 op=UNLOAD Jan 21 01:04:32.840247 kernel: audit: type=1334 audit(1768957472.837:940): prog-id=115 op=UNLOAD Jan 21 01:04:32.840332 kernel: audit: type=1334 audit(1768957472.837:941): prog-id=119 op=UNLOAD Jan 21 01:04:32.837000 audit: BPF prog-id=119 op=UNLOAD Jan 21 01:04:32.925601 containerd[1962]: time="2026-01-21T01:04:32.925536821Z" level=info msg="received container exit event container_id:\"774a4cf037534b99988d7faa8a7762e92f3d88f36d5d5c012b7f8b498838f9dd\" id:\"774a4cf037534b99988d7faa8a7762e92f3d88f36d5d5c012b7f8b498838f9dd\" pid:3132 exit_status:1 exited_at:{seconds:1768957472 nanos:839527508}" Jan 21 01:04:32.979702 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-774a4cf037534b99988d7faa8a7762e92f3d88f36d5d5c012b7f8b498838f9dd-rootfs.mount: Deactivated successfully. 
Jan 21 01:04:33.132468 containerd[1962]: time="2026-01-21T01:04:33.132282940Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 21 01:04:33.244954 kubelet[3468]: I0121 01:04:33.244849 3468 scope.go:117] "RemoveContainer" containerID="774a4cf037534b99988d7faa8a7762e92f3d88f36d5d5c012b7f8b498838f9dd" Jan 21 01:04:33.275689 containerd[1962]: time="2026-01-21T01:04:33.275654417Z" level=info msg="CreateContainer within sandbox \"4b9b8d425f1b0456c5d2f163d77c50aaa5492fc13af358271d23f039583f0e0c\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Jan 21 01:04:33.335421 containerd[1962]: time="2026-01-21T01:04:33.335375619Z" level=info msg="Container 5ee72eb1ec9d0149cd156ed0b8a69cf3815305fe324c1ef2ab615bbd1f0a14e0: CDI devices from CRI Config.CDIDevices: []" Jan 21 01:04:33.337247 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1661687234.mount: Deactivated successfully. Jan 21 01:04:33.349832 containerd[1962]: time="2026-01-21T01:04:33.349790688Z" level=info msg="CreateContainer within sandbox \"4b9b8d425f1b0456c5d2f163d77c50aaa5492fc13af358271d23f039583f0e0c\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"5ee72eb1ec9d0149cd156ed0b8a69cf3815305fe324c1ef2ab615bbd1f0a14e0\"" Jan 21 01:04:33.350411 containerd[1962]: time="2026-01-21T01:04:33.350377396Z" level=info msg="StartContainer for \"5ee72eb1ec9d0149cd156ed0b8a69cf3815305fe324c1ef2ab615bbd1f0a14e0\"" Jan 21 01:04:33.351502 containerd[1962]: time="2026-01-21T01:04:33.351472945Z" level=info msg="connecting to shim 5ee72eb1ec9d0149cd156ed0b8a69cf3815305fe324c1ef2ab615bbd1f0a14e0" address="unix:///run/containerd/s/39cd39532a5345f5f687e58c3f9bde71bce131639639ebc20ef4d38b4e7d6e4d" protocol=ttrpc version=3 Jan 21 01:04:33.380677 systemd[1]: Started cri-containerd-5ee72eb1ec9d0149cd156ed0b8a69cf3815305fe324c1ef2ab615bbd1f0a14e0.scope - libcontainer container 5ee72eb1ec9d0149cd156ed0b8a69cf3815305fe324c1ef2ab615bbd1f0a14e0. 
Jan 21 01:04:33.398000 audit: BPF prog-id=269 op=LOAD Jan 21 01:04:33.401398 kernel: audit: type=1334 audit(1768957473.398:942): prog-id=269 op=LOAD Jan 21 01:04:33.400000 audit: BPF prog-id=270 op=LOAD Jan 21 01:04:33.400000 audit[6084]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2967 pid=6084 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:04:33.405125 kernel: audit: type=1334 audit(1768957473.400:943): prog-id=270 op=LOAD Jan 21 01:04:33.405183 kernel: audit: type=1300 audit(1768957473.400:943): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2967 pid=6084 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:04:33.400000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3565653732656231656339643031343963643135366564306238613639 Jan 21 01:04:33.409301 containerd[1962]: time="2026-01-21T01:04:33.408787232Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 21 01:04:33.410254 kernel: audit: type=1327 audit(1768957473.400:943): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3565653732656231656339643031343963643135366564306238613639 Jan 21 01:04:33.414492 kernel: audit: type=1334 audit(1768957473.400:944): prog-id=270 op=UNLOAD Jan 21 01:04:33.400000 audit: BPF prog-id=270 op=UNLOAD Jan 21 01:04:33.400000 audit[6084]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2967 pid=6084 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:04:33.416534 containerd[1962]: time="2026-01-21T01:04:33.415405349Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 21 01:04:33.416534 containerd[1962]: time="2026-01-21T01:04:33.415493751Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 21 01:04:33.416641 kernel: audit: type=1300 audit(1768957473.400:944): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2967 pid=6084 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:04:33.416740 kubelet[3468]: E0121 01:04:33.416711 3468 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 21 01:04:33.417080 kubelet[3468]: E0121 01:04:33.416897 3468 kuberuntime_image.go:55] "Failed to pull image" 
err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 21 01:04:33.417080 kubelet[3468]: E0121 01:04:33.417018 3468 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z7shk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-9xsfz_calico-system(76d94f9b-071e-45b7-9881-314d22adc37f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 21 01:04:33.400000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3565653732656231656339643031343963643135366564306238613639 Jan 21 01:04:33.401000 audit: BPF prog-id=271 op=LOAD Jan 21 01:04:33.401000 audit[6084]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2967 pid=6084 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:04:33.401000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3565653732656231656339643031343963643135366564306238613639 Jan 21 01:04:33.401000 audit: BPF prog-id=272 op=LOAD Jan 21 01:04:33.401000 audit[6084]: 
SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=2967 pid=6084 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:04:33.401000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3565653732656231656339643031343963643135366564306238613639 Jan 21 01:04:33.401000 audit: BPF prog-id=272 op=UNLOAD Jan 21 01:04:33.401000 audit[6084]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2967 pid=6084 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:04:33.401000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3565653732656231656339643031343963643135366564306238613639 Jan 21 01:04:33.401000 audit: BPF prog-id=271 op=UNLOAD Jan 21 01:04:33.401000 audit[6084]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2967 pid=6084 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:04:33.401000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3565653732656231656339643031343963643135366564306238613639 Jan 21 01:04:33.401000 audit: BPF prog-id=273 op=LOAD Jan 21 01:04:33.401000 audit[6084]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=2967 pid=6084 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:04:33.401000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3565653732656231656339643031343963643135366564306238613639 Jan 21 01:04:33.422197 containerd[1962]: time="2026-01-21T01:04:33.422176006Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 21 01:04:33.466129 containerd[1962]: time="2026-01-21T01:04:33.466092592Z" level=info msg="StartContainer for \"5ee72eb1ec9d0149cd156ed0b8a69cf3815305fe324c1ef2ab615bbd1f0a14e0\" returns successfully" Jan 21 01:04:33.690641 containerd[1962]: time="2026-01-21T01:04:33.690516198Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 21 01:04:33.692768 containerd[1962]: time="2026-01-21T01:04:33.692711545Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 21 
01:04:33.692886 containerd[1962]: time="2026-01-21T01:04:33.692798690Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 21 01:04:33.693124 kubelet[3468]: E0121 01:04:33.693065 3468 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 21 01:04:33.693205 kubelet[3468]: E0121 01:04:33.693134 3468 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 21 01:04:33.693361 kubelet[3468]: E0121 01:04:33.693307 3468 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z7shk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-9xsfz_calico-system(76d94f9b-071e-45b7-9881-314d22adc37f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 21 01:04:33.694590 kubelet[3468]: E0121 01:04:33.694480 3468 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and 
unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9xsfz" podUID="76d94f9b-071e-45b7-9881-314d22adc37f" Jan 21 01:04:33.781233 systemd[1]: cri-containerd-201103dc9e7f8039a1bc1d0da0e54085e85462bd085d806f0a507f55a45eb9ea.scope: Deactivated successfully. Jan 21 01:04:33.781677 systemd[1]: cri-containerd-201103dc9e7f8039a1bc1d0da0e54085e85462bd085d806f0a507f55a45eb9ea.scope: Consumed 16.540s CPU time, 109.3M memory peak, 48M read from disk. Jan 21 01:04:33.784966 containerd[1962]: time="2026-01-21T01:04:33.784826666Z" level=info msg="received container exit event container_id:\"201103dc9e7f8039a1bc1d0da0e54085e85462bd085d806f0a507f55a45eb9ea\" id:\"201103dc9e7f8039a1bc1d0da0e54085e85462bd085d806f0a507f55a45eb9ea\" pid:3709 exit_status:1 exited_at:{seconds:1768957473 nanos:783703819}" Jan 21 01:04:33.784000 audit: BPF prog-id=153 op=UNLOAD Jan 21 01:04:33.784000 audit: BPF prog-id=157 op=UNLOAD Jan 21 01:04:33.982346 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-201103dc9e7f8039a1bc1d0da0e54085e85462bd085d806f0a507f55a45eb9ea-rootfs.mount: Deactivated successfully. Jan 21 01:04:34.170270 kubelet[3468]: E0121 01:04:34.169757 3468 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-vhxxc" podUID="b0fd67ff-2b5d-470f-b242-daa718038f97" Jan 21 01:04:34.243590 kubelet[3468]: I0121 01:04:34.243482 3468 scope.go:117] "RemoveContainer" containerID="201103dc9e7f8039a1bc1d0da0e54085e85462bd085d806f0a507f55a45eb9ea" Jan 21 01:04:34.259642 containerd[1962]: time="2026-01-21T01:04:34.259594010Z" level=info msg="CreateContainer within sandbox \"f58941a7c84eb20ccb3ed5359ada202e94cab4f54e8107c00617efda4919f643\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Jan 21 01:04:34.304237 containerd[1962]: time="2026-01-21T01:04:34.304157584Z" level=info msg="Container fa6d8d0e0d6eb21da16e17375c320fa0176419a6f76907cf74de1da7d7a6caf0: CDI devices from CRI Config.CDIDevices: []" Jan 21 01:04:34.325189 containerd[1962]: time="2026-01-21T01:04:34.325140404Z" level=info msg="CreateContainer within sandbox \"f58941a7c84eb20ccb3ed5359ada202e94cab4f54e8107c00617efda4919f643\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"fa6d8d0e0d6eb21da16e17375c320fa0176419a6f76907cf74de1da7d7a6caf0\"" Jan 21 01:04:34.325952 containerd[1962]: time="2026-01-21T01:04:34.325913237Z" level=info msg="StartContainer for \"fa6d8d0e0d6eb21da16e17375c320fa0176419a6f76907cf74de1da7d7a6caf0\"" Jan 21 01:04:34.327467 containerd[1962]: time="2026-01-21T01:04:34.327372367Z" level=info msg="connecting to shim fa6d8d0e0d6eb21da16e17375c320fa0176419a6f76907cf74de1da7d7a6caf0" address="unix:///run/containerd/s/7fd096551bd9d6c67ab91c4bdfa2e855f5647926d10d32d3a02168baa963ae9c" protocol=ttrpc 
version=3 Jan 21 01:04:34.364679 systemd[1]: Started cri-containerd-fa6d8d0e0d6eb21da16e17375c320fa0176419a6f76907cf74de1da7d7a6caf0.scope - libcontainer container fa6d8d0e0d6eb21da16e17375c320fa0176419a6f76907cf74de1da7d7a6caf0. Jan 21 01:04:34.427000 audit: BPF prog-id=274 op=LOAD Jan 21 01:04:34.428000 audit: BPF prog-id=275 op=LOAD Jan 21 01:04:34.428000 audit[6126]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=3602 pid=6126 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:04:34.428000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6661366438643065306436656232316461313665313733373563333230 Jan 21 01:04:34.428000 audit: BPF prog-id=275 op=UNLOAD Jan 21 01:04:34.428000 audit[6126]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3602 pid=6126 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:04:34.428000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6661366438643065306436656232316461313665313733373563333230 Jan 21 01:04:34.428000 audit: BPF prog-id=276 op=LOAD Jan 21 01:04:34.428000 audit[6126]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=3602 pid=6126 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:04:34.428000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6661366438643065306436656232316461313665313733373563333230 Jan 21 01:04:34.428000 audit: BPF prog-id=277 op=LOAD Jan 21 01:04:34.428000 audit[6126]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=3602 pid=6126 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:04:34.428000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6661366438643065306436656232316461313665313733373563333230 Jan 21 01:04:34.428000 audit: BPF prog-id=277 op=UNLOAD Jan 21 01:04:34.428000 audit[6126]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3602 pid=6126 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:04:34.428000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6661366438643065306436656232316461313665313733373563333230 Jan 21 01:04:34.428000 audit: BPF prog-id=276 op=UNLOAD Jan 21 01:04:34.428000 audit[6126]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3602 pid=6126 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:04:34.428000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6661366438643065306436656232316461313665313733373563333230 Jan 21 01:04:34.428000 audit: BPF prog-id=278 op=LOAD Jan 21 01:04:34.428000 audit[6126]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=3602 pid=6126 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:04:34.428000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6661366438643065306436656232316461313665313733373563333230 Jan 21 01:04:34.470945 containerd[1962]: time="2026-01-21T01:04:34.470903810Z" level=info msg="StartContainer for \"fa6d8d0e0d6eb21da16e17375c320fa0176419a6f76907cf74de1da7d7a6caf0\" returns successfully" Jan 21 01:04:37.131844 kubelet[3468]: E0121 01:04:37.131796 3468 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-764d9fb657-cw7sh" podUID="5e4fb692-d42b-4062-b09a-d89e5a5a14cf" Jan 21 01:04:37.696549 systemd[1]: cri-containerd-522e7555334ca6e98b89d3e3005986365587286a99d0f29f88211ebf06030d8a.scope: Deactivated successfully. Jan 21 01:04:37.697417 systemd[1]: cri-containerd-522e7555334ca6e98b89d3e3005986365587286a99d0f29f88211ebf06030d8a.scope: Consumed 3.621s CPU time, 39.1M memory peak, 32.4M read from disk. Jan 21 01:04:37.697000 audit: BPF prog-id=110 op=UNLOAD Jan 21 01:04:37.697000 audit: BPF prog-id=114 op=UNLOAD Jan 21 01:04:37.698745 containerd[1962]: time="2026-01-21T01:04:37.698029127Z" level=info msg="received container exit event container_id:\"522e7555334ca6e98b89d3e3005986365587286a99d0f29f88211ebf06030d8a\" id:\"522e7555334ca6e98b89d3e3005986365587286a99d0f29f88211ebf06030d8a\" pid:3124 exit_status:1 exited_at:{seconds:1768957477 nanos:696977134}" Jan 21 01:04:37.697000 audit: BPF prog-id=279 op=LOAD Jan 21 01:04:37.697000 audit: BPF prog-id=100 op=UNLOAD Jan 21 01:04:37.729588 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-522e7555334ca6e98b89d3e3005986365587286a99d0f29f88211ebf06030d8a-rootfs.mount: Deactivated successfully. 
Jan 21 01:04:38.260015 kubelet[3468]: I0121 01:04:38.259372 3468 scope.go:117] "RemoveContainer" containerID="522e7555334ca6e98b89d3e3005986365587286a99d0f29f88211ebf06030d8a" Jan 21 01:04:38.262587 containerd[1962]: time="2026-01-21T01:04:38.262554276Z" level=info msg="CreateContainer within sandbox \"b9b6c94412847c06a2e6c9639da9efa42f1580051d23324414e9f19b414de701\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Jan 21 01:04:38.279032 containerd[1962]: time="2026-01-21T01:04:38.278983071Z" level=info msg="Container f10be8e5e7c6ee4ccff9f03eb2c4b29280ec616378eec8ece2b3117341156c2d: CDI devices from CRI Config.CDIDevices: []" Jan 21 01:04:38.294410 containerd[1962]: time="2026-01-21T01:04:38.294349545Z" level=info msg="CreateContainer within sandbox \"b9b6c94412847c06a2e6c9639da9efa42f1580051d23324414e9f19b414de701\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"f10be8e5e7c6ee4ccff9f03eb2c4b29280ec616378eec8ece2b3117341156c2d\"" Jan 21 01:04:38.294960 containerd[1962]: time="2026-01-21T01:04:38.294928060Z" level=info msg="StartContainer for \"f10be8e5e7c6ee4ccff9f03eb2c4b29280ec616378eec8ece2b3117341156c2d\"" Jan 21 01:04:38.295965 containerd[1962]: time="2026-01-21T01:04:38.295938742Z" level=info msg="connecting to shim f10be8e5e7c6ee4ccff9f03eb2c4b29280ec616378eec8ece2b3117341156c2d" address="unix:///run/containerd/s/e9c98797218a5c402829dceb88208731e7a0c520f881c3d6caaca3f1ee4524b9" protocol=ttrpc version=3 Jan 21 01:04:38.329494 systemd[1]: Started cri-containerd-f10be8e5e7c6ee4ccff9f03eb2c4b29280ec616378eec8ece2b3117341156c2d.scope - libcontainer container f10be8e5e7c6ee4ccff9f03eb2c4b29280ec616378eec8ece2b3117341156c2d. Jan 21 01:04:38.344983 kernel: kauditd_printk_skb: 44 callbacks suppressed Jan 21 01:04:38.345115 kernel: audit: type=1334 audit(1768957478.342:964): prog-id=280 op=LOAD Jan 21 01:04:38.342000 audit: BPF prog-id=280 op=LOAD Jan 21 01:04:38.350733 kernel: audit: type=1334 audit(1768957478.349:965): prog-id=281 op=LOAD Jan 21 01:04:38.349000 audit: BPF prog-id=281 op=LOAD Jan 21 01:04:38.349000 audit[6169]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00010c238 a2=98 a3=0 items=0 ppid=2983 pid=6169 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:04:38.356278 kernel: audit: type=1300 audit(1768957478.349:965): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00010c238 a2=98 a3=0 items=0 ppid=2983 pid=6169 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:04:38.349000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6631306265386535653763366565346363666639663033656232633462 Jan 21 01:04:38.369743 kernel: audit: type=1327 audit(1768957478.349:965): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6631306265386535653763366565346363666639663033656232633462 Jan 21 01:04:38.375027 kernel: audit: type=1334 audit(1768957478.349:966): prog-id=281 op=UNLOAD Jan 21 01:04:38.375175 kernel: audit: type=1300 
audit(1768957478.349:966): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2983 pid=6169 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:04:38.349000 audit: BPF prog-id=281 op=UNLOAD Jan 21 01:04:38.349000 audit[6169]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2983 pid=6169 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:04:38.349000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6631306265386535653763366565346363666639663033656232633462 Jan 21 01:04:38.382705 kernel: audit: type=1327 audit(1768957478.349:966): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6631306265386535653763366565346363666639663033656232633462 Jan 21 01:04:38.349000 audit: BPF prog-id=282 op=LOAD Jan 21 01:04:38.387508 kernel: audit: type=1334 audit(1768957478.349:967): prog-id=282 op=LOAD Jan 21 01:04:38.349000 audit[6169]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00010c488 a2=98 a3=0 items=0 ppid=2983 pid=6169 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:04:38.390017 kernel: audit: type=1300 audit(1768957478.349:967): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00010c488 a2=98 a3=0 items=0 ppid=2983 pid=6169 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:04:38.349000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6631306265386535653763366565346363666639663033656232633462 Jan 21 01:04:38.396087 kernel: audit: type=1327 audit(1768957478.349:967): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6631306265386535653763366565346363666639663033656232633462 Jan 21 01:04:38.349000 audit: BPF prog-id=283 op=LOAD Jan 21 01:04:38.349000 audit[6169]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00010c218 a2=98 a3=0 items=0 ppid=2983 pid=6169 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:04:38.349000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6631306265386535653763366565346363666639663033656232633462 Jan 21 01:04:38.350000 audit: BPF prog-id=283 op=UNLOAD Jan 21 01:04:38.350000 
audit[6169]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2983 pid=6169 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:04:38.350000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6631306265386535653763366565346363666639663033656232633462 Jan 21 01:04:38.350000 audit: BPF prog-id=282 op=UNLOAD Jan 21 01:04:38.350000 audit[6169]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2983 pid=6169 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:04:38.350000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6631306265386535653763366565346363666639663033656232633462 Jan 21 01:04:38.350000 audit: BPF prog-id=284 op=LOAD Jan 21 01:04:38.350000 audit[6169]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00010c6e8 a2=98 a3=0 items=0 ppid=2983 pid=6169 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:04:38.350000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6631306265386535653763366565346363666639663033656232633462 Jan 21 01:04:38.426249 containerd[1962]: time="2026-01-21T01:04:38.426150323Z" level=info msg="StartContainer for \"f10be8e5e7c6ee4ccff9f03eb2c4b29280ec616378eec8ece2b3117341156c2d\" returns successfully" Jan 21 01:04:39.800855 kubelet[3468]: E0121 01:04:39.800791 3468 controller.go:195] "Failed to update lease" err="Put \"https://172.31.28.215:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-28-215?timeout=10s\": context deadline exceeded" Jan 21 01:04:40.132631 kubelet[3468]: E0121 01:04:40.132405 3468 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-95fb8fc69-kstsq" podUID="35190377-3d75-48f7-8c27-98f24edff14f" Jan 21 01:04:40.133638 kubelet[3468]: E0121 01:04:40.133592 3468 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with 
ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-66b4f466fd-6gfxt" podUID="a6a90027-9f22-4c0c-9ffd-5b7564ee55c8" Jan 21 01:04:44.132710 kubelet[3468]: E0121 01:04:44.132656 3468 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b9ccb8fff-8h9ht" podUID="bec76da0-2399-4e6c-952b-9dc7ad88a302" Jan 21 01:04:44.133206 kubelet[3468]: E0121 01:04:44.132970 3468 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9xsfz" podUID="76d94f9b-071e-45b7-9881-314d22adc37f" Jan 21 01:04:45.132188 kubelet[3468]: E0121 01:04:45.132143 3468 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b9ccb8fff-hdchl" podUID="0aa53f42-4d9d-4bec-8857-7799d5876dce" Jan 21 01:04:45.132568 kubelet[3468]: E0121 01:04:45.132252 3468 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-vhxxc" podUID="b0fd67ff-2b5d-470f-b242-daa718038f97" Jan 21 01:04:46.926023 systemd[1]: cri-containerd-fa6d8d0e0d6eb21da16e17375c320fa0176419a6f76907cf74de1da7d7a6caf0.scope: Deactivated successfully. Jan 21 01:04:46.926852 systemd[1]: cri-containerd-fa6d8d0e0d6eb21da16e17375c320fa0176419a6f76907cf74de1da7d7a6caf0.scope: Consumed 314ms CPU time, 68.8M memory peak, 34.2M read from disk. 
Jan 21 01:04:46.928865 containerd[1962]: time="2026-01-21T01:04:46.928710030Z" level=info msg="received container exit event container_id:\"fa6d8d0e0d6eb21da16e17375c320fa0176419a6f76907cf74de1da7d7a6caf0\" id:\"fa6d8d0e0d6eb21da16e17375c320fa0176419a6f76907cf74de1da7d7a6caf0\" pid:6138 exit_status:1 exited_at:{seconds:1768957486 nanos:927944305}"
Jan 21 01:04:46.928000 audit: BPF prog-id=274 op=UNLOAD
Jan 21 01:04:46.930489 kernel: kauditd_printk_skb: 12 callbacks suppressed
Jan 21 01:04:46.931297 kernel: audit: type=1334 audit(1768957486.928:972): prog-id=274 op=UNLOAD
Jan 21 01:04:46.930000 audit: BPF prog-id=278 op=UNLOAD
Jan 21 01:04:46.932549 kernel: audit: type=1334 audit(1768957486.930:973): prog-id=278 op=UNLOAD
Jan 21 01:04:46.955167 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fa6d8d0e0d6eb21da16e17375c320fa0176419a6f76907cf74de1da7d7a6caf0-rootfs.mount: Deactivated successfully.
Jan 21 01:04:47.283888 kubelet[3468]: I0121 01:04:47.283784 3468 scope.go:117] "RemoveContainer" containerID="201103dc9e7f8039a1bc1d0da0e54085e85462bd085d806f0a507f55a45eb9ea"
Jan 21 01:04:47.284834 kubelet[3468]: I0121 01:04:47.284553 3468 scope.go:117] "RemoveContainer" containerID="fa6d8d0e0d6eb21da16e17375c320fa0176419a6f76907cf74de1da7d7a6caf0"
Jan 21 01:04:47.284834 kubelet[3468]: E0121 01:04:47.284789 3468 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=tigera-operator pod=tigera-operator-7dcd859c48-dsbx6_tigera-operator(611a37f6-2bab-4980-b128-e9e057421fbd)\"" pod="tigera-operator/tigera-operator-7dcd859c48-dsbx6" podUID="611a37f6-2bab-4980-b128-e9e057421fbd"
Jan 21 01:04:47.357559 containerd[1962]: time="2026-01-21T01:04:47.357360502Z" level=info msg="RemoveContainer for \"201103dc9e7f8039a1bc1d0da0e54085e85462bd085d806f0a507f55a45eb9ea\""
Jan 21 01:04:47.408126 containerd[1962]: time="2026-01-21T01:04:47.408065089Z" level=info msg="RemoveContainer for \"201103dc9e7f8039a1bc1d0da0e54085e85462bd085d806f0a507f55a45eb9ea\" returns successfully"
Jan 21 01:04:49.801631 kubelet[3468]: E0121 01:04:49.801580 3468 controller.go:195] "Failed to update lease" err="Put \"https://172.31.28.215:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-28-215?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 21 01:04:50.132322 kubelet[3468]: E0121 01:04:50.131905 3468 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-764d9fb657-cw7sh" podUID="5e4fb692-d42b-4062-b09a-d89e5a5a14cf"
Jan 21 01:04:51.131694 kubelet[3468]: E0121 01:04:51.131648 3468 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-95fb8fc69-kstsq" podUID="35190377-3d75-48f7-8c27-98f24edff14f"
Jan 21 01:04:52.131364 kubelet[3468]: E0121 01:04:52.131300 3468 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-66b4f466fd-6gfxt" podUID="a6a90027-9f22-4c0c-9ffd-5b7564ee55c8"