Dec 16 12:50:12.539755 kernel: Linux version 6.12.61-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Tue Dec 16 00:18:19 -00 2025
Dec 16 12:50:12.539793 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=553464fdb0286a5b06b399da29ca659e521c68f08ea70a931c96ddffd00b4357
Dec 16 12:50:12.539812 kernel: BIOS-provided physical RAM map:
Dec 16 12:50:12.539824 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Dec 16 12:50:12.539836 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000786cdfff] usable
Dec 16 12:50:12.539847 kernel: BIOS-e820: [mem 0x00000000786ce000-0x000000007894dfff] reserved
Dec 16 12:50:12.539862 kernel: BIOS-e820: [mem 0x000000007894e000-0x000000007895dfff] ACPI data
Dec 16 12:50:12.539875 kernel: BIOS-e820: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS
Dec 16 12:50:12.539887 kernel: BIOS-e820: [mem 0x00000000789de000-0x000000007c97bfff] usable
Dec 16 12:50:12.539915 kernel: BIOS-e820: [mem 0x000000007c97c000-0x000000007c9fffff] reserved
Dec 16 12:50:12.539931 kernel: NX (Execute Disable) protection: active
Dec 16 12:50:12.539943 kernel: APIC: Static calls initialized
Dec 16 12:50:12.539955 kernel: e820: update [mem 0x768c0018-0x768c8e57] usable ==> usable
Dec 16 12:50:12.539968 kernel: extended physical RAM map:
Dec 16 12:50:12.539984 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Dec 16 12:50:12.540000 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000768c0017] usable
Dec 16 12:50:12.540014 kernel: reserve setup_data: [mem 0x00000000768c0018-0x00000000768c8e57] usable
Dec 16 12:50:12.540027 kernel: reserve setup_data: [mem 0x00000000768c8e58-0x00000000786cdfff] usable
Dec 16 12:50:12.540041 kernel: reserve setup_data: [mem 0x00000000786ce000-0x000000007894dfff] reserved
Dec 16 12:50:12.540055 kernel: reserve setup_data: [mem 0x000000007894e000-0x000000007895dfff] ACPI data
Dec 16 12:50:12.540068 kernel: reserve setup_data: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS
Dec 16 12:50:12.540082 kernel: reserve setup_data: [mem 0x00000000789de000-0x000000007c97bfff] usable
Dec 16 12:50:12.540096 kernel: reserve setup_data: [mem 0x000000007c97c000-0x000000007c9fffff] reserved
Dec 16 12:50:12.540109 kernel: efi: EFI v2.7 by EDK II
Dec 16 12:50:12.540123 kernel: efi: SMBIOS=0x7886a000 ACPI=0x7895d000 ACPI 2.0=0x7895d014 MEMATTR=0x77002518
Dec 16 12:50:12.540140 kernel: secureboot: Secure boot disabled
Dec 16 12:50:12.540153 kernel: SMBIOS 2.7 present.
Dec 16 12:50:12.540166 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017
Dec 16 12:50:12.540179 kernel: DMI: Memory slots populated: 1/1
Dec 16 12:50:12.540193 kernel: Hypervisor detected: KVM
Dec 16 12:50:12.540206 kernel: last_pfn = 0x7c97c max_arch_pfn = 0x400000000
Dec 16 12:50:12.540219 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Dec 16 12:50:12.540233 kernel: kvm-clock: using sched offset of 7608626360 cycles
Dec 16 12:50:12.540248 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Dec 16 12:50:12.540263 kernel: tsc: Detected 2499.996 MHz processor
Dec 16 12:50:12.540279 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Dec 16 12:50:12.540294 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Dec 16 12:50:12.540308 kernel: last_pfn = 0x7c97c max_arch_pfn = 0x400000000
Dec 16 12:50:12.540322 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Dec 16 12:50:12.540337 kernel: x86/PAT: Configuration
[0-7]: WB WC UC- UC WB WP UC- WT
Dec 16 12:50:12.540357 kernel: Using GB pages for direct mapping
Dec 16 12:50:12.540374 kernel: ACPI: Early table checksum verification disabled
Dec 16 12:50:12.540389 kernel: ACPI: RSDP 0x000000007895D014 000024 (v02 AMAZON)
Dec 16 12:50:12.540404 kernel: ACPI: XSDT 0x000000007895C0E8 00006C (v01 AMAZON AMZNFACP 00000001 01000013)
Dec 16 12:50:12.540419 kernel: ACPI: FACP 0x0000000078955000 000114 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001)
Dec 16 12:50:12.540435 kernel: ACPI: DSDT 0x0000000078956000 00115A (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Dec 16 12:50:12.540450 kernel: ACPI: FACS 0x00000000789D0000 000040
Dec 16 12:50:12.540468 kernel: ACPI: WAET 0x000000007895B000 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001)
Dec 16 12:50:12.540483 kernel: ACPI: SLIT 0x000000007895A000 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Dec 16 12:50:12.540498 kernel: ACPI: APIC 0x0000000078959000 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Dec 16 12:50:12.540513 kernel: ACPI: SRAT 0x0000000078958000 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001)
Dec 16 12:50:12.540528 kernel: ACPI: HPET 0x0000000078954000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001)
Dec 16 12:50:12.540542 kernel: ACPI: SSDT 0x0000000078953000 000759 (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Dec 16 12:50:12.540557 kernel: ACPI: SSDT 0x0000000078952000 00007F (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Dec 16 12:50:12.540574 kernel: ACPI: BGRT 0x0000000078951000 000038 (v01 AMAZON AMAZON 00000002 01000013)
Dec 16 12:50:12.540590 kernel: ACPI: Reserving FACP table memory at [mem 0x78955000-0x78955113]
Dec 16 12:50:12.540605 kernel: ACPI: Reserving DSDT table memory at [mem 0x78956000-0x78957159]
Dec 16 12:50:12.540620 kernel: ACPI: Reserving FACS table memory at [mem 0x789d0000-0x789d003f]
Dec 16 12:50:12.540635 kernel: ACPI: Reserving WAET table memory at [mem 0x7895b000-0x7895b027]
Dec 16 12:50:12.540650 kernel: ACPI: Reserving SLIT table memory at [mem 0x7895a000-0x7895a06b]
Dec 16 12:50:12.540665 kernel: ACPI: Reserving APIC table memory at [mem 0x78959000-0x78959075]
Dec 16 12:50:12.540682 kernel: ACPI: Reserving SRAT table memory at [mem 0x78958000-0x7895809f]
Dec 16 12:50:12.540698 kernel: ACPI: Reserving HPET table memory at [mem 0x78954000-0x78954037]
Dec 16 12:50:12.540713 kernel: ACPI: Reserving SSDT table memory at [mem 0x78953000-0x78953758]
Dec 16 12:50:12.540728 kernel: ACPI: Reserving SSDT table memory at [mem 0x78952000-0x7895207e]
Dec 16 12:50:12.540742 kernel: ACPI: Reserving BGRT table memory at [mem 0x78951000-0x78951037]
Dec 16 12:50:12.540758 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff]
Dec 16 12:50:12.540773 kernel: NUMA: Initialized distance table, cnt=1
Dec 16 12:50:12.540790 kernel: NODE_DATA(0) allocated [mem 0x7a8eddc0-0x7a8f4fff]
Dec 16 12:50:12.540806 kernel: Zone ranges:
Dec 16 12:50:12.540821 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Dec 16 12:50:12.540836 kernel: DMA32 [mem 0x0000000001000000-0x000000007c97bfff]
Dec 16 12:50:12.540851 kernel: Normal empty
Dec 16 12:50:12.540866 kernel: Device empty
Dec 16 12:50:12.540881 kernel: Movable zone start for each node
Dec 16 12:50:12.540896 kernel: Early memory node ranges
Dec 16 12:50:12.540963 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Dec 16 12:50:12.540975 kernel: node 0: [mem 0x0000000000100000-0x00000000786cdfff]
Dec 16 12:50:12.540989 kernel: node 0: [mem 0x00000000789de000-0x000000007c97bfff]
Dec 16 12:50:12.541003 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007c97bfff]
Dec 16 12:50:12.541017 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 16 12:50:12.541031 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Dec 16 12:50:12.541045 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Dec 16 12:50:12.541063 kernel: On node 0, zone DMA32: 13956 pages in unavailable ranges
Dec 16 12:50:12.541077 kernel: ACPI: PM-Timer IO
Port: 0xb008
Dec 16 12:50:12.541091 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Dec 16 12:50:12.541105 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23
Dec 16 12:50:12.541119 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Dec 16 12:50:12.541134 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec 16 12:50:12.541147 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Dec 16 12:50:12.541161 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Dec 16 12:50:12.541178 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 16 12:50:12.541192 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Dec 16 12:50:12.541206 kernel: TSC deadline timer available
Dec 16 12:50:12.541221 kernel: CPU topo: Max. logical packages: 1
Dec 16 12:50:12.541234 kernel: CPU topo: Max. logical dies: 1
Dec 16 12:50:12.541249 kernel: CPU topo: Max. dies per package: 1
Dec 16 12:50:12.541262 kernel: CPU topo: Max. threads per core: 2
Dec 16 12:50:12.541278 kernel: CPU topo: Num. cores per package: 1
Dec 16 12:50:12.541292 kernel: CPU topo: Num. threads per package: 2
Dec 16 12:50:12.541306 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
Dec 16 12:50:12.541320 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Dec 16 12:50:12.541334 kernel: [mem 0x7ca00000-0xffffffff] available for PCI devices
Dec 16 12:50:12.541348 kernel: Booting paravirtualized kernel on KVM
Dec 16 12:50:12.541363 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 16 12:50:12.541380 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Dec 16 12:50:12.541394 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
Dec 16 12:50:12.541407 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
Dec 16 12:50:12.541421 kernel: pcpu-alloc: [0] 0 1
Dec 16 12:50:12.541435 kernel: kvm-guest: PV spinlocks enabled
Dec 16 12:50:12.541449 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Dec 16 12:50:12.541465 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=553464fdb0286a5b06b399da29ca659e521c68f08ea70a931c96ddffd00b4357
Dec 16 12:50:12.541482 kernel: random: crng init done
Dec 16 12:50:12.541496 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 16 12:50:12.541511 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Dec 16 12:50:12.541526 kernel: Fallback order for Node 0: 0
Dec 16 12:50:12.541540 kernel: Built 1 zonelists, mobility grouping on.
Total pages: 509451
Dec 16 12:50:12.541555 kernel: Policy zone: DMA32
Dec 16 12:50:12.541581 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 16 12:50:12.541596 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Dec 16 12:50:12.541611 kernel: Kernel/User page tables isolation: enabled
Dec 16 12:50:12.541627 kernel: ftrace: allocating 40103 entries in 157 pages
Dec 16 12:50:12.541642 kernel: ftrace: allocated 157 pages with 5 groups
Dec 16 12:50:12.541657 kernel: Dynamic Preempt: voluntary
Dec 16 12:50:12.541671 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 16 12:50:12.541692 kernel: rcu: RCU event tracing is enabled.
Dec 16 12:50:12.541706 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Dec 16 12:50:12.541721 kernel: Trampoline variant of Tasks RCU enabled.
Dec 16 12:50:12.541739 kernel: Rude variant of Tasks RCU enabled.
Dec 16 12:50:12.541753 kernel: Tracing variant of Tasks RCU enabled.
Dec 16 12:50:12.541767 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 16 12:50:12.541782 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Dec 16 12:50:12.541798 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 16 12:50:12.541815 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 16 12:50:12.541831 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 16 12:50:12.541845 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Dec 16 12:50:12.541860 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 16 12:50:12.541875 kernel: Console: colour dummy device 80x25
Dec 16 12:50:12.541890 kernel: printk: legacy console [tty0] enabled
Dec 16 12:50:12.541916 kernel: printk: legacy console [ttyS0] enabled
Dec 16 12:50:12.541934 kernel: ACPI: Core revision 20240827
Dec 16 12:50:12.541950 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns
Dec 16 12:50:12.541964 kernel: APIC: Switch to symmetric I/O mode setup
Dec 16 12:50:12.541979 kernel: x2apic enabled
Dec 16 12:50:12.541994 kernel: APIC: Switched APIC routing to: physical x2apic
Dec 16 12:50:12.542009 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns
Dec 16 12:50:12.542024 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499996)
Dec 16 12:50:12.542040 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Dec 16 12:50:12.542055 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
Dec 16 12:50:12.542070 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec 16 12:50:12.542084 kernel: Spectre V2 : Mitigation: Retpolines
Dec 16 12:50:12.542098 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Dec 16 12:50:12.542112 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Dec 16 12:50:12.542126 kernel: RETBleed: Vulnerable
Dec 16 12:50:12.542141 kernel: Speculative Store Bypass: Vulnerable
Dec 16 12:50:12.542155 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode
Dec 16 12:50:12.542171 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Dec 16 12:50:12.542186 kernel: GDS: Unknown: Dependent on hypervisor status
Dec 16 12:50:12.542200 kernel: active return thunk: its_return_thunk
Dec 16 12:50:12.542214 kernel: ITS: Mitigation: Aligned branch/return thunks
Dec 16 12:50:12.542228 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec 16 12:50:12.542243 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec 16 12:50:12.542258 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec 16 12:50:12.542272 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Dec 16 12:50:12.542286 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Dec 16 12:50:12.542300 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Dec 16 12:50:12.542317 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Dec 16 12:50:12.542331 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Dec 16 12:50:12.542346 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Dec 16 12:50:12.542360 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Dec 16 12:50:12.542374 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Dec 16 12:50:12.542389 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Dec 16 12:50:12.542402 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64
Dec 16 12:50:12.542416 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512
Dec 16 12:50:12.542430 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024
Dec 16 12:50:12.542445 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8
Dec 16 12:50:12.542459 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format.
Dec 16 12:50:12.542476 kernel: Freeing SMP alternatives memory: 32K
Dec 16 12:50:12.542490 kernel: pid_max: default: 32768 minimum: 301
Dec 16 12:50:12.542504 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Dec 16 12:50:12.542518 kernel: landlock: Up and running.
Dec 16 12:50:12.542532 kernel: SELinux: Initializing.
Dec 16 12:50:12.542546 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Dec 16 12:50:12.542560 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Dec 16 12:50:12.542575 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7)
Dec 16 12:50:12.542589 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Dec 16 12:50:12.542604 kernel: signal: max sigframe size: 3632
Dec 16 12:50:12.542621 kernel: rcu: Hierarchical SRCU implementation.
Dec 16 12:50:12.542636 kernel: rcu: Max phase no-delay instances is 400.
Dec 16 12:50:12.542651 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Dec 16 12:50:12.542666 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Dec 16 12:50:12.542681 kernel: smp: Bringing up secondary CPUs ...
Dec 16 12:50:12.542696 kernel: smpboot: x86: Booting SMP configuration:
Dec 16 12:50:12.542711 kernel: .... node #0, CPUs: #1
Dec 16 12:50:12.542725 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Dec 16 12:50:12.542744 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Dec 16 12:50:12.542758 kernel: smp: Brought up 1 node, 2 CPUs
Dec 16 12:50:12.542774 kernel: smpboot: Total of 2 processors activated (9999.98 BogoMIPS)
Dec 16 12:50:12.542789 kernel: Memory: 1924436K/2037804K available (14336K kernel code, 2444K rwdata, 31636K rodata, 15556K init, 2484K bss, 108804K reserved, 0K cma-reserved)
Dec 16 12:50:12.542804 kernel: devtmpfs: initialized
Dec 16 12:50:12.542818 kernel: x86/mm: Memory block size: 128MB
Dec 16 12:50:12.542836 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7895e000-0x789ddfff] (524288 bytes)
Dec 16 12:50:12.542860 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 16 12:50:12.542875 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Dec 16 12:50:12.542890 kernel: pinctrl core: initialized pinctrl subsystem
Dec 16 12:50:12.542942 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 16 12:50:12.542958 kernel: audit: initializing netlink subsys (disabled)
Dec 16 12:50:12.542973 kernel: audit: type=2000 audit(1765889408.880:1): state=initialized audit_enabled=0 res=1
Dec 16 12:50:12.542992 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 16 12:50:12.543007 kernel: thermal_sys: Registered thermal governor 'user_space'
Dec 16 12:50:12.543031 kernel: cpuidle: using governor menu
Dec 16 12:50:12.543046 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 16 12:50:12.543061 kernel: dca service started, version 1.12.1
Dec 16 12:50:12.543075 kernel: PCI: Using configuration type 1 for base access
Dec 16 12:50:12.543090 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec 16 12:50:12.543108 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec 16 12:50:12.543123 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Dec 16 12:50:12.543138 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec 16 12:50:12.543153 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Dec 16 12:50:12.543167 kernel: ACPI: Added _OSI(Module Device)
Dec 16 12:50:12.543182 kernel: ACPI: Added _OSI(Processor Device)
Dec 16 12:50:12.543197 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 16 12:50:12.543215 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Dec 16 12:50:12.543230 kernel: ACPI: Interpreter enabled
Dec 16 12:50:12.543245 kernel: ACPI: PM: (supports S0 S5)
Dec 16 12:50:12.543260 kernel: ACPI: Using IOAPIC for interrupt routing
Dec 16 12:50:12.543276 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec 16 12:50:12.543293 kernel: PCI: Using E820 reservations for host bridge windows
Dec 16 12:50:12.543310 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Dec 16 12:50:12.543330 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 16 12:50:12.543631 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Dec 16 12:50:12.543829 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Dec 16 12:50:12.544040 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Dec 16 12:50:12.544061 kernel: acpiphp: Slot [3] registered
Dec 16 12:50:12.544079 kernel: acpiphp: Slot [4] registered
Dec 16 12:50:12.544100 kernel: acpiphp: Slot [5] registered
Dec 16 12:50:12.544117 kernel: acpiphp: Slot [6] registered
Dec 16 12:50:12.544135 kernel: acpiphp: Slot [7] registered
Dec 16 12:50:12.544152 kernel: acpiphp: Slot [8] registered
Dec 16 12:50:12.544168 kernel: acpiphp: Slot [9] registered
Dec 16 12:50:12.544186 kernel: acpiphp: Slot [10] registered
Dec 16 12:50:12.544203 kernel: acpiphp: Slot [11] registered
Dec 16 12:50:12.544219 kernel: acpiphp: Slot [12] registered
Dec 16 12:50:12.544239 kernel: acpiphp: Slot [13] registered
Dec 16 12:50:12.544256 kernel: acpiphp: Slot [14] registered
Dec 16 12:50:12.544273 kernel: acpiphp: Slot [15] registered
Dec 16 12:50:12.544290 kernel: acpiphp: Slot [16] registered
Dec 16 12:50:12.544307 kernel: acpiphp: Slot [17] registered
Dec 16 12:50:12.544324 kernel: acpiphp: Slot [18] registered
Dec 16 12:50:12.544340 kernel: acpiphp: Slot [19] registered
Dec 16 12:50:12.544360 kernel: acpiphp: Slot [20] registered
Dec 16 12:50:12.544377 kernel: acpiphp: Slot [21] registered
Dec 16 12:50:12.544394 kernel: acpiphp: Slot [22] registered
Dec 16 12:50:12.544411 kernel: acpiphp: Slot [23] registered
Dec 16 12:50:12.544428 kernel: acpiphp: Slot [24] registered
Dec 16 12:50:12.544444 kernel: acpiphp: Slot [25] registered
Dec 16 12:50:12.544462 kernel: acpiphp: Slot [26] registered
Dec 16 12:50:12.544481 kernel: acpiphp: Slot [27] registered
Dec 16 12:50:12.544498 kernel: acpiphp: Slot [28] registered
Dec 16 12:50:12.544515 kernel: acpiphp: Slot [29] registered
Dec 16 12:50:12.544531 kernel: acpiphp: Slot [30] registered
Dec 16 12:50:12.544549 kernel: acpiphp: Slot [31] registered
Dec 16 12:50:12.544565 kernel: PCI host bridge to bus 0000:00
Dec 16 12:50:12.544748 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Dec 16 12:50:12.544938 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Dec 16 12:50:12.545110 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Dec 16 12:50:12.545279 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Dec 16 12:50:12.545447 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x2000ffffffff window]
Dec 16 12:50:12.545614 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 16 12:50:12.545823 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Dec 16 12:50:12.546038 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Dec 16 12:50:12.546239 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000 conventional PCI endpoint
Dec 16 12:50:12.546436 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Dec 16 12:50:12.546625 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff
Dec 16 12:50:12.546827 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff
Dec 16 12:50:12.547066 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff
Dec 16 12:50:12.547265 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff
Dec 16 12:50:12.547458 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff
Dec 16 12:50:12.547646 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff
Dec 16 12:50:12.547835 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000 conventional PCI endpoint
Dec 16 12:50:12.553331 kernel: pci 0000:00:03.0: BAR 0 [mem 0x80000000-0x803fffff pref]
Dec 16 12:50:12.553561 kernel: pci 0000:00:03.0: ROM [mem 0xffff0000-0xffffffff pref]
Dec 16 12:50:12.553749 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Dec 16 12:50:12.553974 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 PCIe Endpoint
Dec 16 12:50:12.554171 kernel: pci 0000:00:04.0: BAR 0 [mem 0x80404000-0x80407fff]
Dec 16 12:50:12.554368 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 PCIe Endpoint
Dec 16 12:50:12.554575 kernel: pci 0000:00:05.0: BAR 0 [mem 0x80400000-0x80403fff]
Dec 16 12:50:12.554598 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Dec 16 12:50:12.554615 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Dec 16 12:50:12.554638 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Dec 16 12:50:12.554654 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ
11
Dec 16 12:50:12.554668 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Dec 16 12:50:12.554685 kernel: iommu: Default domain type: Translated
Dec 16 12:50:12.554707 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec 16 12:50:12.554721 kernel: efivars: Registered efivars operations
Dec 16 12:50:12.554735 kernel: PCI: Using ACPI for IRQ routing
Dec 16 12:50:12.554752 kernel: PCI: pci_cache_line_size set to 64 bytes
Dec 16 12:50:12.554769 kernel: e820: reserve RAM buffer [mem 0x768c0018-0x77ffffff]
Dec 16 12:50:12.554784 kernel: e820: reserve RAM buffer [mem 0x786ce000-0x7bffffff]
Dec 16 12:50:12.554799 kernel: e820: reserve RAM buffer [mem 0x7c97c000-0x7fffffff]
Dec 16 12:50:12.555039 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device
Dec 16 12:50:12.555240 kernel: pci 0000:00:03.0: vgaarb: bridge control possible
Dec 16 12:50:12.555431 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Dec 16 12:50:12.555452 kernel: vgaarb: loaded
Dec 16 12:50:12.555469 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0
Dec 16 12:50:12.555486 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter
Dec 16 12:50:12.555502 kernel: clocksource: Switched to clocksource kvm-clock
Dec 16 12:50:12.555522 kernel: VFS: Disk quotas dquot_6.6.0
Dec 16 12:50:12.555538 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 16 12:50:12.555555 kernel: pnp: PnP ACPI init
Dec 16 12:50:12.555571 kernel: pnp: PnP ACPI: found 5 devices
Dec 16 12:50:12.555588 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec 16 12:50:12.555604 kernel: NET: Registered PF_INET protocol family
Dec 16 12:50:12.555620 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 16 12:50:12.555637 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Dec 16 12:50:12.555656 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 16 12:50:12.555673 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec 16 12:50:12.555689 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Dec 16 12:50:12.555706 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Dec 16 12:50:12.555722 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Dec 16 12:50:12.555739 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Dec 16 12:50:12.555755 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 16 12:50:12.555775 kernel: NET: Registered PF_XDP protocol family
Dec 16 12:50:12.555990 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Dec 16 12:50:12.556164 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Dec 16 12:50:12.556332 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Dec 16 12:50:12.556495 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Dec 16 12:50:12.556661 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x2000ffffffff window]
Dec 16 12:50:12.556852 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Dec 16 12:50:12.556872 kernel: PCI: CLS 0 bytes, default 64
Dec 16 12:50:12.556887 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Dec 16 12:50:12.556915 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns
Dec 16 12:50:12.556931 kernel: clocksource: Switched to clocksource tsc
Dec 16 12:50:12.556945 kernel: Initialise system trusted keyrings
Dec 16 12:50:12.556961 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Dec 16 12:50:12.556979 kernel: Key type asymmetric registered
Dec 16 12:50:12.556995 kernel: Asymmetric key parser 'x509' registered
Dec 16 12:50:12.557010 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Dec 16 12:50:12.557025
kernel: io scheduler mq-deadline registered
Dec 16 12:50:12.557040 kernel: io scheduler kyber registered
Dec 16 12:50:12.557056 kernel: io scheduler bfq registered
Dec 16 12:50:12.557071 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Dec 16 12:50:12.557089 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 16 12:50:12.557105 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Dec 16 12:50:12.557120 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Dec 16 12:50:12.557135 kernel: i8042: Warning: Keylock active
Dec 16 12:50:12.557150 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Dec 16 12:50:12.557164 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Dec 16 12:50:12.557357 kernel: rtc_cmos 00:00: RTC can wake from S4
Dec 16 12:50:12.557539 kernel: rtc_cmos 00:00: registered as rtc0
Dec 16 12:50:12.557709 kernel: rtc_cmos 00:00: setting system clock to 2025-12-16T12:50:09 UTC (1765889409)
Dec 16 12:50:12.557878 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Dec 16 12:50:12.557929 kernel: intel_pstate: CPU model not supported
Dec 16 12:50:12.557950 kernel: efifb: probing for efifb
Dec 16 12:50:12.557966 kernel: efifb: framebuffer at 0x80000000, using 1876k, total 1875k
Dec 16 12:50:12.557985 kernel: efifb: mode is 800x600x32, linelength=3200, pages=1
Dec 16 12:50:12.558001 kernel: efifb: scrolling: redraw
Dec 16 12:50:12.558017 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Dec 16 12:50:12.558032 kernel: Console: switching to colour frame buffer device 100x37
Dec 16 12:50:12.558048 kernel: fb0: EFI VGA frame buffer device
Dec 16 12:50:12.558064 kernel: pstore: Using crash dump compression: deflate
Dec 16 12:50:12.558079 kernel: pstore: Registered efi_pstore as persistent store backend
Dec 16 12:50:12.558097 kernel: NET: Registered PF_INET6 protocol family
Dec 16 12:50:12.558112 kernel: Segment Routing with IPv6
Dec 16 12:50:12.558126 kernel: In-situ OAM (IOAM) with IPv6
Dec 16 12:50:12.558141 kernel: NET: Registered PF_PACKET protocol family
Dec 16 12:50:12.558156 kernel: Key type dns_resolver registered
Dec 16 12:50:12.558183 kernel: IPI shorthand broadcast: enabled
Dec 16 12:50:12.558218 kernel: sched_clock: Marking stable (1367001882, 144525139)->(1586633445, -75106424)
Dec 16 12:50:12.558256 kernel: registered taskstats version 1
Dec 16 12:50:12.558296 kernel: Loading compiled-in X.509 certificates
Dec 16 12:50:12.558331 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.61-flatcar: aafd1eb27ea805b8231c3bede9210239fae84df8'
Dec 16 12:50:12.558367 kernel: Demotion targets for Node 0: null
Dec 16 12:50:12.558402 kernel: Key type .fscrypt registered
Dec 16 12:50:12.558439 kernel: Key type fscrypt-provisioning registered
Dec 16 12:50:12.558471 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 16 12:50:12.558508 kernel: ima: Allocated hash algorithm: sha1
Dec 16 12:50:12.558540 kernel: ima: No architecture policies found
Dec 16 12:50:12.558554 kernel: clk: Disabling unused clocks
Dec 16 12:50:12.558570 kernel: Freeing unused kernel image (initmem) memory: 15556K
Dec 16 12:50:12.558588 kernel: Write protecting the kernel read-only data: 47104k
Dec 16 12:50:12.558611 kernel: Freeing unused kernel image (rodata/data gap) memory: 1132K
Dec 16 12:50:12.558629 kernel: Run /init as init process
Dec 16 12:50:12.558647 kernel: with arguments:
Dec 16 12:50:12.558664 kernel: /init
Dec 16 12:50:12.558681 kernel: with environment:
Dec 16 12:50:12.558697 kernel: HOME=/
Dec 16 12:50:12.558714 kernel: TERM=linux
Dec 16 12:50:12.558892 kernel: nvme nvme0: pci function 0000:00:04.0
Dec 16 12:50:12.558943 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Dec 16 12:50:12.559087 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Dec 16 12:50:12.559109 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Dec 16 12:50:12.559126 kernel: GPT:25804799 != 33554431
Dec 16 12:50:12.559143 kernel: GPT:Alternate GPT header not at the end of the disk.
Dec 16 12:50:12.559165 kernel: GPT:25804799 != 33554431
Dec 16 12:50:12.559181 kernel: GPT: Use GNU Parted to correct GPT errors.
Dec 16 12:50:12.559197 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Dec 16 12:50:12.559214 kernel: SCSI subsystem initialized
Dec 16 12:50:12.559231 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 16 12:50:12.559248 kernel: device-mapper: uevent: version 1.0.3
Dec 16 12:50:12.559265 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Dec 16 12:50:12.559286 kernel: device-mapper: verity: sha256 using shash "sha256-generic"
Dec 16 12:50:12.559303 kernel: raid6: avx512x4 gen() 17739 MB/s
Dec 16 12:50:12.559320 kernel: raid6: avx512x2 gen() 18026 MB/s
Dec 16 12:50:12.559338 kernel: raid6: avx512x1 gen() 17980 MB/s
Dec 16 12:50:12.559355 kernel: raid6: avx2x4 gen() 17720 MB/s
Dec 16 12:50:12.559372 kernel: raid6: avx2x2 gen() 17920 MB/s
Dec 16 12:50:12.559390 kernel: raid6: avx2x1 gen() 13927 MB/s
Dec 16 12:50:12.559410 kernel: raid6: using algorithm avx512x2 gen() 18026 MB/s
Dec 16 12:50:12.559427 kernel: raid6: .... xor() 24152 MB/s, rmw enabled
Dec 16 12:50:12.559444 kernel: raid6: using avx512x2 recovery algorithm
Dec 16 12:50:12.559462 kernel: xor: automatically using best checksumming function avx
Dec 16 12:50:12.559482 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Dec 16 12:50:12.559498 kernel: Btrfs loaded, zoned=no, fsverity=no
Dec 16 12:50:12.559517 kernel: BTRFS: device fsid 57a8262f-2900-48ba-a17e-aafbd70d59c7 devid 1 transid 36 /dev/mapper/usr (254:0) scanned by mount (153)
Dec 16 12:50:12.559537 kernel: BTRFS info (device dm-0): first mount of filesystem 57a8262f-2900-48ba-a17e-aafbd70d59c7
Dec 16 12:50:12.559554 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Dec 16 12:50:12.559571 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Dec 16 12:50:12.559588 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Dec 16 12:50:12.559605 kernel: BTRFS info (device dm-0): enabling free space tree
Dec 16 12:50:12.559623 kernel: loop: module loaded
Dec 16 12:50:12.559640 kernel: loop0: detected capacity change from 0 to 100528
Dec 16 12:50:12.559659 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Dec 16 12:50:12.559678 systemd[1]: Successfully made /usr/ read-only.
Dec 16 12:50:12.559700 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Dec 16 12:50:12.559719 systemd[1]: Detected virtualization amazon.
Dec 16 12:50:12.559736 systemd[1]: Detected architecture x86-64.
Dec 16 12:50:12.559752 systemd[1]: Running in initrd.
Dec 16 12:50:12.559773 systemd[1]: No hostname configured, using default hostname.
Dec 16 12:50:12.559791 systemd[1]: Hostname set to .
Dec 16 12:50:12.559808 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID.
Dec 16 12:50:12.559826 systemd[1]: Queued start job for default target initrd.target.
Dec 16 12:50:12.559843 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Dec 16 12:50:12.559861 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 16 12:50:12.559879 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 16 12:50:12.559929 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Dec 16 12:50:12.559949 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 16 12:50:12.559968 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Dec 16 12:50:12.559987 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Dec 16 12:50:12.560005 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 16 12:50:12.560026 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 16 12:50:12.560044 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Dec 16 12:50:12.560060 systemd[1]: Reached target paths.target - Path Units.
Dec 16 12:50:12.560074 systemd[1]: Reached target slices.target - Slice Units.
Dec 16 12:50:12.560089 systemd[1]: Reached target swap.target - Swaps.
Dec 16 12:50:12.560105 systemd[1]: Reached target timers.target - Timer Units.
Dec 16 12:50:12.560122 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Dec 16 12:50:12.560141 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 16 12:50:12.560157 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket.
Dec 16 12:50:12.560174 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Dec 16 12:50:12.560190 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Dec 16 12:50:12.560207 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 16 12:50:12.560223 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 16 12:50:12.560240 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 16 12:50:12.560260 systemd[1]: Reached target sockets.target - Socket Units.
Dec 16 12:50:12.560277 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Dec 16 12:50:12.560293 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Dec 16 12:50:12.560310 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 16 12:50:12.560327 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Dec 16 12:50:12.560344 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Dec 16 12:50:12.560363 systemd[1]: Starting systemd-fsck-usr.service...
Dec 16 12:50:12.560380 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 16 12:50:12.560396 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 16 12:50:12.560414 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 16 12:50:12.560433 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Dec 16 12:50:12.560450 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 16 12:50:12.560466 systemd[1]: Finished systemd-fsck-usr.service.
Dec 16 12:50:12.560483 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 16 12:50:12.560537 systemd-journald[290]: Collecting audit messages is enabled.
Dec 16 12:50:12.560577 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 16 12:50:12.560595 kernel: audit: type=1130 audit(1765889412.545:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:50:12.560611 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 16 12:50:12.560628 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 16 12:50:12.560647 systemd-journald[290]: Journal started
Dec 16 12:50:12.560680 systemd-journald[290]: Runtime Journal (/run/log/journal/ec278c07ee14d852be44a4e841cf36c5) is 4.7M, max 38M, 33.2M free.
Dec 16 12:50:12.545000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:50:12.565921 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 16 12:50:12.564000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:50:12.572168 kernel: audit: type=1130 audit(1765889412.564:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:50:12.580932 kernel: Bridge firewalling registered
Dec 16 12:50:12.579765 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 16 12:50:12.580163 systemd-modules-load[292]: Inserted module 'br_netfilter'
Dec 16 12:50:12.588135 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 16 12:50:12.595192 kernel: audit: type=1130 audit(1765889412.588:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:50:12.588000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:50:12.597978 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 16 12:50:12.615226 systemd-tmpfiles[305]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Dec 16 12:50:12.618163 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 16 12:50:12.617000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:50:12.624559 kernel: audit: type=1130 audit(1765889412.617:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:50:12.625195 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 16 12:50:12.633218 kernel: audit: type=1130 audit(1765889412.623:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:50:12.623000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:50:12.633576 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 16 12:50:12.632000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:50:12.638360 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 16 12:50:12.643080 kernel: audit: type=1130 audit(1765889412.632:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:50:12.643109 kernel: audit: type=1130 audit(1765889412.636:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:50:12.636000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:50:12.642052 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 16 12:50:12.644662 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 16 12:50:12.647223 kernel: audit: type=1334 audit(1765889412.642:9): prog-id=6 op=LOAD
Dec 16 12:50:12.642000 audit: BPF prog-id=6 op=LOAD
Dec 16 12:50:12.667434 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 16 12:50:12.666000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:50:12.672922 kernel: audit: type=1130 audit(1765889412.666:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:50:12.673809 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Dec 16 12:50:12.728805 systemd-resolved[317]: Positive Trust Anchors:
Dec 16 12:50:12.729465 systemd-resolved[317]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 16 12:50:12.732234 dracut-cmdline[329]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=553464fdb0286a5b06b399da29ca659e521c68f08ea70a931c96ddffd00b4357
Dec 16 12:50:12.729470 systemd-resolved[317]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16
Dec 16 12:50:12.729508 systemd-resolved[317]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 16 12:50:12.758953 systemd-resolved[317]: Defaulting to hostname 'linux'.
Dec 16 12:50:12.759923 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 16 12:50:12.760886 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 16 12:50:12.758000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:50:12.905932 kernel: Loading iSCSI transport class v2.0-870.
Dec 16 12:50:13.006947 kernel: iscsi: registered transport (tcp)
Dec 16 12:50:13.059164 kernel: iscsi: registered transport (qla4xxx)
Dec 16 12:50:13.059232 kernel: QLogic iSCSI HBA Driver
Dec 16 12:50:13.087435 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Dec 16 12:50:13.105594 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Dec 16 12:50:13.104000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:50:13.106485 systemd[1]: Reached target network-pre.target - Preparation for Network.
Dec 16 12:50:13.154133 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Dec 16 12:50:13.152000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:50:13.156556 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Dec 16 12:50:13.160066 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Dec 16 12:50:13.193743 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Dec 16 12:50:13.192000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:50:13.193000 audit: BPF prog-id=7 op=LOAD
Dec 16 12:50:13.194000 audit: BPF prog-id=8 op=LOAD
Dec 16 12:50:13.197080 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 16 12:50:13.226497 systemd-udevd[570]: Using default interface naming scheme 'v257'.
Dec 16 12:50:13.237362 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 16 12:50:13.248424 kernel: kauditd_printk_skb: 6 callbacks suppressed
Dec 16 12:50:13.248451 kernel: audit: type=1130 audit(1765889413.235:17): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:50:13.235000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:50:13.241843 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Dec 16 12:50:13.264336 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 16 12:50:13.263000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:50:13.270334 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 16 12:50:13.271377 kernel: audit: type=1130 audit(1765889413.263:18): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:50:13.265000 audit: BPF prog-id=9 op=LOAD
Dec 16 12:50:13.273966 kernel: audit: type=1334 audit(1765889413.265:19): prog-id=9 op=LOAD
Dec 16 12:50:13.275756 dracut-pre-trigger[651]: rd.md=0: removing MD RAID activation
Dec 16 12:50:13.302435 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 16 12:50:13.301000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:50:13.305065 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 16 12:50:13.309030 kernel: audit: type=1130 audit(1765889413.301:20): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:50:13.322826 systemd-networkd[673]: lo: Link UP
Dec 16 12:50:13.322834 systemd-networkd[673]: lo: Gained carrier
Dec 16 12:50:13.324743 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 16 12:50:13.329921 kernel: audit: type=1130 audit(1765889413.323:21): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:50:13.323000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:50:13.325242 systemd[1]: Reached target network.target - Network.
Dec 16 12:50:13.372000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:50:13.374218 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 16 12:50:13.382092 kernel: audit: type=1130 audit(1765889413.372:22): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:50:13.381341 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Dec 16 12:50:13.513693 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 16 12:50:13.514481 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 16 12:50:13.521367 kernel: audit: type=1131 audit(1765889413.514:23): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:50:13.514000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:50:13.520261 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Dec 16 12:50:13.523765 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 16 12:50:13.529304 kernel: ena 0000:00:05.0: ENA device version: 0.10
Dec 16 12:50:13.529588 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Dec 16 12:50:13.533953 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
Dec 16 12:50:13.542524 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80400000, mac addr 06:54:71:9c:38:b9
Dec 16 12:50:13.544741 (udev-worker)[718]: Network interface NamePolicy= disabled on kernel command line.
Dec 16 12:50:13.550799 systemd-networkd[673]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Dec 16 12:50:13.550808 systemd-networkd[673]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 16 12:50:13.555161 systemd-networkd[673]: eth0: Link UP
Dec 16 12:50:13.555331 systemd-networkd[673]: eth0: Gained carrier
Dec 16 12:50:13.555349 systemd-networkd[673]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Dec 16 12:50:13.564994 systemd-networkd[673]: eth0: DHCPv4 address 172.31.17.11/20, gateway 172.31.16.1 acquired from 172.31.16.1
Dec 16 12:50:13.579000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:50:13.580408 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 16 12:50:13.587504 kernel: audit: type=1130 audit(1765889413.579:24): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:50:13.590095 kernel: cryptd: max_cpu_qlen set to 1000
Dec 16 12:50:13.680940 kernel: AES CTR mode by8 optimization enabled
Dec 16 12:50:13.686923 kernel: nvme nvme0: using unchecked data buffer
Dec 16 12:50:13.687201 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input3
Dec 16 12:50:13.785075 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Dec 16 12:50:13.787266 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Dec 16 12:50:13.809241 disk-uuid[827]: Primary Header is updated.
Dec 16 12:50:13.809241 disk-uuid[827]: Secondary Entries is updated.
Dec 16 12:50:13.809241 disk-uuid[827]: Secondary Header is updated.
Dec 16 12:50:13.907270 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Dec 16 12:50:13.953462 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Dec 16 12:50:14.003270 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Dec 16 12:50:14.233468 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Dec 16 12:50:14.238814 kernel: audit: type=1130 audit(1765889414.232:25): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:50:14.232000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:50:14.234597 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 16 12:50:14.239451 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 16 12:50:14.240608 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 16 12:50:14.242569 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Dec 16 12:50:14.265089 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Dec 16 12:50:14.270542 kernel: audit: type=1130 audit(1765889414.263:26): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:50:14.263000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:50:14.956573 disk-uuid[828]: Warning: The kernel is still using the old partition table.
Dec 16 12:50:14.956573 disk-uuid[828]: The new table will be used at the next reboot or after you
Dec 16 12:50:14.956573 disk-uuid[828]: run partprobe(8) or kpartx(8)
Dec 16 12:50:14.956573 disk-uuid[828]: The operation has completed successfully.
Dec 16 12:50:14.966593 systemd[1]: disk-uuid.service: Deactivated successfully.
Dec 16 12:50:14.965000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:50:14.965000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:50:14.966705 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Dec 16 12:50:14.968818 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Dec 16 12:50:15.007950 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1073)
Dec 16 12:50:15.008020 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 7e31dbd7-b976-4d4a-a2e9-e2baacf4ad38
Dec 16 12:50:15.010186 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Dec 16 12:50:15.049168 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Dec 16 12:50:15.049251 kernel: BTRFS info (device nvme0n1p6): enabling free space tree
Dec 16 12:50:15.058223 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 7e31dbd7-b976-4d4a-a2e9-e2baacf4ad38
Dec 16 12:50:15.058751 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Dec 16 12:50:15.057000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:50:15.060681 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Dec 16 12:50:15.187107 systemd-networkd[673]: eth0: Gained IPv6LL
Dec 16 12:50:16.346713 ignition[1092]: Ignition 2.24.0
Dec 16 12:50:16.346729 ignition[1092]: Stage: fetch-offline
Dec 16 12:50:16.346815 ignition[1092]: no configs at "/usr/lib/ignition/base.d"
Dec 16 12:50:16.346825 ignition[1092]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 16 12:50:16.348480 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 16 12:50:16.347000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:50:16.347147 ignition[1092]: Ignition finished successfully
Dec 16 12:50:16.351504 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Dec 16 12:50:16.375620 ignition[1099]: Ignition 2.24.0
Dec 16 12:50:16.375636 ignition[1099]: Stage: fetch
Dec 16 12:50:16.375872 ignition[1099]: no configs at "/usr/lib/ignition/base.d"
Dec 16 12:50:16.375885 ignition[1099]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 16 12:50:16.376010 ignition[1099]: PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 16 12:50:16.392718 ignition[1099]: PUT result: OK
Dec 16 12:50:16.400364 ignition[1099]: parsed url from cmdline: ""
Dec 16 12:50:16.400374 ignition[1099]: no config URL provided
Dec 16 12:50:16.400387 ignition[1099]: reading system config file "/usr/lib/ignition/user.ign"
Dec 16 12:50:16.400403 ignition[1099]: no config at "/usr/lib/ignition/user.ign"
Dec 16 12:50:16.400428 ignition[1099]: PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 16 12:50:16.401047 ignition[1099]: PUT result: OK
Dec 16 12:50:16.401106 ignition[1099]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Dec 16 12:50:16.401728 ignition[1099]: GET result: OK
Dec 16 12:50:16.401798 ignition[1099]: parsing config with SHA512: 02e4985139bff8fcf167ead4ff9816823a70938b887a48f7c988e82efe8a62638875f09b31b42d37e41d86a5a4dba8a924595bed6db2a81454f45256bfbb5ddc
Dec 16 12:50:16.408162 unknown[1099]: fetched base config from "system"
Dec 16 12:50:16.408716 unknown[1099]: fetched base config from "system"
Dec 16 12:50:16.408723 unknown[1099]: fetched user config from "aws"
Dec 16 12:50:16.409063 ignition[1099]: fetch: fetch complete
Dec 16 12:50:16.409068 ignition[1099]: fetch: fetch passed
Dec 16 12:50:16.409116 ignition[1099]: Ignition finished successfully
Dec 16 12:50:16.411658 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Dec 16 12:50:16.410000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:50:16.412979 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Dec 16 12:50:16.442560 ignition[1105]: Ignition 2.24.0
Dec 16 12:50:16.442574 ignition[1105]: Stage: kargs
Dec 16 12:50:16.442770 ignition[1105]: no configs at "/usr/lib/ignition/base.d"
Dec 16 12:50:16.442778 ignition[1105]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 16 12:50:16.442962 ignition[1105]: PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 16 12:50:16.443718 ignition[1105]: PUT result: OK
Dec 16 12:50:16.448810 ignition[1105]: kargs: kargs passed
Dec 16 12:50:16.449129 ignition[1105]: Ignition finished successfully
Dec 16 12:50:16.451379 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Dec 16 12:50:16.449000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:50:16.452767 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Dec 16 12:50:16.480554 ignition[1112]: Ignition 2.24.0
Dec 16 12:50:16.480570 ignition[1112]: Stage: disks
Dec 16 12:50:16.480854 ignition[1112]: no configs at "/usr/lib/ignition/base.d"
Dec 16 12:50:16.480867 ignition[1112]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 16 12:50:16.480987 ignition[1112]: PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 16 12:50:16.481834 ignition[1112]: PUT result: OK
Dec 16 12:50:16.485384 ignition[1112]: disks: disks passed
Dec 16 12:50:16.486079 ignition[1112]: Ignition finished successfully
Dec 16 12:50:16.488072 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Dec 16 12:50:16.488719 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Dec 16 12:50:16.486000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:50:16.489097 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Dec 16 12:50:16.489627 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 16 12:50:16.490175 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 16 12:50:16.490708 systemd[1]: Reached target basic.target - Basic System.
Dec 16 12:50:16.492484 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Dec 16 12:50:16.605336 systemd-fsck[1121]: ROOT: clean, 15/1631200 files, 112378/1617920 blocks
Dec 16 12:50:16.608076 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Dec 16 12:50:16.606000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:50:16.610785 systemd[1]: Mounting sysroot.mount - /sysroot...
Dec 16 12:50:16.822920 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 1314c107-11a5-486b-9d52-be9f57b6bf1b r/w with ordered data mode. Quota mode: none.
Dec 16 12:50:16.823604 systemd[1]: Mounted sysroot.mount - /sysroot.
Dec 16 12:50:16.824537 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Dec 16 12:50:16.877160 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 16 12:50:16.879107 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Dec 16 12:50:16.882278 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Dec 16 12:50:16.883067 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Dec 16 12:50:16.883552 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 16 12:50:16.889145 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Dec 16 12:50:16.891199 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Dec 16 12:50:16.904923 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1140) Dec 16 12:50:16.908011 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 7e31dbd7-b976-4d4a-a2e9-e2baacf4ad38 Dec 16 12:50:16.908077 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Dec 16 12:50:16.915592 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Dec 16 12:50:16.915662 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Dec 16 12:50:16.917327 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Dec 16 12:50:19.095387 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Dec 16 12:50:19.103785 kernel: kauditd_printk_skb: 8 callbacks suppressed Dec 16 12:50:19.103830 kernel: audit: type=1130 audit(1765889419.094:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:50:19.094000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:50:19.099021 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Dec 16 12:50:19.105864 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Dec 16 12:50:19.119842 systemd[1]: sysroot-oem.mount: Deactivated successfully. Dec 16 12:50:19.121934 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 7e31dbd7-b976-4d4a-a2e9-e2baacf4ad38 Dec 16 12:50:19.151304 ignition[1237]: INFO : Ignition 2.24.0 Dec 16 12:50:19.151962 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Dec 16 12:50:19.152453 ignition[1237]: INFO : Stage: mount
Dec 16 12:50:19.152453 ignition[1237]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 16 12:50:19.152453 ignition[1237]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 16 12:50:19.152453 ignition[1237]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 16 12:50:19.159524 kernel: audit: type=1130 audit(1765889419.151:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:50:19.151000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:50:19.159277 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Dec 16 12:50:19.164106 kernel: audit: type=1130 audit(1765889419.157:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:50:19.157000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:50:19.164175 ignition[1237]: INFO : PUT result: OK
Dec 16 12:50:19.164175 ignition[1237]: INFO : mount: mount passed
Dec 16 12:50:19.164175 ignition[1237]: INFO : Ignition finished successfully
Dec 16 12:50:19.161453 systemd[1]: Starting ignition-files.service - Ignition (files)...
Dec 16 12:50:19.175755 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
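The fetch stage earlier in this log retrieves the instance user-data via the EC2 metadata service and records a SHA512 digest of the raw config before parsing it ("parsing config with SHA512: ..."). That digest step can be reproduced offline with Python's hashlib; the payload below is a hypothetical stand-in, since the real config is whatever this instance served:

```python
import hashlib

# Hypothetical user-data payload; the actual bytes are whatever the
# instance metadata service returned for /2019-10-01/user-data.
payload = b'{"ignition": {"version": "3.4.0"}}'

# Ignition logs the SHA512 hex digest of the raw config it fetched.
digest = hashlib.sha512(payload).hexdigest()
print(digest)
```

The logged digest is 128 hex characters (SHA512 produces 64 bytes), which matches the value recorded in the fetch-stage entry above.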
Dec 16 12:50:19.207944 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1249)
Dec 16 12:50:19.208017 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 7e31dbd7-b976-4d4a-a2e9-e2baacf4ad38
Dec 16 12:50:19.210564 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Dec 16 12:50:19.217854 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Dec 16 12:50:19.217928 kernel: BTRFS info (device nvme0n1p6): enabling free space tree
Dec 16 12:50:19.220258 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 16 12:50:19.249345 ignition[1265]: INFO : Ignition 2.24.0
Dec 16 12:50:19.249345 ignition[1265]: INFO : Stage: files
Dec 16 12:50:19.250736 ignition[1265]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 16 12:50:19.250736 ignition[1265]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 16 12:50:19.250736 ignition[1265]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 16 12:50:19.250736 ignition[1265]: INFO : PUT result: OK
Dec 16 12:50:19.254412 ignition[1265]: DEBUG : files: compiled without relabeling support, skipping
Dec 16 12:50:19.256765 ignition[1265]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Dec 16 12:50:19.256765 ignition[1265]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Dec 16 12:50:19.293834 ignition[1265]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Dec 16 12:50:19.294627 ignition[1265]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Dec 16 12:50:19.294627 ignition[1265]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Dec 16 12:50:19.294263 unknown[1265]: wrote ssh authorized keys file for user: core
Dec 16 12:50:19.335067 ignition[1265]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Dec 16 12:50:19.336040 ignition[1265]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Dec 16 12:50:19.410746 ignition[1265]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Dec 16 12:50:19.580739 ignition[1265]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Dec 16 12:50:19.582185 ignition[1265]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Dec 16 12:50:19.582185 ignition[1265]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Dec 16 12:50:19.582185 ignition[1265]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Dec 16 12:50:19.582185 ignition[1265]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Dec 16 12:50:19.582185 ignition[1265]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 16 12:50:19.582185 ignition[1265]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 16 12:50:19.582185 ignition[1265]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 16 12:50:19.582185 ignition[1265]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 16 12:50:19.588142 ignition[1265]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Dec 16 12:50:19.588142 ignition[1265]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Dec 16 12:50:19.588142 ignition[1265]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Dec 16 12:50:19.590830 ignition[1265]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Dec 16 12:50:19.590830 ignition[1265]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Dec 16 12:50:19.590830 ignition[1265]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1
Dec 16 12:50:20.048520 ignition[1265]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Dec 16 12:50:20.981618 ignition[1265]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Dec 16 12:50:20.981618 ignition[1265]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Dec 16 12:50:21.025063 ignition[1265]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 16 12:50:21.029239 ignition[1265]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 16 12:50:21.029239 ignition[1265]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Dec 16 12:50:21.029239 ignition[1265]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Dec 16 12:50:21.037989 kernel: audit: type=1130 audit(1765889421.029:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:50:21.029000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:50:21.038072 ignition[1265]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Dec 16 12:50:21.038072 ignition[1265]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Dec 16 12:50:21.038072 ignition[1265]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Dec 16 12:50:21.038072 ignition[1265]: INFO : files: files passed
Dec 16 12:50:21.038072 ignition[1265]: INFO : Ignition finished successfully
Dec 16 12:50:21.031161 systemd[1]: Finished ignition-files.service - Ignition (files).
Dec 16 12:50:21.034073 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Dec 16 12:50:21.050846 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Dec 16 12:50:21.053746 systemd[1]: ignition-quench.service: Deactivated successfully.
Dec 16 12:50:21.057023 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Dec 16 12:50:21.067353 kernel: audit: type=1130 audit(1765889421.055:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:50:21.067382 kernel: audit: type=1131 audit(1765889421.055:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:50:21.055000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:50:21.055000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:50:21.069123 initrd-setup-root-after-ignition[1298]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 16 12:50:21.069123 initrd-setup-root-after-ignition[1298]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Dec 16 12:50:21.070721 initrd-setup-root-after-ignition[1302]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 16 12:50:21.072158 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 16 12:50:21.077960 kernel: audit: type=1130 audit(1765889421.070:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:50:21.070000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:50:21.072751 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Dec 16 12:50:21.079265 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Dec 16 12:50:21.157422 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec 16 12:50:21.157544 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
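Every console entry in this log carries a month/day/time prefix, so stage durations can be computed directly from the text. A small sketch using two timestamps taken from entries above (note that strptime defaults the year, which is harmless for same-day deltas):

```python
from datetime import datetime

def entry_time(line: str) -> datetime:
    """Parse the 'Dec 16 12:50:16.375620' prefix of a console journal line."""
    stamp = " ".join(line.split()[:3])
    return datetime.strptime(stamp, "%b %d %H:%M:%S.%f")

# Start of the Ignition fetch stage and completion of initrd-parse-etc,
# both copied from the log above.
start = entry_time("Dec 16 12:50:16.375620 ignition[1099]: Ignition 2.24.0")
end = entry_time("Dec 16 12:50:21.157544 systemd[1]: Finished initrd-parse-etc.service")
print((end - start).total_seconds())  # 4.781924
```

So roughly 4.8 seconds elapse between the start of the fetch stage and the initrd reaching its parse-etc milestone, most of it spent downloading the Helm tarball and Kubernetes sysext in the files stage.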
Dec 16 12:50:21.167214 kernel: audit: type=1130 audit(1765889421.156:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:50:21.167246 kernel: audit: type=1131 audit(1765889421.156:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:50:21.156000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:50:21.156000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:50:21.158648 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Dec 16 12:50:21.167579 systemd[1]: Reached target initrd.target - Initrd Default Target.
Dec 16 12:50:21.168630 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Dec 16 12:50:21.169635 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Dec 16 12:50:21.197209 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 16 12:50:21.204178 kernel: audit: type=1130 audit(1765889421.195:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:50:21.195000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:50:21.200085 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Dec 16 12:50:21.224139 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Dec 16 12:50:21.224364 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Dec 16 12:50:21.225127 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 16 12:50:21.226106 systemd[1]: Stopped target timers.target - Timer Units.
Dec 16 12:50:21.226000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:50:21.227043 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec 16 12:50:21.227274 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 16 12:50:21.228310 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Dec 16 12:50:21.229240 systemd[1]: Stopped target basic.target - Basic System.
Dec 16 12:50:21.229674 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Dec 16 12:50:21.230166 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 16 12:50:21.231093 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Dec 16 12:50:21.231768 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Dec 16 12:50:21.232576 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Dec 16 12:50:21.233343 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 16 12:50:21.234098 systemd[1]: Stopped target sysinit.target - System Initialization.
Dec 16 12:50:21.235000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:50:21.235003 systemd[1]: Stopped target local-fs.target - Local File Systems.
Dec 16 12:50:21.235770 systemd[1]: Stopped target swap.target - Swaps.
Dec 16 12:50:21.236508 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec 16 12:50:21.236702 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Dec 16 12:50:21.238220 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Dec 16 12:50:21.239000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:50:21.239223 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 16 12:50:21.239878 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Dec 16 12:50:21.241000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:50:21.240054 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 16 12:50:21.241000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:50:21.240686 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec 16 12:50:21.240926 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Dec 16 12:50:21.241960 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Dec 16 12:50:21.242193 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 16 12:50:21.243033 systemd[1]: ignition-files.service: Deactivated successfully.
Dec 16 12:50:21.243200 systemd[1]: Stopped ignition-files.service - Ignition (files).
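The interleaved audit[1] records above follow a fixed key=value shape (SERVICE_START/SERVICE_STOP, a unit name inside msg='...', and a res= outcome), so they can be pulled apart with a regular expression. A minimal sketch against one of the SERVICE_STOP lines from this log:

```python
import re

# Matches the action, unit name, and result in a kernel-emitted audit record.
AUDIT = re.compile(r"audit\[1\]: (SERVICE_START|SERVICE_STOP) .*msg='unit=(\S+) .*res=(\w+)'")

line = ("Dec 16 12:50:21.241000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 "
        "ses=4294967295 subj=kernel msg='unit=ignition-files comm=\"systemd\" "
        "exe=\"/usr/lib/systemd/systemd\" hostname=? addr=? terminal=? res=success'")

m = AUDIT.search(line)
print(m.groups())  # ('SERVICE_STOP', 'ignition-files', 'success')
```

Scanning a whole boot log this way gives a quick start/stop ledger per unit; every record in this capture reports res=success.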
Dec 16 12:50:21.246029 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Dec 16 12:50:21.251000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:50:21.249188 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Dec 16 12:50:21.252000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:50:21.253000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:50:21.249797 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Dec 16 12:50:21.250047 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 16 12:50:21.253373 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec 16 12:50:21.253591 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 16 12:50:21.254355 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec 16 12:50:21.254559 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 16 12:50:21.262640 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec 16 12:50:21.262843 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Dec 16 12:50:21.263000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:50:21.263000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:50:21.282503 ignition[1322]: INFO : Ignition 2.24.0
Dec 16 12:50:21.282503 ignition[1322]: INFO : Stage: umount
Dec 16 12:50:21.286020 ignition[1322]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 16 12:50:21.286020 ignition[1322]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 16 12:50:21.286020 ignition[1322]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 16 12:50:21.286020 ignition[1322]: INFO : PUT result: OK
Dec 16 12:50:21.290522 ignition[1322]: INFO : umount: umount passed
Dec 16 12:50:21.290522 ignition[1322]: INFO : Ignition finished successfully
Dec 16 12:50:21.292557 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Dec 16 12:50:21.293365 systemd[1]: ignition-mount.service: Deactivated successfully.
Dec 16 12:50:21.293507 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Dec 16 12:50:21.292000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:50:21.294399 systemd[1]: ignition-disks.service: Deactivated successfully.
Dec 16 12:50:21.293000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:50:21.294462 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Dec 16 12:50:21.294000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:50:21.295722 systemd[1]: ignition-kargs.service: Deactivated successfully.
Dec 16 12:50:21.297000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:50:21.295784 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Dec 16 12:50:21.296483 systemd[1]: ignition-fetch.service: Deactivated successfully.
Dec 16 12:50:21.298000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:50:21.296546 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Dec 16 12:50:21.299259 systemd[1]: Stopped target network.target - Network.
Dec 16 12:50:21.300154 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Dec 16 12:50:21.300222 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 16 12:50:21.300826 systemd[1]: Stopped target paths.target - Path Units.
Dec 16 12:50:21.301461 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec 16 12:50:21.302969 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 16 12:50:21.303482 systemd[1]: Stopped target slices.target - Slice Units.
Dec 16 12:50:21.304084 systemd[1]: Stopped target sockets.target - Socket Units.
Dec 16 12:50:21.305052 systemd[1]: iscsid.socket: Deactivated successfully.
Dec 16 12:50:21.305111 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Dec 16 12:50:21.305672 systemd[1]: iscsiuio.socket: Deactivated successfully.
Dec 16 12:50:21.305000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:50:21.305716 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 16 12:50:21.306000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:50:21.306301 systemd[1]: systemd-journald-audit.socket: Deactivated successfully.
Dec 16 12:50:21.306340 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket.
Dec 16 12:50:21.307003 systemd[1]: ignition-setup.service: Deactivated successfully.
Dec 16 12:50:21.307078 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Dec 16 12:50:21.311000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:50:21.312000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:50:21.307671 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Dec 16 12:50:21.307726 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Dec 16 12:50:21.314000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:50:21.308440 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Dec 16 12:50:21.309172 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Dec 16 12:50:21.311364 systemd[1]: sysroot-boot.service: Deactivated successfully.
Dec 16 12:50:21.311483 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Dec 16 12:50:21.317000 audit: BPF prog-id=6 op=UNLOAD
Dec 16 12:50:21.313654 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Dec 16 12:50:21.313759 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Dec 16 12:50:21.318000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:50:21.315488 systemd[1]: systemd-resolved.service: Deactivated successfully.
Dec 16 12:50:21.315617 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Dec 16 12:50:21.318874 systemd[1]: systemd-networkd.service: Deactivated successfully.
Dec 16 12:50:21.319521 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Dec 16 12:50:21.323103 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Dec 16 12:50:21.323559 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Dec 16 12:50:21.323614 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Dec 16 12:50:21.323000 audit: BPF prog-id=9 op=UNLOAD
Dec 16 12:50:21.325437 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Dec 16 12:50:21.325969 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Dec 16 12:50:21.324000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:50:21.326042 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 16 12:50:21.326929 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 16 12:50:21.326991 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Dec 16 12:50:21.328000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:50:21.329000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:50:21.330708 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 16 12:50:21.330830 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Dec 16 12:50:21.331394 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 16 12:50:21.348575 systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec 16 12:50:21.347000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:50:21.348770 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 16 12:50:21.352510 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec 16 12:50:21.352613 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Dec 16 12:50:21.354373 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec 16 12:50:21.355082 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 16 12:50:21.355819 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec 16 12:50:21.355911 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Dec 16 12:50:21.354000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:50:21.357126 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec 16 12:50:21.357205 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Dec 16 12:50:21.355000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:50:21.358259 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 16 12:50:21.358324 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 16 12:50:21.358000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:50:21.361000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:50:21.361734 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Dec 16 12:50:21.362843 systemd[1]: systemd-network-generator.service: Deactivated successfully. Dec 16 12:50:21.363000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:50:21.362966 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Dec 16 12:50:21.364000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:50:21.365000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:50:21.363600 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. 
Dec 16 12:50:21.366000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:50:21.367000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:50:21.363660 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 16 12:50:21.365304 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Dec 16 12:50:21.365364 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 16 12:50:21.366175 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 16 12:50:21.366232 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Dec 16 12:50:21.367444 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 16 12:50:21.367504 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 12:50:21.369157 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 16 12:50:21.369288 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Dec 16 12:50:21.382651 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 16 12:50:21.382879 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Dec 16 12:50:21.381000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:50:21.381000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:50:21.384123 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Dec 16 12:50:21.386141 systemd[1]: Starting initrd-switch-root.service - Switch Root... Dec 16 12:50:21.405394 systemd[1]: Switching root. Dec 16 12:50:21.435723 systemd-journald[290]: Journal stopped Dec 16 12:50:24.247479 systemd-journald[290]: Received SIGTERM from PID 1 (systemd). Dec 16 12:50:24.247573 kernel: SELinux: policy capability network_peer_controls=1 Dec 16 12:50:24.247603 kernel: SELinux: policy capability open_perms=1 Dec 16 12:50:24.247624 kernel: SELinux: policy capability extended_socket_class=1 Dec 16 12:50:24.247644 kernel: SELinux: policy capability always_check_network=0 Dec 16 12:50:24.247670 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 16 12:50:24.247694 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 16 12:50:24.247714 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 16 12:50:24.247734 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 16 12:50:24.247762 kernel: SELinux: policy capability userspace_initial_context=0 Dec 16 12:50:24.247789 systemd[1]: Successfully loaded SELinux policy in 146.723ms. Dec 16 12:50:24.247820 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 6.705ms. Dec 16 12:50:24.247843 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Dec 16 12:50:24.247863 systemd[1]: Detected virtualization amazon. Dec 16 12:50:24.247884 systemd[1]: Detected architecture x86-64. Dec 16 12:50:24.247923 systemd[1]: Detected first boot. Dec 16 12:50:24.247948 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. 
Dec 16 12:50:24.247968 zram_generator::config[1365]: No configuration found. Dec 16 12:50:24.247995 kernel: Guest personality initialized and is inactive Dec 16 12:50:24.248014 kernel: VMCI host device registered (name=vmci, major=10, minor=258) Dec 16 12:50:24.248032 kernel: Initialized host personality Dec 16 12:50:24.248052 kernel: NET: Registered PF_VSOCK protocol family Dec 16 12:50:24.248070 systemd[1]: Populated /etc with preset unit settings. Dec 16 12:50:24.248092 systemd[1]: initrd-switch-root.service: Deactivated successfully. Dec 16 12:50:24.248111 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Dec 16 12:50:24.248130 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Dec 16 12:50:24.248157 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Dec 16 12:50:24.248177 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Dec 16 12:50:24.248198 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Dec 16 12:50:24.248218 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Dec 16 12:50:24.248241 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Dec 16 12:50:24.248261 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Dec 16 12:50:24.248281 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Dec 16 12:50:24.248300 systemd[1]: Created slice user.slice - User and Session Slice. Dec 16 12:50:24.248320 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 16 12:50:24.248339 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 16 12:50:24.248360 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. 
Dec 16 12:50:24.248383 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Dec 16 12:50:24.248404 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Dec 16 12:50:24.248423 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 16 12:50:24.248443 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Dec 16 12:50:24.248462 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 16 12:50:24.248482 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 16 12:50:24.248505 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Dec 16 12:50:24.248525 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Dec 16 12:50:24.248546 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Dec 16 12:50:24.248564 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Dec 16 12:50:24.248585 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 16 12:50:24.248605 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 16 12:50:24.248624 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes. Dec 16 12:50:24.248647 systemd[1]: Reached target slices.target - Slice Units. Dec 16 12:50:24.248666 systemd[1]: Reached target swap.target - Swaps. Dec 16 12:50:24.248688 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Dec 16 12:50:24.248710 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Dec 16 12:50:24.248729 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Dec 16 12:50:24.248750 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. 
Dec 16 12:50:24.248769 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket. Dec 16 12:50:24.248794 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 16 12:50:24.248815 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket. Dec 16 12:50:24.248836 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket. Dec 16 12:50:24.248857 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 16 12:50:24.248877 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 16 12:50:24.248896 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Dec 16 12:50:24.248941 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Dec 16 12:50:24.248960 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Dec 16 12:50:24.248985 systemd[1]: Mounting media.mount - External Media Directory... Dec 16 12:50:24.249007 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 12:50:24.249029 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Dec 16 12:50:24.249051 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Dec 16 12:50:24.249072 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Dec 16 12:50:24.249094 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 16 12:50:24.249118 systemd[1]: Reached target machines.target - Containers. Dec 16 12:50:24.249140 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Dec 16 12:50:24.249162 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
Dec 16 12:50:24.249185 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 16 12:50:24.249206 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Dec 16 12:50:24.249227 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 16 12:50:24.249247 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 16 12:50:24.249272 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 16 12:50:24.249294 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Dec 16 12:50:24.249315 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 16 12:50:24.249337 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 16 12:50:24.249359 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Dec 16 12:50:24.249382 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Dec 16 12:50:24.249415 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Dec 16 12:50:24.249440 kernel: kauditd_printk_skb: 55 callbacks suppressed Dec 16 12:50:24.249466 kernel: audit: type=1131 audit(1765889424.123:100): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:50:24.249488 systemd[1]: Stopped systemd-fsck-usr.service. Dec 16 12:50:24.249510 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). 
Dec 16 12:50:24.249532 kernel: audit: type=1131 audit(1765889424.131:101): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:50:24.249552 kernel: audit: type=1334 audit(1765889424.133:102): prog-id=14 op=UNLOAD Dec 16 12:50:24.249574 kernel: audit: type=1334 audit(1765889424.133:103): prog-id=13 op=UNLOAD Dec 16 12:50:24.249591 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 16 12:50:24.249610 kernel: audit: type=1334 audit(1765889424.133:104): prog-id=15 op=LOAD Dec 16 12:50:24.249645 kernel: audit: type=1334 audit(1765889424.141:105): prog-id=16 op=LOAD Dec 16 12:50:24.249672 kernel: audit: type=1334 audit(1765889424.141:106): prog-id=17 op=LOAD Dec 16 12:50:24.249691 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 16 12:50:24.249711 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 16 12:50:24.249732 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Dec 16 12:50:24.249756 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Dec 16 12:50:24.249778 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 16 12:50:24.249800 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 12:50:24.249823 kernel: fuse: init (API version 7.41) Dec 16 12:50:24.249845 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Dec 16 12:50:24.249868 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Dec 16 12:50:24.249894 systemd[1]: Mounted media.mount - External Media Directory. 
Dec 16 12:50:24.249937 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Dec 16 12:50:24.249961 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Dec 16 12:50:24.249985 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Dec 16 12:50:24.250008 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 16 12:50:24.250039 kernel: audit: type=1130 audit(1765889424.220:107): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:50:24.250063 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 16 12:50:24.250088 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Dec 16 12:50:24.250111 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 16 12:50:24.250134 kernel: audit: type=1130 audit(1765889424.228:108): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:50:24.250155 kernel: audit: type=1131 audit(1765889424.228:109): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:50:24.250180 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 16 12:50:24.250239 systemd-journald[1441]: Collecting audit messages is enabled. Dec 16 12:50:24.250281 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 16 12:50:24.250307 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 16 12:50:24.250329 systemd[1]: modprobe@fuse.service: Deactivated successfully. 
Dec 16 12:50:24.250352 systemd-journald[1441]: Journal started Dec 16 12:50:24.250391 systemd-journald[1441]: Runtime Journal (/run/log/journal/ec278c07ee14d852be44a4e841cf36c5) is 4.7M, max 38M, 33.2M free. Dec 16 12:50:23.977000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Dec 16 12:50:24.123000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:50:24.131000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:50:24.133000 audit: BPF prog-id=14 op=UNLOAD Dec 16 12:50:24.133000 audit: BPF prog-id=13 op=UNLOAD Dec 16 12:50:24.133000 audit: BPF prog-id=15 op=LOAD Dec 16 12:50:24.141000 audit: BPF prog-id=16 op=LOAD Dec 16 12:50:24.141000 audit: BPF prog-id=17 op=LOAD Dec 16 12:50:24.220000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:50:24.228000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:50:24.228000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:50:24.241000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:50:24.241000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:50:24.243000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Dec 16 12:50:24.243000 audit[1441]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=5 a1=7ffe94211340 a2=4000 a3=0 items=0 ppid=1 pid=1441 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:50:24.243000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Dec 16 12:50:24.247000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:50:24.247000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:50:23.888050 systemd[1]: Queued start job for default target multi-user.target. Dec 16 12:50:23.912330 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Dec 16 12:50:23.912877 systemd[1]: systemd-journald.service: Deactivated successfully. Dec 16 12:50:24.255085 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. 
Dec 16 12:50:24.255137 systemd[1]: Started systemd-journald.service - Journal Service. Dec 16 12:50:24.252000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:50:24.252000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:50:24.254000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:50:24.257000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:50:24.257000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:50:24.258000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:50:24.259000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:50:24.258289 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 16 12:50:24.258514 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Dec 16 12:50:24.259585 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 16 12:50:24.260652 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 16 12:50:24.262856 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Dec 16 12:50:24.261000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:50:24.277096 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 16 12:50:24.279202 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Dec 16 12:50:24.285490 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Dec 16 12:50:24.291036 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Dec 16 12:50:24.291704 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 16 12:50:24.291749 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 16 12:50:24.297424 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Dec 16 12:50:24.301764 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 16 12:50:24.301997 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Dec 16 12:50:24.312156 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Dec 16 12:50:24.316166 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... 
Dec 16 12:50:24.318028 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 16 12:50:24.324063 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Dec 16 12:50:24.326028 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 16 12:50:24.335421 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 16 12:50:24.344205 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Dec 16 12:50:24.347554 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 16 12:50:24.352000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:50:24.353314 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Dec 16 12:50:24.355321 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Dec 16 12:50:24.356366 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Dec 16 12:50:24.368000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:50:24.369004 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Dec 16 12:50:24.370281 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Dec 16 12:50:24.378176 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... 
Dec 16 12:50:24.403452 systemd-journald[1441]: Time spent on flushing to /var/log/journal/ec278c07ee14d852be44a4e841cf36c5 is 134.629ms for 1154 entries. Dec 16 12:50:24.403452 systemd-journald[1441]: System Journal (/var/log/journal/ec278c07ee14d852be44a4e841cf36c5) is 8M, max 588.1M, 580.1M free. Dec 16 12:50:24.551691 systemd-journald[1441]: Received client request to flush runtime journal. Dec 16 12:50:24.551766 kernel: ACPI: bus type drm_connector registered Dec 16 12:50:24.551810 kernel: loop1: detected capacity change from 0 to 50784 Dec 16 12:50:24.473000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:50:24.480000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:50:24.480000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:50:24.533000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:50:24.550000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:50:24.474086 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 16 12:50:24.480354 systemd[1]: modprobe@drm.service: Deactivated successfully. 
Dec 16 12:50:24.481006 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 16 12:50:24.519753 systemd-tmpfiles[1481]: ACLs are not supported, ignoring. Dec 16 12:50:24.519777 systemd-tmpfiles[1481]: ACLs are not supported, ignoring. Dec 16 12:50:24.534998 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Dec 16 12:50:24.551669 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 16 12:50:24.557000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:50:24.555895 systemd[1]: Starting systemd-sysusers.service - Create System Users... Dec 16 12:50:24.558350 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Dec 16 12:50:24.561320 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 16 12:50:24.561000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:50:24.579000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:50:24.581208 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Dec 16 12:50:24.612000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:50:24.613479 systemd[1]: Finished systemd-sysusers.service - Create System Users. 
Dec 16 12:50:24.613000 audit: BPF prog-id=18 op=LOAD Dec 16 12:50:24.614000 audit: BPF prog-id=19 op=LOAD Dec 16 12:50:24.614000 audit: BPF prog-id=20 op=LOAD Dec 16 12:50:24.618156 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer... Dec 16 12:50:24.619000 audit: BPF prog-id=21 op=LOAD Dec 16 12:50:24.622068 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 16 12:50:24.627199 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 16 12:50:24.631000 audit: BPF prog-id=22 op=LOAD Dec 16 12:50:24.631000 audit: BPF prog-id=23 op=LOAD Dec 16 12:50:24.631000 audit: BPF prog-id=24 op=LOAD Dec 16 12:50:24.635175 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Dec 16 12:50:24.637000 audit: BPF prog-id=25 op=LOAD Dec 16 12:50:24.637000 audit: BPF prog-id=26 op=LOAD Dec 16 12:50:24.637000 audit: BPF prog-id=27 op=LOAD Dec 16 12:50:24.641386 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager... Dec 16 12:50:24.681996 systemd-tmpfiles[1521]: ACLs are not supported, ignoring. Dec 16 12:50:24.682025 systemd-tmpfiles[1521]: ACLs are not supported, ignoring. Dec 16 12:50:24.693000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:50:24.695210 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 16 12:50:24.753502 systemd[1]: Started systemd-userdbd.service - User Database Manager. Dec 16 12:50:24.752000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:50:24.769980 systemd-nsresourced[1523]: Not setting up BPF subsystem, as functionality has been disabled at compile time. Dec 16 12:50:24.773000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:50:24.774368 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager. Dec 16 12:50:24.827240 kernel: loop2: detected capacity change from 0 to 73176 Dec 16 12:50:24.915699 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 16 12:50:24.972983 systemd-oomd[1519]: No swap; memory pressure usage will be degraded Dec 16 12:50:24.973000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:50:24.974315 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer. Dec 16 12:50:25.055472 systemd-resolved[1520]: Positive Trust Anchors: Dec 16 12:50:25.055790 systemd-resolved[1520]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 16 12:50:25.055846 systemd-resolved[1520]: . 
IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Dec 16 12:50:25.055944 systemd-resolved[1520]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 16 12:50:25.062814 systemd-resolved[1520]: Defaulting to hostname 'linux'. Dec 16 12:50:25.063000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:50:25.064603 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 16 12:50:25.065153 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 16 12:50:25.135937 kernel: loop3: detected capacity change from 0 to 229808 Dec 16 12:50:25.261645 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Dec 16 12:50:25.260000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:50:25.260000 audit: BPF prog-id=8 op=UNLOAD Dec 16 12:50:25.260000 audit: BPF prog-id=7 op=UNLOAD Dec 16 12:50:25.261000 audit: BPF prog-id=28 op=LOAD Dec 16 12:50:25.261000 audit: BPF prog-id=29 op=LOAD Dec 16 12:50:25.264208 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Dec 16 12:50:25.309100 systemd-udevd[1543]: Using default interface naming scheme 'v257'. Dec 16 12:50:25.479938 kernel: loop4: detected capacity change from 0 to 111560 Dec 16 12:50:25.505000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:50:25.506000 audit: BPF prog-id=30 op=LOAD Dec 16 12:50:25.506776 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 16 12:50:25.510770 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 16 12:50:25.573628 (udev-worker)[1546]: Network interface NamePolicy= disabled on kernel command line. Dec 16 12:50:25.576225 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Dec 16 12:50:25.591620 systemd-networkd[1548]: lo: Link UP Dec 16 12:50:25.591895 systemd-networkd[1548]: lo: Gained carrier Dec 16 12:50:25.593048 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 16 12:50:25.591000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:50:25.594014 systemd[1]: Reached target network.target - Network. Dec 16 12:50:25.596450 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Dec 16 12:50:25.599061 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Dec 16 12:50:25.628359 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. 
Dec 16 12:50:25.626000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-persistent-storage comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:50:25.645758 systemd-networkd[1548]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 16 12:50:25.645923 systemd-networkd[1548]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 16 12:50:25.647989 systemd-networkd[1548]: eth0: Link UP Dec 16 12:50:25.648123 systemd-networkd[1548]: eth0: Gained carrier Dec 16 12:50:25.648152 systemd-networkd[1548]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 16 12:50:25.658933 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr Dec 16 12:50:25.660191 systemd-networkd[1548]: eth0: DHCPv4 address 172.31.17.11/20, gateway 172.31.16.1 acquired from 172.31.16.1 Dec 16 12:50:25.668936 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4 Dec 16 12:50:25.669029 kernel: mousedev: PS/2 mouse device common for all mice Dec 16 12:50:25.674226 kernel: ACPI: button: Power Button [PWRF] Dec 16 12:50:25.674313 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input5 Dec 16 12:50:25.676048 kernel: ACPI: button: Sleep Button [SLPF] Dec 16 12:50:25.715155 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 12:50:25.746000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:50:25.746000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:50:25.747765 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 16 12:50:25.748006 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 12:50:25.754882 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 12:50:25.837946 kernel: loop5: detected capacity change from 0 to 50784 Dec 16 12:50:25.856935 kernel: loop6: detected capacity change from 0 to 73176 Dec 16 12:50:25.876933 kernel: loop7: detected capacity change from 0 to 229808 Dec 16 12:50:25.906334 kernel: loop1: detected capacity change from 0 to 111560 Dec 16 12:50:25.926735 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 12:50:25.927000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:50:25.929191 (sd-merge)[1609]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw', 'oem-ami.raw'. Dec 16 12:50:25.935249 (sd-merge)[1609]: Merged extensions into '/usr'. Dec 16 12:50:25.947285 systemd[1]: Reload requested from client PID 1480 ('systemd-sysext') (unit systemd-sysext.service)... Dec 16 12:50:25.947301 systemd[1]: Reloading... Dec 16 12:50:26.027931 zram_generator::config[1714]: No configuration found. Dec 16 12:50:26.262988 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Dec 16 12:50:26.264074 systemd[1]: Reloading finished in 316 ms. Dec 16 12:50:26.286313 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. 
Dec 16 12:50:26.284000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:50:26.306086 systemd[1]: Starting ensure-sysext.service... Dec 16 12:50:26.311100 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Dec 16 12:50:26.319000 audit: BPF prog-id=31 op=LOAD Dec 16 12:50:26.319000 audit: BPF prog-id=30 op=UNLOAD Dec 16 12:50:26.320000 audit: BPF prog-id=32 op=LOAD Dec 16 12:50:26.320000 audit: BPF prog-id=21 op=UNLOAD Dec 16 12:50:26.318161 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 16 12:50:26.322000 audit: BPF prog-id=33 op=LOAD Dec 16 12:50:26.322000 audit: BPF prog-id=15 op=UNLOAD Dec 16 12:50:26.322000 audit: BPF prog-id=34 op=LOAD Dec 16 12:50:26.322000 audit: BPF prog-id=35 op=LOAD Dec 16 12:50:26.322000 audit: BPF prog-id=16 op=UNLOAD Dec 16 12:50:26.322000 audit: BPF prog-id=17 op=UNLOAD Dec 16 12:50:26.323000 audit: BPF prog-id=36 op=LOAD Dec 16 12:50:26.323000 audit: BPF prog-id=25 op=UNLOAD Dec 16 12:50:26.323000 audit: BPF prog-id=37 op=LOAD Dec 16 12:50:26.323000 audit: BPF prog-id=38 op=LOAD Dec 16 12:50:26.323000 audit: BPF prog-id=26 op=UNLOAD Dec 16 12:50:26.323000 audit: BPF prog-id=27 op=UNLOAD Dec 16 12:50:26.330000 audit: BPF prog-id=39 op=LOAD Dec 16 12:50:26.330000 audit: BPF prog-id=40 op=LOAD Dec 16 12:50:26.330000 audit: BPF prog-id=28 op=UNLOAD Dec 16 12:50:26.330000 audit: BPF prog-id=29 op=UNLOAD Dec 16 12:50:26.331000 audit: BPF prog-id=41 op=LOAD Dec 16 12:50:26.331000 audit: BPF prog-id=18 op=UNLOAD Dec 16 12:50:26.331000 audit: BPF prog-id=42 op=LOAD Dec 16 12:50:26.331000 audit: BPF prog-id=43 op=LOAD Dec 16 12:50:26.331000 audit: BPF prog-id=19 op=UNLOAD Dec 16 12:50:26.332000 audit: BPF prog-id=20 op=UNLOAD Dec 16 12:50:26.332000 audit: BPF 
prog-id=44 op=LOAD Dec 16 12:50:26.332000 audit: BPF prog-id=22 op=UNLOAD Dec 16 12:50:26.333000 audit: BPF prog-id=45 op=LOAD Dec 16 12:50:26.333000 audit: BPF prog-id=46 op=LOAD Dec 16 12:50:26.333000 audit: BPF prog-id=23 op=UNLOAD Dec 16 12:50:26.333000 audit: BPF prog-id=24 op=UNLOAD Dec 16 12:50:26.346768 systemd[1]: Reload requested from client PID 1765 ('systemctl') (unit ensure-sysext.service)... Dec 16 12:50:26.346793 systemd[1]: Reloading... Dec 16 12:50:26.382330 systemd-tmpfiles[1767]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Dec 16 12:50:26.382379 systemd-tmpfiles[1767]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Dec 16 12:50:26.382972 systemd-tmpfiles[1767]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 16 12:50:26.384982 systemd-tmpfiles[1767]: ACLs are not supported, ignoring. Dec 16 12:50:26.385077 systemd-tmpfiles[1767]: ACLs are not supported, ignoring. Dec 16 12:50:26.398825 systemd-tmpfiles[1767]: Detected autofs mount point /boot during canonicalization of boot. Dec 16 12:50:26.398850 systemd-tmpfiles[1767]: Skipping /boot Dec 16 12:50:26.414891 systemd-tmpfiles[1767]: Detected autofs mount point /boot during canonicalization of boot. Dec 16 12:50:26.416764 systemd-tmpfiles[1767]: Skipping /boot Dec 16 12:50:26.450963 zram_generator::config[1803]: No configuration found. Dec 16 12:50:26.695472 systemd[1]: Reloading finished in 348 ms. 
Dec 16 12:50:26.718000 audit: BPF prog-id=47 op=LOAD Dec 16 12:50:26.718000 audit: BPF prog-id=33 op=UNLOAD Dec 16 12:50:26.718000 audit: BPF prog-id=48 op=LOAD Dec 16 12:50:26.718000 audit: BPF prog-id=49 op=LOAD Dec 16 12:50:26.718000 audit: BPF prog-id=34 op=UNLOAD Dec 16 12:50:26.718000 audit: BPF prog-id=35 op=UNLOAD Dec 16 12:50:26.719000 audit: BPF prog-id=50 op=LOAD Dec 16 12:50:26.719000 audit: BPF prog-id=41 op=UNLOAD Dec 16 12:50:26.719000 audit: BPF prog-id=51 op=LOAD Dec 16 12:50:26.719000 audit: BPF prog-id=52 op=LOAD Dec 16 12:50:26.719000 audit: BPF prog-id=42 op=UNLOAD Dec 16 12:50:26.719000 audit: BPF prog-id=43 op=UNLOAD Dec 16 12:50:26.720000 audit: BPF prog-id=53 op=LOAD Dec 16 12:50:26.720000 audit: BPF prog-id=36 op=UNLOAD Dec 16 12:50:26.720000 audit: BPF prog-id=54 op=LOAD Dec 16 12:50:26.720000 audit: BPF prog-id=55 op=LOAD Dec 16 12:50:26.720000 audit: BPF prog-id=37 op=UNLOAD Dec 16 12:50:26.720000 audit: BPF prog-id=38 op=UNLOAD Dec 16 12:50:26.721000 audit: BPF prog-id=56 op=LOAD Dec 16 12:50:26.721000 audit: BPF prog-id=31 op=UNLOAD Dec 16 12:50:26.721000 audit: BPF prog-id=57 op=LOAD Dec 16 12:50:26.721000 audit: BPF prog-id=58 op=LOAD Dec 16 12:50:26.721000 audit: BPF prog-id=39 op=UNLOAD Dec 16 12:50:26.721000 audit: BPF prog-id=40 op=UNLOAD Dec 16 12:50:26.722000 audit: BPF prog-id=59 op=LOAD Dec 16 12:50:26.722000 audit: BPF prog-id=32 op=UNLOAD Dec 16 12:50:26.723000 audit: BPF prog-id=60 op=LOAD Dec 16 12:50:26.723000 audit: BPF prog-id=44 op=UNLOAD Dec 16 12:50:26.723000 audit: BPF prog-id=61 op=LOAD Dec 16 12:50:26.723000 audit: BPF prog-id=62 op=LOAD Dec 16 12:50:26.723000 audit: BPF prog-id=45 op=UNLOAD Dec 16 12:50:26.723000 audit: BPF prog-id=46 op=UNLOAD Dec 16 12:50:26.729773 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. 
Dec 16 12:50:26.728000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:50:26.730935 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 16 12:50:26.729000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:50:26.742414 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 16 12:50:26.745631 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Dec 16 12:50:26.757494 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Dec 16 12:50:26.763039 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Dec 16 12:50:26.765430 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Dec 16 12:50:26.772017 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 12:50:26.772944 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 16 12:50:26.777293 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 16 12:50:26.784011 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 16 12:50:26.786614 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 16 12:50:26.787400 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Dec 16 12:50:26.787699 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Dec 16 12:50:26.787866 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 16 12:50:26.788078 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 12:50:26.794670 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 12:50:26.796024 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 16 12:50:26.794000 audit[1861]: SYSTEM_BOOT pid=1861 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Dec 16 12:50:26.796313 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 16 12:50:26.796566 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Dec 16 12:50:26.796729 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 16 12:50:26.796879 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Dec 16 12:50:26.814754 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 12:50:26.816880 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 16 12:50:26.822424 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 16 12:50:26.823362 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 16 12:50:26.823659 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Dec 16 12:50:26.823830 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 16 12:50:26.824119 systemd[1]: Reached target time-set.target - System Time Set. Dec 16 12:50:26.825137 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 12:50:26.828237 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Dec 16 12:50:26.828000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:50:26.830875 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 16 12:50:26.831186 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 16 12:50:26.830000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:50:26.830000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:50:26.831000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:50:26.831000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:50:26.832728 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 16 12:50:26.833025 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 16 12:50:26.834734 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 16 12:50:26.837452 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 16 12:50:26.844000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:50:26.844000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:50:26.848000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:50:26.848000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:50:26.849164 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 16 12:50:26.849475 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 16 12:50:26.860572 systemd[1]: Finished ensure-sysext.service. Dec 16 12:50:26.859000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:50:26.868135 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 16 12:50:26.868302 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 16 12:50:26.877742 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Dec 16 12:50:26.876000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:50:27.027075 systemd-networkd[1548]: eth0: Gained IPv6LL Dec 16 12:50:27.029466 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Dec 16 12:50:27.028000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:50:27.030128 systemd[1]: Reached target network-online.target - Network is Online. 
Dec 16 12:50:27.076000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Dec 16 12:50:27.076000 audit[1893]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fff333a2b60 a2=420 a3=0 items=0 ppid=1857 pid=1893 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:50:27.076000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 16 12:50:27.079129 augenrules[1893]: No rules Dec 16 12:50:27.080325 systemd[1]: audit-rules.service: Deactivated successfully. Dec 16 12:50:27.080609 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 16 12:50:27.346533 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Dec 16 12:50:27.347640 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 16 12:50:29.452066 ldconfig[1859]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 16 12:50:29.457634 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Dec 16 12:50:29.459483 systemd[1]: Starting systemd-update-done.service - Update is Completed... Dec 16 12:50:29.482841 systemd[1]: Finished systemd-update-done.service - Update is Completed. Dec 16 12:50:29.483543 systemd[1]: Reached target sysinit.target - System Initialization. Dec 16 12:50:29.484084 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Dec 16 12:50:29.484479 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. 
Dec 16 12:50:29.484825 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Dec 16 12:50:29.485319 systemd[1]: Started logrotate.timer - Daily rotation of log files. Dec 16 12:50:29.485762 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Dec 16 12:50:29.486117 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update. Dec 16 12:50:29.486521 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update. Dec 16 12:50:29.486831 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Dec 16 12:50:29.487297 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 16 12:50:29.487330 systemd[1]: Reached target paths.target - Path Units. Dec 16 12:50:29.487622 systemd[1]: Reached target timers.target - Timer Units. Dec 16 12:50:29.489057 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Dec 16 12:50:29.490882 systemd[1]: Starting docker.socket - Docker Socket for the API... Dec 16 12:50:29.493532 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Dec 16 12:50:29.494117 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Dec 16 12:50:29.494553 systemd[1]: Reached target ssh-access.target - SSH Access Available. Dec 16 12:50:29.499009 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Dec 16 12:50:29.499817 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Dec 16 12:50:29.501025 systemd[1]: Listening on docker.socket - Docker Socket for the API. Dec 16 12:50:29.502345 systemd[1]: Reached target sockets.target - Socket Units. Dec 16 12:50:29.502859 systemd[1]: Reached target basic.target - Basic System. 
Dec 16 12:50:29.503319 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Dec 16 12:50:29.503360 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Dec 16 12:50:29.504490 systemd[1]: Starting containerd.service - containerd container runtime... Dec 16 12:50:29.509123 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Dec 16 12:50:29.511645 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Dec 16 12:50:29.518348 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Dec 16 12:50:29.523091 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Dec 16 12:50:29.528319 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Dec 16 12:50:29.529472 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Dec 16 12:50:29.534531 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Dec 16 12:50:29.540171 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 12:50:29.544166 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Dec 16 12:50:29.546650 jq[1909]: false Dec 16 12:50:29.548253 systemd[1]: Started ntpd.service - Network Time Service. Dec 16 12:50:29.553642 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Dec 16 12:50:29.558052 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Dec 16 12:50:29.575377 systemd[1]: Starting setup-oem.service - Setup OEM... Dec 16 12:50:29.598170 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Dec 16 12:50:29.606555 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
Dec 16 12:50:29.626085 systemd[1]: Starting systemd-logind.service - User Login Management... Dec 16 12:50:29.629143 extend-filesystems[1910]: Found /dev/nvme0n1p6 Dec 16 12:50:29.627021 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 16 12:50:29.642480 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 16 12:50:29.644807 systemd[1]: Starting update-engine.service - Update Engine... Dec 16 12:50:29.649638 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Dec 16 12:50:29.663629 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Dec 16 12:50:29.665722 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 16 12:50:29.667004 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Dec 16 12:50:29.680461 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 16 12:50:29.680813 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Dec 16 12:50:29.703101 extend-filesystems[1910]: Found /dev/nvme0n1p9 Dec 16 12:50:29.714352 google_oslogin_nss_cache[1911]: oslogin_cache_refresh[1911]: Refreshing passwd entry cache Dec 16 12:50:29.707979 oslogin_cache_refresh[1911]: Refreshing passwd entry cache Dec 16 12:50:29.733800 extend-filesystems[1910]: Checking size of /dev/nvme0n1p9 Dec 16 12:50:29.743829 jq[1935]: true Dec 16 12:50:29.752290 google_oslogin_nss_cache[1911]: oslogin_cache_refresh[1911]: Failure getting users, quitting Dec 16 12:50:29.752290 google_oslogin_nss_cache[1911]: oslogin_cache_refresh[1911]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. 
Dec 16 12:50:29.752290 google_oslogin_nss_cache[1911]: oslogin_cache_refresh[1911]: Refreshing group entry cache Dec 16 12:50:29.750809 oslogin_cache_refresh[1911]: Failure getting users, quitting Dec 16 12:50:29.750834 oslogin_cache_refresh[1911]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Dec 16 12:50:29.750896 oslogin_cache_refresh[1911]: Refreshing group entry cache Dec 16 12:50:29.768932 google_oslogin_nss_cache[1911]: oslogin_cache_refresh[1911]: Failure getting groups, quitting Dec 16 12:50:29.768932 google_oslogin_nss_cache[1911]: oslogin_cache_refresh[1911]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Dec 16 12:50:29.762108 oslogin_cache_refresh[1911]: Failure getting groups, quitting Dec 16 12:50:29.762124 oslogin_cache_refresh[1911]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Dec 16 12:50:29.775138 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Dec 16 12:50:29.787023 extend-filesystems[1910]: Resized partition /dev/nvme0n1p9 Dec 16 12:50:29.783489 systemd[1]: motdgen.service: Deactivated successfully. Dec 16 12:50:29.791167 extend-filesystems[1968]: resize2fs 1.47.3 (8-Jul-2025) Dec 16 12:50:29.828052 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 1617920 to 2604027 blocks Dec 16 12:50:29.828104 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 2604027 Dec 16 12:50:29.785016 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Dec 16 12:50:29.828224 tar[1942]: linux-amd64/LICENSE Dec 16 12:50:29.828224 tar[1942]: linux-amd64/helm Dec 16 12:50:29.828598 extend-filesystems[1968]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Dec 16 12:50:29.828598 extend-filesystems[1968]: old_desc_blocks = 1, new_desc_blocks = 2 Dec 16 12:50:29.828598 extend-filesystems[1968]: The filesystem on /dev/nvme0n1p9 is now 2604027 (4k) blocks long. 
Dec 16 12:50:29.856437 coreos-metadata[1906]: Dec 16 12:50:29.826 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Dec 16 12:50:29.856437 coreos-metadata[1906]: Dec 16 12:50:29.832 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Dec 16 12:50:29.856437 coreos-metadata[1906]: Dec 16 12:50:29.834 INFO Fetch successful Dec 16 12:50:29.856437 coreos-metadata[1906]: Dec 16 12:50:29.836 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Dec 16 12:50:29.856437 coreos-metadata[1906]: Dec 16 12:50:29.841 INFO Fetch successful Dec 16 12:50:29.856437 coreos-metadata[1906]: Dec 16 12:50:29.841 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Dec 16 12:50:29.856437 coreos-metadata[1906]: Dec 16 12:50:29.846 INFO Fetch successful Dec 16 12:50:29.856437 coreos-metadata[1906]: Dec 16 12:50:29.846 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Dec 16 12:50:29.856437 coreos-metadata[1906]: Dec 16 12:50:29.849 INFO Fetch successful Dec 16 12:50:29.856437 coreos-metadata[1906]: Dec 16 12:50:29.849 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Dec 16 12:50:29.856437 coreos-metadata[1906]: Dec 16 12:50:29.853 INFO Fetch failed with 404: resource not found Dec 16 12:50:29.856437 coreos-metadata[1906]: Dec 16 12:50:29.853 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Dec 16 12:50:29.856437 coreos-metadata[1906]: Dec 16 12:50:29.853 INFO Fetch successful Dec 16 12:50:29.856437 coreos-metadata[1906]: Dec 16 12:50:29.854 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Dec 16 12:50:29.856437 coreos-metadata[1906]: Dec 16 12:50:29.856 INFO Fetch successful Dec 16 12:50:29.856437 coreos-metadata[1906]: Dec 16 12:50:29.856 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Dec 16 
12:50:29.793272 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Dec 16 12:50:29.860437 update_engine[1934]: I20251216 12:50:29.830497 1934 main.cc:92] Flatcar Update Engine starting Dec 16 12:50:29.860685 extend-filesystems[1910]: Resized filesystem in /dev/nvme0n1p9 Dec 16 12:50:29.862332 coreos-metadata[1906]: Dec 16 12:50:29.857 INFO Fetch successful Dec 16 12:50:29.862332 coreos-metadata[1906]: Dec 16 12:50:29.857 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Dec 16 12:50:29.862332 coreos-metadata[1906]: Dec 16 12:50:29.861 INFO Fetch successful Dec 16 12:50:29.862332 coreos-metadata[1906]: Dec 16 12:50:29.861 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Dec 16 12:50:29.794213 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Dec 16 12:50:29.862617 jq[1959]: true Dec 16 12:50:29.825440 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 16 12:50:29.871738 coreos-metadata[1906]: Dec 16 12:50:29.864 INFO Fetch successful Dec 16 12:50:29.863135 dbus-daemon[1907]: [system] SELinux support is enabled Dec 16 12:50:29.825820 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Dec 16 12:50:29.863443 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
Dec 16 12:50:29.874225 dbus-daemon[1907]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1548 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Dec 16 12:50:29.879532 update_engine[1934]: I20251216 12:50:29.879467 1934 update_check_scheduler.cc:74] Next update check in 3m54s Dec 16 12:50:29.884267 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 16 12:50:29.884325 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Dec 16 12:50:29.885535 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 16 12:50:29.885563 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Dec 16 12:50:29.903452 systemd[1]: Started update-engine.service - Update Engine. Dec 16 12:50:29.904311 dbus-daemon[1907]: [system] Successfully activated service 'org.freedesktop.systemd1' Dec 16 12:50:29.911218 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Dec 16 12:50:29.924228 systemd[1]: Started locksmithd.service - Cluster reboot manager. Dec 16 12:50:29.927655 systemd[1]: Finished setup-oem.service - Setup OEM. Dec 16 12:50:29.934004 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. 
Dec 16 12:50:29.955780 ntpd[1914]: ntpd 4.2.8p18@1.4062-o Mon Dec 15 23:46:33 UTC 2025 (1): Starting Dec 16 12:50:29.957321 ntpd[1914]: 16 Dec 12:50:29 ntpd[1914]: ntpd 4.2.8p18@1.4062-o Mon Dec 15 23:46:33 UTC 2025 (1): Starting Dec 16 12:50:29.957321 ntpd[1914]: 16 Dec 12:50:29 ntpd[1914]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Dec 16 12:50:29.957321 ntpd[1914]: 16 Dec 12:50:29 ntpd[1914]: ---------------------------------------------------- Dec 16 12:50:29.957321 ntpd[1914]: 16 Dec 12:50:29 ntpd[1914]: ntp-4 is maintained by Network Time Foundation, Dec 16 12:50:29.957321 ntpd[1914]: 16 Dec 12:50:29 ntpd[1914]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Dec 16 12:50:29.957321 ntpd[1914]: 16 Dec 12:50:29 ntpd[1914]: corporation. Support and training for ntp-4 are Dec 16 12:50:29.957321 ntpd[1914]: 16 Dec 12:50:29 ntpd[1914]: available at https://www.nwtime.org/support Dec 16 12:50:29.957321 ntpd[1914]: 16 Dec 12:50:29 ntpd[1914]: ---------------------------------------------------- Dec 16 12:50:29.955856 ntpd[1914]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Dec 16 12:50:29.955865 ntpd[1914]: ---------------------------------------------------- Dec 16 12:50:29.955874 ntpd[1914]: ntp-4 is maintained by Network Time Foundation, Dec 16 12:50:29.955883 ntpd[1914]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Dec 16 12:50:29.955892 ntpd[1914]: corporation. 
Support and training for ntp-4 are Dec 16 12:50:29.955918 ntpd[1914]: available at https://www.nwtime.org/support Dec 16 12:50:29.955928 ntpd[1914]: ---------------------------------------------------- Dec 16 12:50:29.962102 ntpd[1914]: proto: precision = 0.083 usec (-23) Dec 16 12:50:29.964042 ntpd[1914]: 16 Dec 12:50:29 ntpd[1914]: proto: precision = 0.083 usec (-23) Dec 16 12:50:29.967009 ntpd[1914]: basedate set to 2025-12-03 Dec 16 12:50:29.970056 ntpd[1914]: 16 Dec 12:50:29 ntpd[1914]: basedate set to 2025-12-03 Dec 16 12:50:29.970056 ntpd[1914]: 16 Dec 12:50:29 ntpd[1914]: gps base set to 2025-12-07 (week 2396) Dec 16 12:50:29.970056 ntpd[1914]: 16 Dec 12:50:29 ntpd[1914]: Listen and drop on 0 v6wildcard [::]:123 Dec 16 12:50:29.970056 ntpd[1914]: 16 Dec 12:50:29 ntpd[1914]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Dec 16 12:50:29.970056 ntpd[1914]: 16 Dec 12:50:29 ntpd[1914]: Listen normally on 2 lo 127.0.0.1:123 Dec 16 12:50:29.970056 ntpd[1914]: 16 Dec 12:50:29 ntpd[1914]: Listen normally on 3 eth0 172.31.17.11:123 Dec 16 12:50:29.970056 ntpd[1914]: 16 Dec 12:50:29 ntpd[1914]: Listen normally on 4 lo [::1]:123 Dec 16 12:50:29.970056 ntpd[1914]: 16 Dec 12:50:29 ntpd[1914]: Listen normally on 5 eth0 [fe80::454:71ff:fe9c:38b9%2]:123 Dec 16 12:50:29.970056 ntpd[1914]: 16 Dec 12:50:29 ntpd[1914]: Listening on routing socket on fd #22 for interface updates Dec 16 12:50:29.967036 ntpd[1914]: gps base set to 2025-12-07 (week 2396) Dec 16 12:50:29.967177 ntpd[1914]: Listen and drop on 0 v6wildcard [::]:123 Dec 16 12:50:29.967206 ntpd[1914]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Dec 16 12:50:29.967439 ntpd[1914]: Listen normally on 2 lo 127.0.0.1:123 Dec 16 12:50:29.967471 ntpd[1914]: Listen normally on 3 eth0 172.31.17.11:123 Dec 16 12:50:29.967504 ntpd[1914]: Listen normally on 4 lo [::1]:123 Dec 16 12:50:29.967535 ntpd[1914]: Listen normally on 5 eth0 [fe80::454:71ff:fe9c:38b9%2]:123 Dec 16 12:50:29.967562 ntpd[1914]: Listening on routing socket on fd 
#22 for interface updates Dec 16 12:50:29.980003 ntpd[1914]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Dec 16 12:50:29.981337 ntpd[1914]: 16 Dec 12:50:29 ntpd[1914]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Dec 16 12:50:29.981337 ntpd[1914]: 16 Dec 12:50:29 ntpd[1914]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Dec 16 12:50:29.980035 ntpd[1914]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Dec 16 12:50:30.041386 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Dec 16 12:50:30.042336 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Dec 16 12:50:30.060345 systemd-logind[1926]: Watching system buttons on /dev/input/event2 (Power Button) Dec 16 12:50:30.063810 systemd-logind[1926]: Watching system buttons on /dev/input/event3 (Sleep Button) Dec 16 12:50:30.063959 systemd-logind[1926]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Dec 16 12:50:30.066062 systemd-logind[1926]: New seat seat0. Dec 16 12:50:30.067301 systemd[1]: Started systemd-logind.service - User Login Management. Dec 16 12:50:30.104864 bash[2014]: Updated "/home/core/.ssh/authorized_keys" Dec 16 12:50:30.108687 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Dec 16 12:50:30.116245 systemd[1]: Starting sshkeys.service... Dec 16 12:50:30.190873 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Dec 16 12:50:30.198150 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Dec 16 12:50:30.453687 amazon-ssm-agent[1992]: Initializing new seelog logger Dec 16 12:50:30.464310 amazon-ssm-agent[1992]: New Seelog Logger Creation Complete Dec 16 12:50:30.465208 amazon-ssm-agent[1992]: 2025/12/16 12:50:30 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. 
Dec 16 12:50:30.465208 amazon-ssm-agent[1992]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Dec 16 12:50:30.475169 amazon-ssm-agent[1992]: 2025/12/16 12:50:30 processing appconfig overrides Dec 16 12:50:30.475527 amazon-ssm-agent[1992]: 2025/12/16 12:50:30 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Dec 16 12:50:30.475527 amazon-ssm-agent[1992]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Dec 16 12:50:30.475629 amazon-ssm-agent[1992]: 2025/12/16 12:50:30 processing appconfig overrides Dec 16 12:50:30.479264 amazon-ssm-agent[1992]: 2025/12/16 12:50:30 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Dec 16 12:50:30.479264 amazon-ssm-agent[1992]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Dec 16 12:50:30.479264 amazon-ssm-agent[1992]: 2025/12/16 12:50:30 processing appconfig overrides Dec 16 12:50:30.479424 amazon-ssm-agent[1992]: 2025-12-16 12:50:30.4754 INFO Proxy environment variables: Dec 16 12:50:30.496747 amazon-ssm-agent[1992]: 2025/12/16 12:50:30 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Dec 16 12:50:30.496747 amazon-ssm-agent[1992]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Dec 16 12:50:30.496880 amazon-ssm-agent[1992]: 2025/12/16 12:50:30 processing appconfig overrides Dec 16 12:50:30.502409 locksmithd[1989]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 16 12:50:30.528293 systemd[1]: Started systemd-hostnamed.service - Hostname Service. 
Dec 16 12:50:30.552040 dbus-daemon[1907]: [system] Successfully activated service 'org.freedesktop.hostname1' Dec 16 12:50:30.563086 dbus-daemon[1907]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.8' (uid=0 pid=1988 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Dec 16 12:50:30.579445 systemd[1]: Starting polkit.service - Authorization Manager... Dec 16 12:50:30.580294 amazon-ssm-agent[1992]: 2025-12-16 12:50:30.4754 INFO http_proxy: Dec 16 12:50:30.601425 coreos-metadata[2041]: Dec 16 12:50:30.601 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Dec 16 12:50:30.604964 coreos-metadata[2041]: Dec 16 12:50:30.603 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Dec 16 12:50:30.607843 coreos-metadata[2041]: Dec 16 12:50:30.607 INFO Fetch successful Dec 16 12:50:30.608083 coreos-metadata[2041]: Dec 16 12:50:30.608 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Dec 16 12:50:30.613134 coreos-metadata[2041]: Dec 16 12:50:30.612 INFO Fetch successful Dec 16 12:50:30.623305 unknown[2041]: wrote ssh authorized keys file for user: core Dec 16 12:50:30.681572 amazon-ssm-agent[1992]: 2025-12-16 12:50:30.4754 INFO no_proxy: Dec 16 12:50:30.741385 update-ssh-keys[2130]: Updated "/home/core/.ssh/authorized_keys" Dec 16 12:50:30.742379 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Dec 16 12:50:30.746861 systemd[1]: Finished sshkeys.service. 
Dec 16 12:50:30.787303 amazon-ssm-agent[1992]: 2025-12-16 12:50:30.4754 INFO https_proxy: Dec 16 12:50:30.806349 polkitd[2123]: Started polkitd version 126 Dec 16 12:50:30.823734 polkitd[2123]: Loading rules from directory /etc/polkit-1/rules.d Dec 16 12:50:30.826721 polkitd[2123]: Loading rules from directory /run/polkit-1/rules.d Dec 16 12:50:30.826799 polkitd[2123]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Dec 16 12:50:30.830290 polkitd[2123]: Loading rules from directory /usr/local/share/polkit-1/rules.d Dec 16 12:50:30.830352 polkitd[2123]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Dec 16 12:50:30.830402 polkitd[2123]: Loading rules from directory /usr/share/polkit-1/rules.d Dec 16 12:50:30.834090 polkitd[2123]: Finished loading, compiling and executing 2 rules Dec 16 12:50:30.837746 systemd[1]: Started polkit.service - Authorization Manager. Dec 16 12:50:30.843177 dbus-daemon[1907]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Dec 16 12:50:30.848530 polkitd[2123]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Dec 16 12:50:30.889921 amazon-ssm-agent[1992]: 2025-12-16 12:50:30.4756 INFO Checking if agent identity type OnPrem can be assumed Dec 16 12:50:30.923096 systemd-hostnamed[1988]: Hostname set to (transient) Dec 16 12:50:30.925168 systemd-resolved[1520]: System hostname changed to 'ip-172-31-17-11'. 
Dec 16 12:50:30.991972 amazon-ssm-agent[1992]: 2025-12-16 12:50:30.4758 INFO Checking if agent identity type EC2 can be assumed Dec 16 12:50:31.092023 amazon-ssm-agent[1992]: 2025-12-16 12:50:30.8107 INFO Agent will take identity from EC2 Dec 16 12:50:31.105075 containerd[1969]: time="2025-12-16T12:50:31Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Dec 16 12:50:31.105916 containerd[1969]: time="2025-12-16T12:50:31.105867451Z" level=info msg="starting containerd" revision=fcd43222d6b07379a4be9786bda52438f0dd16a1 version=v2.1.5 Dec 16 12:50:31.153623 amazon-ssm-agent[1992]: 2025/12/16 12:50:31 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Dec 16 12:50:31.153623 amazon-ssm-agent[1992]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Dec 16 12:50:31.153790 amazon-ssm-agent[1992]: 2025/12/16 12:50:31 processing appconfig overrides Dec 16 12:50:31.158831 containerd[1969]: time="2025-12-16T12:50:31.157411152Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="12.832µs" Dec 16 12:50:31.161151 containerd[1969]: time="2025-12-16T12:50:31.160802405Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Dec 16 12:50:31.161362 containerd[1969]: time="2025-12-16T12:50:31.161327131Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Dec 16 12:50:31.161446 containerd[1969]: time="2025-12-16T12:50:31.161433057Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Dec 16 12:50:31.161782 containerd[1969]: time="2025-12-16T12:50:31.161755351Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Dec 16 12:50:31.161958 containerd[1969]: 
time="2025-12-16T12:50:31.161940542Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 16 12:50:31.162442 containerd[1969]: time="2025-12-16T12:50:31.162416738Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 16 12:50:31.162547 containerd[1969]: time="2025-12-16T12:50:31.162530907Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 16 12:50:31.163767 containerd[1969]: time="2025-12-16T12:50:31.163739994Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 16 12:50:31.163874 containerd[1969]: time="2025-12-16T12:50:31.163857527Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 16 12:50:31.163965 containerd[1969]: time="2025-12-16T12:50:31.163948697Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 16 12:50:31.164070 containerd[1969]: time="2025-12-16T12:50:31.164054887Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Dec 16 12:50:31.165185 containerd[1969]: time="2025-12-16T12:50:31.165159878Z" level=info msg="skip loading plugin" error="EROFS unsupported, please `modprobe erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Dec 16 12:50:31.165290 containerd[1969]: time="2025-12-16T12:50:31.165275265Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Dec 16 12:50:31.165841 containerd[1969]: 
time="2025-12-16T12:50:31.165811565Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Dec 16 12:50:31.166881 containerd[1969]: time="2025-12-16T12:50:31.166859352Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 16 12:50:31.167049 containerd[1969]: time="2025-12-16T12:50:31.166998145Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 16 12:50:31.167593 containerd[1969]: time="2025-12-16T12:50:31.167448657Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Dec 16 12:50:31.167593 containerd[1969]: time="2025-12-16T12:50:31.167522955Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Dec 16 12:50:31.168657 containerd[1969]: time="2025-12-16T12:50:31.168634670Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Dec 16 12:50:31.169175 containerd[1969]: time="2025-12-16T12:50:31.169128115Z" level=info msg="metadata content store policy set" policy=shared Dec 16 12:50:31.176117 containerd[1969]: time="2025-12-16T12:50:31.175942312Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Dec 16 12:50:31.176117 containerd[1969]: time="2025-12-16T12:50:31.176018640Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Dec 16 12:50:31.176275 containerd[1969]: time="2025-12-16T12:50:31.176133973Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Dec 16 12:50:31.176275 containerd[1969]: 
time="2025-12-16T12:50:31.176153303Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Dec 16 12:50:31.176275 containerd[1969]: time="2025-12-16T12:50:31.176169269Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Dec 16 12:50:31.176275 containerd[1969]: time="2025-12-16T12:50:31.176183716Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Dec 16 12:50:31.176275 containerd[1969]: time="2025-12-16T12:50:31.176199574Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Dec 16 12:50:31.176275 containerd[1969]: time="2025-12-16T12:50:31.176214052Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Dec 16 12:50:31.176275 containerd[1969]: time="2025-12-16T12:50:31.176230642Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Dec 16 12:50:31.176275 containerd[1969]: time="2025-12-16T12:50:31.176246837Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Dec 16 12:50:31.176275 containerd[1969]: time="2025-12-16T12:50:31.176263897Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Dec 16 12:50:31.176561 containerd[1969]: time="2025-12-16T12:50:31.176279290Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Dec 16 12:50:31.176561 containerd[1969]: time="2025-12-16T12:50:31.176293578Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Dec 16 12:50:31.176561 containerd[1969]: time="2025-12-16T12:50:31.176312680Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Dec 16 12:50:31.176561 containerd[1969]: 
time="2025-12-16T12:50:31.176469512Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Dec 16 12:50:31.176561 containerd[1969]: time="2025-12-16T12:50:31.176493940Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Dec 16 12:50:31.176561 containerd[1969]: time="2025-12-16T12:50:31.176515370Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Dec 16 12:50:31.176561 containerd[1969]: time="2025-12-16T12:50:31.176529994Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Dec 16 12:50:31.176561 containerd[1969]: time="2025-12-16T12:50:31.176546368Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Dec 16 12:50:31.176816 containerd[1969]: time="2025-12-16T12:50:31.176562600Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Dec 16 12:50:31.176816 containerd[1969]: time="2025-12-16T12:50:31.176586959Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Dec 16 12:50:31.176816 containerd[1969]: time="2025-12-16T12:50:31.176602309Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Dec 16 12:50:31.176816 containerd[1969]: time="2025-12-16T12:50:31.176619557Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Dec 16 12:50:31.176816 containerd[1969]: time="2025-12-16T12:50:31.176635911Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Dec 16 12:50:31.176816 containerd[1969]: time="2025-12-16T12:50:31.176651395Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Dec 16 12:50:31.176816 containerd[1969]: time="2025-12-16T12:50:31.176688143Z" level=info msg="loading plugin" 
id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Dec 16 12:50:31.176816 containerd[1969]: time="2025-12-16T12:50:31.176744332Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Dec 16 12:50:31.176816 containerd[1969]: time="2025-12-16T12:50:31.176763658Z" level=info msg="Start snapshots syncer" Dec 16 12:50:31.176816 containerd[1969]: time="2025-12-16T12:50:31.176786535Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Dec 16 12:50:31.182724 containerd[1969]: time="2025-12-16T12:50:31.182622447Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnp
rivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Dec 16 12:50:31.182963 containerd[1969]: time="2025-12-16T12:50:31.182760758Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Dec 16 12:50:31.182963 containerd[1969]: time="2025-12-16T12:50:31.182856118Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Dec 16 12:50:31.183439 containerd[1969]: time="2025-12-16T12:50:31.183125148Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Dec 16 12:50:31.183439 containerd[1969]: time="2025-12-16T12:50:31.183182114Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Dec 16 12:50:31.183439 containerd[1969]: time="2025-12-16T12:50:31.183201588Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Dec 16 12:50:31.183439 containerd[1969]: time="2025-12-16T12:50:31.183218056Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Dec 16 12:50:31.184999 containerd[1969]: time="2025-12-16T12:50:31.184935254Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Dec 16 12:50:31.184999 containerd[1969]: time="2025-12-16T12:50:31.184970757Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Dec 16 12:50:31.185114 containerd[1969]: time="2025-12-16T12:50:31.185007954Z" level=info msg="loading plugin" 
id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Dec 16 12:50:31.185114 containerd[1969]: time="2025-12-16T12:50:31.185026325Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Dec 16 12:50:31.185114 containerd[1969]: time="2025-12-16T12:50:31.185043294Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Dec 16 12:50:31.185217 containerd[1969]: time="2025-12-16T12:50:31.185115339Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Dec 16 12:50:31.185217 containerd[1969]: time="2025-12-16T12:50:31.185137263Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Dec 16 12:50:31.185294 containerd[1969]: time="2025-12-16T12:50:31.185235293Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Dec 16 12:50:31.185294 containerd[1969]: time="2025-12-16T12:50:31.185252237Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Dec 16 12:50:31.185294 containerd[1969]: time="2025-12-16T12:50:31.185268385Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Dec 16 12:50:31.185294 containerd[1969]: time="2025-12-16T12:50:31.185283964Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Dec 16 12:50:31.185426 containerd[1969]: time="2025-12-16T12:50:31.185315093Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Dec 16 12:50:31.185426 containerd[1969]: time="2025-12-16T12:50:31.185334299Z" level=info msg="runtime interface created"
Dec 16 12:50:31.185426 containerd[1969]: time="2025-12-16T12:50:31.185342310Z" level=info msg="created NRI interface"
Dec 16 12:50:31.185426 containerd[1969]: time="2025-12-16T12:50:31.185355694Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Dec 16 12:50:31.185426 containerd[1969]: time="2025-12-16T12:50:31.185390638Z" level=info msg="Connect containerd service"
Dec 16 12:50:31.185426 containerd[1969]: time="2025-12-16T12:50:31.185420438Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Dec 16 12:50:31.188592 containerd[1969]: time="2025-12-16T12:50:31.188500546Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Dec 16 12:50:31.191088 amazon-ssm-agent[1992]: 2025-12-16 12:50:30.8207 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.3.0.0
Dec 16 12:50:31.191878 amazon-ssm-agent[1992]: 2025-12-16 12:50:30.8207 INFO [amazon-ssm-agent] OS: linux, Arch: amd64
Dec 16 12:50:31.191878 amazon-ssm-agent[1992]: 2025-12-16 12:50:30.8207 INFO [amazon-ssm-agent] Starting Core Agent
Dec 16 12:50:31.191878 amazon-ssm-agent[1992]: 2025-12-16 12:50:30.8207 INFO [amazon-ssm-agent] Registrar detected. Attempting registration
Dec 16 12:50:31.191878 amazon-ssm-agent[1992]: 2025-12-16 12:50:30.8207 INFO [Registrar] Starting registrar module
Dec 16 12:50:31.191878 amazon-ssm-agent[1992]: 2025-12-16 12:50:30.8294 INFO [EC2Identity] Checking disk for registration info
Dec 16 12:50:31.191878 amazon-ssm-agent[1992]: 2025-12-16 12:50:30.8295 INFO [EC2Identity] No registration info found for ec2 instance, attempting registration
Dec 16 12:50:31.191878 amazon-ssm-agent[1992]: 2025-12-16 12:50:30.8295 INFO [EC2Identity] Generating registration keypair
Dec 16 12:50:31.191878 amazon-ssm-agent[1992]: 2025-12-16 12:50:31.0948 INFO [EC2Identity] Checking write access before registering
Dec 16 12:50:31.191878 amazon-ssm-agent[1992]: 2025-12-16 12:50:31.0967 INFO [EC2Identity] Registering EC2 instance with Systems Manager
Dec 16 12:50:31.191878 amazon-ssm-agent[1992]: 2025-12-16 12:50:31.1534 INFO [EC2Identity] EC2 registration was successful.
Dec 16 12:50:31.191878 amazon-ssm-agent[1992]: 2025-12-16 12:50:31.1534 INFO [amazon-ssm-agent] Registration attempted. Resuming core agent startup.
Dec 16 12:50:31.191878 amazon-ssm-agent[1992]: 2025-12-16 12:50:31.1535 INFO [CredentialRefresher] credentialRefresher has started
Dec 16 12:50:31.191878 amazon-ssm-agent[1992]: 2025-12-16 12:50:31.1535 INFO [CredentialRefresher] Starting credentials refresher loop
Dec 16 12:50:31.191878 amazon-ssm-agent[1992]: 2025-12-16 12:50:31.1891 INFO EC2RoleProvider Successfully connected with instance profile role credentials
Dec 16 12:50:31.191878 amazon-ssm-agent[1992]: 2025-12-16 12:50:31.1913 INFO [CredentialRefresher] Credentials ready
Dec 16 12:50:31.241220 tar[1942]: linux-amd64/README.md
Dec 16 12:50:31.278584 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Dec 16 12:50:31.289250 amazon-ssm-agent[1992]: 2025-12-16 12:50:31.1916 INFO [CredentialRefresher] Next credential rotation will be in 29.999958910666667 minutes
Dec 16 12:50:31.426664 sshd_keygen[1943]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Dec 16 12:50:31.450488 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Dec 16 12:50:31.453313 systemd[1]: Starting issuegen.service - Generate /run/issue...
Dec 16 12:50:31.471612 systemd[1]: issuegen.service: Deactivated successfully.
Dec 16 12:50:31.471961 systemd[1]: Finished issuegen.service - Generate /run/issue.
Dec 16 12:50:31.477474 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Dec 16 12:50:31.502894 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Dec 16 12:50:31.505958 systemd[1]: Started getty@tty1.service - Getty on tty1.
Dec 16 12:50:31.510268 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Dec 16 12:50:31.510892 systemd[1]: Reached target getty.target - Login Prompts.
Dec 16 12:50:31.548653 containerd[1969]: time="2025-12-16T12:50:31.548565371Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Dec 16 12:50:31.548856 containerd[1969]: time="2025-12-16T12:50:31.548763298Z" level=info msg="Start subscribing containerd event"
Dec 16 12:50:31.548969 containerd[1969]: time="2025-12-16T12:50:31.548942332Z" level=info msg="Start recovering state"
Dec 16 12:50:31.549081 containerd[1969]: time="2025-12-16T12:50:31.549071015Z" level=info msg="Start event monitor"
Dec 16 12:50:31.549139 containerd[1969]: time="2025-12-16T12:50:31.549129319Z" level=info msg="Start cni network conf syncer for default"
Dec 16 12:50:31.549216 containerd[1969]: time="2025-12-16T12:50:31.549188178Z" level=info msg="Start streaming server"
Dec 16 12:50:31.549216 containerd[1969]: time="2025-12-16T12:50:31.549208693Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Dec 16 12:50:31.549216 containerd[1969]: time="2025-12-16T12:50:31.549216017Z" level=info msg="runtime interface starting up..."
Dec 16 12:50:31.549216 containerd[1969]: time="2025-12-16T12:50:31.549230248Z" level=info msg="starting plugins..."
Dec 16 12:50:31.549410 containerd[1969]: time="2025-12-16T12:50:31.549244162Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Dec 16 12:50:31.549470 containerd[1969]: time="2025-12-16T12:50:31.549437916Z" level=info msg=serving... address=/run/containerd/containerd.sock
Dec 16 12:50:31.549529 containerd[1969]: time="2025-12-16T12:50:31.549513662Z" level=info msg="containerd successfully booted in 0.450890s"
Dec 16 12:50:31.550096 systemd[1]: Started containerd.service - containerd container runtime.
Dec 16 12:50:32.205680 amazon-ssm-agent[1992]: 2025-12-16 12:50:32.2054 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process
Dec 16 12:50:32.306090 amazon-ssm-agent[1992]: 2025-12-16 12:50:32.2083 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2188) started
Dec 16 12:50:32.406954 amazon-ssm-agent[1992]: 2025-12-16 12:50:32.2083 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds
Dec 16 12:50:34.664068 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 16 12:50:34.665060 systemd[1]: Reached target multi-user.target - Multi-User System.
Dec 16 12:50:34.667710 systemd[1]: Startup finished in 3.465s (kernel) + 10.446s (initrd) + 12.606s (userspace) = 26.517s.
Dec 16 12:50:34.673575 (kubelet)[2205]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 16 12:50:36.161472 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Dec 16 12:50:36.164185 systemd[1]: Started sshd@0-172.31.17.11:22-147.75.109.163:42720.service - OpenSSH per-connection server daemon (147.75.109.163:42720).
Dec 16 12:50:36.416578 sshd[2215]: Accepted publickey for core from 147.75.109.163 port 42720 ssh2: RSA SHA256:U/DoHe1xJfGISnzLa/L/V6WMQEeU/gRWF7ew5p5tFyo
Dec 16 12:50:36.419192 sshd-session[2215]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 12:50:36.431045 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Dec 16 12:50:36.433173 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Dec 16 12:50:36.442982 systemd-logind[1926]: New session 1 of user core.
Dec 16 12:50:36.456351 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Dec 16 12:50:36.459649 systemd[1]: Starting user@500.service - User Manager for UID 500...
Dec 16 12:50:36.480714 (systemd)[2221]: pam_unix(systemd-user:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 12:50:36.483951 systemd-logind[1926]: New session 2 of user core.
Dec 16 12:50:36.555625 kubelet[2205]: E1216 12:50:36.555577 2205 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 16 12:50:36.557796 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 16 12:50:36.557941 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 16 12:50:36.558282 systemd[1]: kubelet.service: Consumed 1.063s CPU time, 268.4M memory peak.
Dec 16 12:50:36.632720 systemd[2221]: Queued start job for default target default.target.
Dec 16 12:50:36.636939 systemd[2221]: Created slice app.slice - User Application Slice.
Dec 16 12:50:36.636976 systemd[2221]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories.
Dec 16 12:50:36.636992 systemd[2221]: Reached target paths.target - Paths.
Dec 16 12:50:36.637043 systemd[2221]: Reached target timers.target - Timers.
Dec 16 12:50:36.638267 systemd[2221]: Starting dbus.socket - D-Bus User Message Bus Socket...
Dec 16 12:50:36.641066 systemd[2221]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories...
Dec 16 12:50:36.659061 systemd[2221]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Dec 16 12:50:36.659147 systemd[2221]: Reached target sockets.target - Sockets.
Dec 16 12:50:36.659932 systemd[2221]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories.
Dec 16 12:50:36.660039 systemd[2221]: Reached target basic.target - Basic System.
Dec 16 12:50:36.660090 systemd[2221]: Reached target default.target - Main User Target.
Dec 16 12:50:36.660118 systemd[2221]: Startup finished in 169ms.
Dec 16 12:50:36.660309 systemd[1]: Started user@500.service - User Manager for UID 500.
Dec 16 12:50:36.667195 systemd[1]: Started session-1.scope - Session 1 of User core.
Dec 16 12:50:36.747070 systemd[1]: Started sshd@1-172.31.17.11:22-147.75.109.163:42724.service - OpenSSH per-connection server daemon (147.75.109.163:42724).
Dec 16 12:50:36.900356 sshd[2237]: Accepted publickey for core from 147.75.109.163 port 42724 ssh2: RSA SHA256:U/DoHe1xJfGISnzLa/L/V6WMQEeU/gRWF7ew5p5tFyo
Dec 16 12:50:36.901787 sshd-session[2237]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 12:50:36.907598 systemd-logind[1926]: New session 3 of user core.
Dec 16 12:50:36.914169 systemd[1]: Started session-3.scope - Session 3 of User core.
Dec 16 12:50:37.855389 systemd-resolved[1520]: Clock change detected. Flushing caches.
Dec 16 12:50:37.870600 sshd[2241]: Connection closed by 147.75.109.163 port 42724
Dec 16 12:50:37.871142 sshd-session[2237]: pam_unix(sshd:session): session closed for user core
Dec 16 12:50:37.874861 systemd[1]: sshd@1-172.31.17.11:22-147.75.109.163:42724.service: Deactivated successfully.
Dec 16 12:50:37.876502 systemd[1]: session-3.scope: Deactivated successfully.
Dec 16 12:50:37.877943 systemd-logind[1926]: Session 3 logged out. Waiting for processes to exit.
Dec 16 12:50:37.879073 systemd-logind[1926]: Removed session 3.
Dec 16 12:50:37.902187 systemd[1]: Started sshd@2-172.31.17.11:22-147.75.109.163:42728.service - OpenSSH per-connection server daemon (147.75.109.163:42728).
Dec 16 12:50:38.061375 sshd[2247]: Accepted publickey for core from 147.75.109.163 port 42728 ssh2: RSA SHA256:U/DoHe1xJfGISnzLa/L/V6WMQEeU/gRWF7ew5p5tFyo
Dec 16 12:50:38.062845 sshd-session[2247]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 12:50:38.068936 systemd-logind[1926]: New session 4 of user core.
Dec 16 12:50:38.076153 systemd[1]: Started session-4.scope - Session 4 of User core.
Dec 16 12:50:38.129011 sshd[2251]: Connection closed by 147.75.109.163 port 42728
Dec 16 12:50:38.129897 sshd-session[2247]: pam_unix(sshd:session): session closed for user core
Dec 16 12:50:38.133894 systemd[1]: sshd@2-172.31.17.11:22-147.75.109.163:42728.service: Deactivated successfully.
Dec 16 12:50:38.135706 systemd[1]: session-4.scope: Deactivated successfully.
Dec 16 12:50:38.137842 systemd-logind[1926]: Session 4 logged out. Waiting for processes to exit.
Dec 16 12:50:38.138826 systemd-logind[1926]: Removed session 4.
Dec 16 12:50:38.161720 systemd[1]: Started sshd@3-172.31.17.11:22-147.75.109.163:42740.service - OpenSSH per-connection server daemon (147.75.109.163:42740).
Dec 16 12:50:38.322326 sshd[2257]: Accepted publickey for core from 147.75.109.163 port 42740 ssh2: RSA SHA256:U/DoHe1xJfGISnzLa/L/V6WMQEeU/gRWF7ew5p5tFyo
Dec 16 12:50:38.323784 sshd-session[2257]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 12:50:38.330028 systemd-logind[1926]: New session 5 of user core.
Dec 16 12:50:38.337153 systemd[1]: Started session-5.scope - Session 5 of User core.
Dec 16 12:50:38.392250 sshd[2261]: Connection closed by 147.75.109.163 port 42740
Dec 16 12:50:38.393092 sshd-session[2257]: pam_unix(sshd:session): session closed for user core
Dec 16 12:50:38.397363 systemd[1]: sshd@3-172.31.17.11:22-147.75.109.163:42740.service: Deactivated successfully.
Dec 16 12:50:38.399352 systemd[1]: session-5.scope: Deactivated successfully.
Dec 16 12:50:38.400130 systemd-logind[1926]: Session 5 logged out. Waiting for processes to exit.
Dec 16 12:50:38.401813 systemd-logind[1926]: Removed session 5.
Dec 16 12:50:38.428977 systemd[1]: Started sshd@4-172.31.17.11:22-147.75.109.163:42752.service - OpenSSH per-connection server daemon (147.75.109.163:42752).
Dec 16 12:50:38.585012 sshd[2267]: Accepted publickey for core from 147.75.109.163 port 42752 ssh2: RSA SHA256:U/DoHe1xJfGISnzLa/L/V6WMQEeU/gRWF7ew5p5tFyo
Dec 16 12:50:38.586402 sshd-session[2267]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 12:50:38.592059 systemd-logind[1926]: New session 6 of user core.
Dec 16 12:50:38.598175 systemd[1]: Started session-6.scope - Session 6 of User core.
Dec 16 12:50:38.714125 sudo[2272]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Dec 16 12:50:38.714437 sudo[2272]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 16 12:50:38.727968 sudo[2272]: pam_unix(sudo:session): session closed for user root
Dec 16 12:50:38.749676 sshd[2271]: Connection closed by 147.75.109.163 port 42752
Dec 16 12:50:38.750381 sshd-session[2267]: pam_unix(sshd:session): session closed for user core
Dec 16 12:50:38.754371 systemd[1]: sshd@4-172.31.17.11:22-147.75.109.163:42752.service: Deactivated successfully.
Dec 16 12:50:38.756302 systemd[1]: session-6.scope: Deactivated successfully.
Dec 16 12:50:38.758376 systemd-logind[1926]: Session 6 logged out. Waiting for processes to exit.
Dec 16 12:50:38.759464 systemd-logind[1926]: Removed session 6.
Dec 16 12:50:38.782016 systemd[1]: Started sshd@5-172.31.17.11:22-147.75.109.163:42768.service - OpenSSH per-connection server daemon (147.75.109.163:42768).
Dec 16 12:50:38.929188 sshd[2279]: Accepted publickey for core from 147.75.109.163 port 42768 ssh2: RSA SHA256:U/DoHe1xJfGISnzLa/L/V6WMQEeU/gRWF7ew5p5tFyo
Dec 16 12:50:38.930635 sshd-session[2279]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 12:50:38.935971 systemd-logind[1926]: New session 7 of user core.
Dec 16 12:50:38.950155 systemd[1]: Started session-7.scope - Session 7 of User core.
Dec 16 12:50:38.984617 sudo[2285]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Dec 16 12:50:38.984926 sudo[2285]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 16 12:50:38.989400 sudo[2285]: pam_unix(sudo:session): session closed for user root
Dec 16 12:50:38.996160 sudo[2284]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Dec 16 12:50:38.996441 sudo[2284]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 16 12:50:39.004494 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Dec 16 12:50:39.051000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1
Dec 16 12:50:39.052951 augenrules[2309]: No rules
Dec 16 12:50:39.053375 kernel: kauditd_printk_skb: 136 callbacks suppressed
Dec 16 12:50:39.053403 kernel: audit: type=1305 audit(1765889439.051:242): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1
Dec 16 12:50:39.051000 audit[2309]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fff05d9d4e0 a2=420 a3=0 items=0 ppid=2290 pid=2309 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:50:39.056610 systemd[1]: audit-rules.service: Deactivated successfully.
Dec 16 12:50:39.057390 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Dec 16 12:50:39.060285 kernel: audit: type=1300 audit(1765889439.051:242): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fff05d9d4e0 a2=420 a3=0 items=0 ppid=2290 pid=2309 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:50:39.059916 sudo[2284]: pam_unix(sudo:session): session closed for user root
Dec 16 12:50:39.051000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Dec 16 12:50:39.057000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:50:39.065052 kernel: audit: type=1327 audit(1765889439.051:242): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Dec 16 12:50:39.065108 kernel: audit: type=1130 audit(1765889439.057:243): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:50:39.057000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:50:39.065936 kernel: audit: type=1131 audit(1765889439.057:244): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:50:39.059000 audit[2284]: USER_END pid=2284 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Dec 16 12:50:39.068930 kernel: audit: type=1106 audit(1765889439.059:245): pid=2284 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Dec 16 12:50:39.059000 audit[2284]: CRED_DISP pid=2284 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Dec 16 12:50:39.074487 kernel: audit: type=1104 audit(1765889439.059:246): pid=2284 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Dec 16 12:50:39.081196 sshd[2283]: Connection closed by 147.75.109.163 port 42768
Dec 16 12:50:39.081817 sshd-session[2279]: pam_unix(sshd:session): session closed for user core
Dec 16 12:50:39.082000 audit[2279]: USER_END pid=2279 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Dec 16 12:50:39.085671 systemd-logind[1926]: Session 7 logged out. Waiting for processes to exit.
Dec 16 12:50:39.087787 systemd[1]: sshd@5-172.31.17.11:22-147.75.109.163:42768.service: Deactivated successfully.
Dec 16 12:50:39.092972 kernel: audit: type=1106 audit(1765889439.082:247): pid=2279 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Dec 16 12:50:39.093042 kernel: audit: type=1104 audit(1765889439.082:248): pid=2279 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Dec 16 12:50:39.082000 audit[2279]: CRED_DISP pid=2279 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Dec 16 12:50:39.090009 systemd[1]: session-7.scope: Deactivated successfully.
Dec 16 12:50:39.092313 systemd-logind[1926]: Removed session 7.
Dec 16 12:50:39.087000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-172.31.17.11:22-147.75.109.163:42768 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:50:39.098068 kernel: audit: type=1131 audit(1765889439.087:249): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-172.31.17.11:22-147.75.109.163:42768 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:50:39.113000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-172.31.17.11:22-147.75.109.163:42784 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:50:39.113892 systemd[1]: Started sshd@6-172.31.17.11:22-147.75.109.163:42784.service - OpenSSH per-connection server daemon (147.75.109.163:42784).
Dec 16 12:50:39.264000 audit[2318]: USER_ACCT pid=2318 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Dec 16 12:50:39.265817 sshd[2318]: Accepted publickey for core from 147.75.109.163 port 42784 ssh2: RSA SHA256:U/DoHe1xJfGISnzLa/L/V6WMQEeU/gRWF7ew5p5tFyo
Dec 16 12:50:39.265000 audit[2318]: CRED_ACQ pid=2318 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Dec 16 12:50:39.265000 audit[2318]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe6b81c300 a2=3 a3=0 items=0 ppid=1 pid=2318 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:50:39.265000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Dec 16 12:50:39.266561 sshd-session[2318]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 12:50:39.270955 systemd-logind[1926]: New session 8 of user core.
Dec 16 12:50:39.274067 systemd[1]: Started session-8.scope - Session 8 of User core.
Dec 16 12:50:39.275000 audit[2318]: USER_START pid=2318 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Dec 16 12:50:39.277000 audit[2322]: CRED_ACQ pid=2322 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Dec 16 12:50:39.314000 audit[2323]: USER_ACCT pid=2323 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Dec 16 12:50:39.315213 sudo[2323]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Dec 16 12:50:39.314000 audit[2323]: CRED_REFR pid=2323 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Dec 16 12:50:39.314000 audit[2323]: USER_START pid=2323 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Dec 16 12:50:39.315510 sudo[2323]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 16 12:50:40.682050 systemd[1]: Starting docker.service - Docker Application Container Engine...
Dec 16 12:50:40.693386 (dockerd)[2344]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Dec 16 12:50:41.820580 dockerd[2344]: time="2025-12-16T12:50:41.820297713Z" level=info msg="Starting up"
Dec 16 12:50:41.823007 dockerd[2344]: time="2025-12-16T12:50:41.822967590Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Dec 16 12:50:41.836744 dockerd[2344]: time="2025-12-16T12:50:41.836693302Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s
Dec 16 12:50:41.883610 systemd[1]: var-lib-docker-metacopy\x2dcheck2177074742-merged.mount: Deactivated successfully.
Dec 16 12:50:41.908626 dockerd[2344]: time="2025-12-16T12:50:41.908426967Z" level=info msg="Loading containers: start."
Dec 16 12:50:41.956937 kernel: Initializing XFRM netlink socket
Dec 16 12:50:42.090000 audit[2393]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=2393 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 16 12:50:42.090000 audit[2393]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffea63d5d00 a2=0 a3=0 items=0 ppid=2344 pid=2393 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:50:42.090000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552
Dec 16 12:50:42.093000 audit[2395]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=2395 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 16 12:50:42.093000 audit[2395]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffd2c58b6f0 a2=0 a3=0 items=0 ppid=2344 pid=2395 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:50:42.093000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552
Dec 16 12:50:42.095000 audit[2397]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=2397 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 16 12:50:42.095000 audit[2397]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffed0d176a0 a2=0 a3=0 items=0 ppid=2344 pid=2397 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:50:42.095000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244
Dec 16 12:50:42.097000 audit[2399]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=2399 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 16 12:50:42.097000 audit[2399]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff48d8e3f0 a2=0 a3=0 items=0 ppid=2344 pid=2399 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:50:42.097000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745
Dec 16 12:50:42.099000 audit[2401]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_chain pid=2401 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 16 12:50:42.099000 audit[2401]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffda8735610 a2=0 a3=0 items=0 ppid=2344 pid=2401 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:50:42.099000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354
Dec 16 12:50:42.101000 audit[2403]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_chain pid=2403 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 16 12:50:42.101000 audit[2403]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffe80ade550 a2=0 a3=0 items=0 ppid=2344 pid=2403 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:50:42.101000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31
Dec 16 12:50:42.104000 audit[2405]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=2405 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 16 12:50:42.104000 audit[2405]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7fff5d522ee0 a2=0 a3=0 items=0 ppid=2344 pid=2405 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:50:42.104000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32
Dec 16 12:50:42.108000 audit[2407]: NETFILTER_CFG table=nat:9 family=2 entries=2 op=nft_register_chain pid=2407 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 16 12:50:42.108000 audit[2407]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7fff96209ed0 a2=0 a3=0 items=0 ppid=2344 pid=2407 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:50:42.108000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552
Dec 16 12:50:42.172000 audit[2410]: NETFILTER_CFG table=nat:10 family=2 entries=2 op=nft_register_chain pid=2410 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 16 12:50:42.172000 audit[2410]: SYSCALL arch=c000003e syscall=46 success=yes exit=472 a0=3 a1=7ffff2ffdb70 a2=0 a3=0 items=0 ppid=2344 pid=2410 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:50:42.172000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38
Dec 16 12:50:42.175000 audit[2412]: NETFILTER_CFG table=filter:11 family=2 entries=2 op=nft_register_chain pid=2412 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 16 12:50:42.175000 audit[2412]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7fffd09c8290 a2=0 a3=0 items=0 ppid=2344 pid=2412 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:50:42.175000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244
Dec 16 12:50:42.177000 audit[2414]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=2414 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 16 12:50:42.177000 audit[2414]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7ffd7ebe72f0 a2=0 a3=0 items=0 ppid=2344 pid=2414 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:50:42.177000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745
Dec 16 12:50:42.179000 audit[2416]: NETFILTER_CFG table=filter:13 family=2 entries=1 op=nft_register_rule pid=2416 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 16 12:50:42.179000 audit[2416]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7fff47acf4f0 a2=0 a3=0 items=0 ppid=2344 pid=2416 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:50:42.179000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31
Dec 16 12:50:42.182000 audit[2418]: NETFILTER_CFG table=filter:14 family=2 entries=1 op=nft_register_rule pid=2418 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 16 12:50:42.182000 audit[2418]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7ffd6fbd4430 a2=0 a3=0 items=0 ppid=2344 pid=2418 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:50:42.182000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354
Dec 16 12:50:42.223000 audit[2448]: NETFILTER_CFG table=nat:15 family=10 entries=2 op=nft_register_chain pid=2448 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Dec 16 12:50:42.223000 audit[2448]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffd52091d40 a2=0 a3=0 items=0 ppid=2344 pid=2448 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:50:42.223000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552
Dec 16 12:50:42.225000 audit[2450]: NETFILTER_CFG table=filter:16 family=10 entries=2 op=nft_register_chain pid=2450 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Dec 16 12:50:42.225000 audit[2450]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffd2c024260 a2=0 a3=0 items=0 ppid=2344 pid=2450 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:50:42.225000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552
Dec 16 12:50:42.228000 audit[2452]: NETFILTER_CFG table=filter:17 family=10 entries=1 op=nft_register_chain pid=2452 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Dec 16 12:50:42.228000 audit[2452]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd27a627f0 a2=0 a3=0 items=0 ppid=2344 pid=2452 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:50:42.228000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244
Dec 16 12:50:42.230000 audit[2454]: NETFILTER_CFG table=filter:18 family=10 entries=1 op=nft_register_chain pid=2454 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Dec 16 12:50:42.230000 audit[2454]: SYSCALL arch=c000003e syscall=46
success=yes exit=100 a0=3 a1=7ffc318ded90 a2=0 a3=0 items=0 ppid=2344 pid=2454 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:50:42.230000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Dec 16 12:50:42.232000 audit[2456]: NETFILTER_CFG table=filter:19 family=10 entries=1 op=nft_register_chain pid=2456 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:50:42.232000 audit[2456]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fff885c0910 a2=0 a3=0 items=0 ppid=2344 pid=2456 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:50:42.232000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Dec 16 12:50:42.235000 audit[2458]: NETFILTER_CFG table=filter:20 family=10 entries=1 op=nft_register_chain pid=2458 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:50:42.235000 audit[2458]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffd095dd310 a2=0 a3=0 items=0 ppid=2344 pid=2458 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:50:42.235000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 16 12:50:42.237000 audit[2460]: NETFILTER_CFG table=filter:21 family=10 entries=1 op=nft_register_chain pid=2460 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:50:42.237000 audit[2460]: SYSCALL arch=c000003e 
syscall=46 success=yes exit=112 a0=3 a1=7ffd80ff23d0 a2=0 a3=0 items=0 ppid=2344 pid=2460 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:50:42.237000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 16 12:50:42.240000 audit[2462]: NETFILTER_CFG table=nat:22 family=10 entries=2 op=nft_register_chain pid=2462 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:50:42.240000 audit[2462]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7ffd2e7994f0 a2=0 a3=0 items=0 ppid=2344 pid=2462 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:50:42.240000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Dec 16 12:50:42.242000 audit[2464]: NETFILTER_CFG table=nat:23 family=10 entries=2 op=nft_register_chain pid=2464 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:50:42.242000 audit[2464]: SYSCALL arch=c000003e syscall=46 success=yes exit=484 a0=3 a1=7fff988bea50 a2=0 a3=0 items=0 ppid=2344 pid=2464 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:50:42.242000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003A3A312F313238 Dec 16 12:50:42.245000 audit[2466]: NETFILTER_CFG 
table=filter:24 family=10 entries=2 op=nft_register_chain pid=2466 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:50:42.245000 audit[2466]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7fff56bab3e0 a2=0 a3=0 items=0 ppid=2344 pid=2466 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:50:42.245000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Dec 16 12:50:42.247000 audit[2468]: NETFILTER_CFG table=filter:25 family=10 entries=1 op=nft_register_rule pid=2468 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:50:42.247000 audit[2468]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7ffcef2d5500 a2=0 a3=0 items=0 ppid=2344 pid=2468 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:50:42.247000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Dec 16 12:50:42.249000 audit[2470]: NETFILTER_CFG table=filter:26 family=10 entries=1 op=nft_register_rule pid=2470 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:50:42.249000 audit[2470]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7ffca6a65ae0 a2=0 a3=0 items=0 ppid=2344 pid=2470 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:50:42.249000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 16 12:50:42.251000 audit[2472]: NETFILTER_CFG table=filter:27 family=10 entries=1 op=nft_register_rule pid=2472 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:50:42.251000 audit[2472]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7ffcb96582c0 a2=0 a3=0 items=0 ppid=2344 pid=2472 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:50:42.251000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Dec 16 12:50:42.257000 audit[2477]: NETFILTER_CFG table=filter:28 family=2 entries=1 op=nft_register_chain pid=2477 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:50:42.257000 audit[2477]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffd98141de0 a2=0 a3=0 items=0 ppid=2344 pid=2477 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:50:42.257000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Dec 16 12:50:42.259000 audit[2479]: NETFILTER_CFG table=filter:29 family=2 entries=1 op=nft_register_rule pid=2479 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:50:42.259000 audit[2479]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffd7144e750 a2=0 a3=0 items=0 ppid=2344 pid=2479 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 
12:50:42.259000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Dec 16 12:50:42.262000 audit[2481]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=2481 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:50:42.262000 audit[2481]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7fffb2c471b0 a2=0 a3=0 items=0 ppid=2344 pid=2481 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:50:42.262000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Dec 16 12:50:42.264000 audit[2483]: NETFILTER_CFG table=filter:31 family=10 entries=1 op=nft_register_chain pid=2483 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:50:42.264000 audit[2483]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffd88348fa0 a2=0 a3=0 items=0 ppid=2344 pid=2483 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:50:42.264000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Dec 16 12:50:42.266000 audit[2485]: NETFILTER_CFG table=filter:32 family=10 entries=1 op=nft_register_rule pid=2485 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:50:42.266000 audit[2485]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffca9d04530 a2=0 a3=0 items=0 ppid=2344 pid=2485 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:50:42.266000 
audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Dec 16 12:50:42.269000 audit[2487]: NETFILTER_CFG table=filter:33 family=10 entries=1 op=nft_register_rule pid=2487 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:50:42.269000 audit[2487]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffe2c4e0de0 a2=0 a3=0 items=0 ppid=2344 pid=2487 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:50:42.269000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Dec 16 12:50:42.286603 (udev-worker)[2366]: Network interface NamePolicy= disabled on kernel command line. Dec 16 12:50:42.296000 audit[2492]: NETFILTER_CFG table=nat:34 family=2 entries=2 op=nft_register_chain pid=2492 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:50:42.296000 audit[2492]: SYSCALL arch=c000003e syscall=46 success=yes exit=520 a0=3 a1=7ffe1a4be340 a2=0 a3=0 items=0 ppid=2344 pid=2492 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:50:42.296000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Dec 16 12:50:42.299000 audit[2495]: NETFILTER_CFG table=nat:35 family=2 entries=1 op=nft_register_rule pid=2495 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:50:42.299000 audit[2495]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7ffc3f3811f0 a2=0 a3=0 items=0 ppid=2344 pid=2495 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:50:42.299000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Dec 16 12:50:42.308000 audit[2503]: NETFILTER_CFG table=filter:36 family=2 entries=1 op=nft_register_rule pid=2503 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:50:42.308000 audit[2503]: SYSCALL arch=c000003e syscall=46 success=yes exit=300 a0=3 a1=7ffed64766c0 a2=0 a3=0 items=0 ppid=2344 pid=2503 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:50:42.308000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D464F5257415244002D6900646F636B657230002D6A00414343455054 Dec 16 12:50:42.319000 audit[2509]: NETFILTER_CFG table=filter:37 family=2 entries=1 op=nft_register_rule pid=2509 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:50:42.319000 audit[2509]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffcdae1ff20 a2=0 a3=0 items=0 ppid=2344 pid=2509 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:50:42.319000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45520000002D6900646F636B657230002D6F00646F636B657230002D6A0044524F50 Dec 16 12:50:42.321000 audit[2511]: NETFILTER_CFG table=filter:38 family=2 entries=1 op=nft_register_rule pid=2511 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:50:42.321000 audit[2511]: SYSCALL arch=c000003e syscall=46 success=yes exit=512 a0=3 
a1=7ffc2b04adc0 a2=0 a3=0 items=0 ppid=2344 pid=2511 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:50:42.321000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D4354002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Dec 16 12:50:42.324000 audit[2513]: NETFILTER_CFG table=filter:39 family=2 entries=1 op=nft_register_rule pid=2513 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:50:42.324000 audit[2513]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffc69c180a0 a2=0 a3=0 items=0 ppid=2344 pid=2513 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:50:42.324000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D425249444745002D6F00646F636B657230002D6A00444F434B4552 Dec 16 12:50:42.326000 audit[2515]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_rule pid=2515 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:50:42.326000 audit[2515]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7ffeffa3a680 a2=0 a3=0 items=0 ppid=2344 pid=2515 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:50:42.326000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 16 12:50:42.329000 audit[2517]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_rule pid=2517 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:50:42.329000 audit[2517]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffc3fe1c3c0 a2=0 a3=0 items=0 ppid=2344 pid=2517 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:50:42.329000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Dec 16 12:50:42.330453 systemd-networkd[1548]: docker0: Link UP Dec 16 12:50:42.341480 dockerd[2344]: time="2025-12-16T12:50:42.341406387Z" level=info msg="Loading containers: done." 
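The `audit: PROCTITLE` values in the records above are the audited process's command line, hex-encoded with NUL bytes separating the arguments. A minimal decoder sketch (Python; assumes well-formed hex as emitted by kauditd):

```python
def decode_proctitle(hexstr: str) -> str:
    """Audit PROCTITLE: the argv of the audited process, hex-encoded,
    with NUL bytes separating the arguments."""
    raw = bytes.fromhex(hexstr)
    return " ".join(a.decode("utf-8", "replace") for a in raw.split(b"\x00"))

# The DOCKER-USER chain creation recorded above decodes to the
# iptables invocation dockerd issued:
cmd = decode_proctitle(
    "2F7573722F62696E2F69707461626C6573002D2D77616974"
    "002D740066696C746572002D4E00444F434B45522D55534552"
)
print(cmd)  # /usr/bin/iptables --wait -t filter -N DOCKER-USER
```

Applied across the records above, each PROCTITLE decodes to one of the iptables/ip6tables calls dockerd made to build its DOCKER-* chains.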
Dec 16 12:50:42.389521 dockerd[2344]: time="2025-12-16T12:50:42.389376858Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 16 12:50:42.389521 dockerd[2344]: time="2025-12-16T12:50:42.389471083Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Dec 16 12:50:42.389822 dockerd[2344]: time="2025-12-16T12:50:42.389624418Z" level=info msg="Initializing buildkit" Dec 16 12:50:42.430429 dockerd[2344]: time="2025-12-16T12:50:42.430388744Z" level=info msg="Completed buildkit initialization" Dec 16 12:50:42.437227 dockerd[2344]: time="2025-12-16T12:50:42.437148022Z" level=info msg="Daemon has completed initialization" Dec 16 12:50:42.438057 dockerd[2344]: time="2025-12-16T12:50:42.437352010Z" level=info msg="API listen on /run/docker.sock" Dec 16 12:50:42.437650 systemd[1]: Started docker.service - Docker Application Container Engine. Dec 16 12:50:42.437000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:50:44.227295 containerd[1969]: time="2025-12-16T12:50:44.227249861Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\"" Dec 16 12:50:44.953413 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount363466200.mount: Deactivated successfully. 
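The dockerd entries above use logrus' text format: space-separated key=value pairs, with values quoted when they contain spaces. A rough parser sketch (assumes keys never contain `=` and space-bearing values are always quoted, which holds for the lines shown here):

```python
import shlex

def parse_logrus(line: str) -> dict:
    """Split a logrus text-format line into a field dict.
    shlex handles the double-quoted values."""
    fields = {}
    for token in shlex.split(line):
        key, _, value = token.partition("=")
        fields[key] = value
    return fields

rec = parse_logrus(
    'time="2025-12-16T12:50:42.437352010Z" level=info '
    'msg="API listen on /run/docker.sock"'
)
print(rec["msg"])  # API listen on /run/docker.sock
```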
Dec 16 12:50:46.184639 containerd[1969]: time="2025-12-16T12:50:46.184585437Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:50:46.186030 containerd[1969]: time="2025-12-16T12:50:46.185990513Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.7: active requests=0, bytes read=28445968" Dec 16 12:50:46.187936 containerd[1969]: time="2025-12-16T12:50:46.187224815Z" level=info msg="ImageCreate event name:\"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:50:46.190187 containerd[1969]: time="2025-12-16T12:50:46.190134574Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:50:46.191277 containerd[1969]: time="2025-12-16T12:50:46.191147440Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.7\" with image id \"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\", size \"30111311\" in 1.963860959s" Dec 16 12:50:46.191277 containerd[1969]: time="2025-12-16T12:50:46.191226259Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\" returns image reference \"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\"" Dec 16 12:50:46.192430 containerd[1969]: time="2025-12-16T12:50:46.192384759Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\"" Dec 16 12:50:47.707884 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. 
Dec 16 12:50:47.712180 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 12:50:47.985925 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 12:50:47.991958 kernel: kauditd_printk_skb: 132 callbacks suppressed Dec 16 12:50:47.992065 kernel: audit: type=1130 audit(1765889447.985:300): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:50:47.985000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:50:48.001584 (kubelet)[2627]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 12:50:48.075046 kubelet[2627]: E1216 12:50:48.074970 2627 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 12:50:48.081788 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 12:50:48.082679 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 12:50:48.082000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 16 12:50:48.083780 systemd[1]: kubelet.service: Consumed 214ms CPU time, 110.3M memory peak. 
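The kernel audit lines above stamp each record as `audit(EPOCH.millis:serial)`. Converting the epoch part shows it lines up with the surrounding journal timestamps (UTC in this log):

```python
from datetime import datetime, timezone

def audit_timestamp(stamp: str) -> datetime:
    """Convert the EPOCH.millis part of an audit(EPOCH.millis:serial)
    stamp to an aware UTC datetime."""
    epoch, _, _serial = stamp.partition(":")
    return datetime.fromtimestamp(float(epoch), tz=timezone.utc)

# audit(1765889447.985:300) from the kubelet SERVICE_START record above:
ts = audit_timestamp("1765889447.985:300")
print(ts.strftime("%b %d %H:%M:%S"))  # Dec 16 12:50:47
```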
Dec 16 12:50:48.088061 kernel: audit: type=1131 audit(1765889448.082:301): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 16 12:50:48.123771 containerd[1969]: time="2025-12-16T12:50:48.123717871Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:50:48.125592 containerd[1969]: time="2025-12-16T12:50:48.125552417Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.7: active requests=0, bytes read=26011378" Dec 16 12:50:48.127898 containerd[1969]: time="2025-12-16T12:50:48.127845300Z" level=info msg="ImageCreate event name:\"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:50:48.132199 containerd[1969]: time="2025-12-16T12:50:48.131358910Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:50:48.132199 containerd[1969]: time="2025-12-16T12:50:48.132079037Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.7\" with image id \"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\", size \"27673815\" in 1.939646622s" Dec 16 12:50:48.132199 containerd[1969]: time="2025-12-16T12:50:48.132104065Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\" returns image reference \"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\"" Dec 16 12:50:48.132975 
containerd[1969]: time="2025-12-16T12:50:48.132949210Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\"" Dec 16 12:50:49.526985 containerd[1969]: time="2025-12-16T12:50:49.526940350Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:50:49.529623 containerd[1969]: time="2025-12-16T12:50:49.529258008Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.7: active requests=0, bytes read=20149965" Dec 16 12:50:49.531575 containerd[1969]: time="2025-12-16T12:50:49.531541178Z" level=info msg="ImageCreate event name:\"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:50:49.535426 containerd[1969]: time="2025-12-16T12:50:49.535373353Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:50:49.536438 containerd[1969]: time="2025-12-16T12:50:49.536407863Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.7\" with image id \"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\", size \"21815154\" in 1.403428306s" Dec 16 12:50:49.536574 containerd[1969]: time="2025-12-16T12:50:49.536555903Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\" returns image reference \"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\"" Dec 16 12:50:49.537317 containerd[1969]: time="2025-12-16T12:50:49.537257217Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\"" Dec 16 12:50:50.661100 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount1640016737.mount: Deactivated successfully. Dec 16 12:50:51.292255 containerd[1969]: time="2025-12-16T12:50:51.292202569Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:50:51.293645 containerd[1969]: time="2025-12-16T12:50:51.293363102Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.7: active requests=0, bytes read=20340589" Dec 16 12:50:51.295176 containerd[1969]: time="2025-12-16T12:50:51.295140766Z" level=info msg="ImageCreate event name:\"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:50:51.297627 containerd[1969]: time="2025-12-16T12:50:51.297595492Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:50:51.298230 containerd[1969]: time="2025-12-16T12:50:51.298205685Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.7\" with image id \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\", repo tag \"registry.k8s.io/kube-proxy:v1.33.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\", size \"31929115\" in 1.760913914s" Dec 16 12:50:51.298318 containerd[1969]: time="2025-12-16T12:50:51.298304969Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\" returns image reference \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\"" Dec 16 12:50:51.298804 containerd[1969]: time="2025-12-16T12:50:51.298772715Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Dec 16 12:50:51.975712 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2063510253.mount: Deactivated successfully. 
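The transient mount unit names above (`var-lib-containerd-tmpmounts-...`) come from systemd's path escaping: `/` becomes `-`, and a literal `-` in a path component is escaped as `\x2d`. A best-effort reverse mapping (a sketch only; it ignores other cases that `systemd-escape` handles, such as leading dots):

```python
import re

def mount_unit_to_path(unit: str) -> str:
    """Best-effort reverse of systemd path escaping for .mount units:
    '-' separates path components, \\xNN escapes literal bytes."""
    name = unit.removesuffix(".mount")
    path = "/" + name.replace("-", "/")
    return re.sub(r"\\x([0-9a-fA-F]{2})",
                  lambda m: chr(int(m.group(1), 16)), path)

print(mount_unit_to_path(
    r"var-lib-containerd-tmpmounts-containerd\x2dmount1640016737.mount"))
# /var/lib/containerd/tmpmounts/containerd-mount1640016737
```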
Dec 16 12:50:53.113660 containerd[1969]: time="2025-12-16T12:50:53.113591135Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:50:53.115833 containerd[1969]: time="2025-12-16T12:50:53.115571787Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20128467" Dec 16 12:50:53.117853 containerd[1969]: time="2025-12-16T12:50:53.117811290Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:50:53.122484 containerd[1969]: time="2025-12-16T12:50:53.122414780Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:50:53.123474 containerd[1969]: time="2025-12-16T12:50:53.123337756Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.824535762s" Dec 16 12:50:53.123474 containerd[1969]: time="2025-12-16T12:50:53.123371932Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Dec 16 12:50:53.124335 containerd[1969]: time="2025-12-16T12:50:53.124295043Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Dec 16 12:50:53.606075 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2923135403.mount: Deactivated successfully. 
Dec 16 12:50:53.617815 containerd[1969]: time="2025-12-16T12:50:53.617766745Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 12:50:53.619854 containerd[1969]: time="2025-12-16T12:50:53.619666850Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Dec 16 12:50:53.622123 containerd[1969]: time="2025-12-16T12:50:53.621989306Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 12:50:53.625935 containerd[1969]: time="2025-12-16T12:50:53.625180528Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 12:50:53.625935 containerd[1969]: time="2025-12-16T12:50:53.625802833Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 501.477577ms" Dec 16 12:50:53.625935 containerd[1969]: time="2025-12-16T12:50:53.625831278Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Dec 16 12:50:53.626526 containerd[1969]: time="2025-12-16T12:50:53.626494576Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Dec 16 12:50:54.220045 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1066201122.mount: Deactivated 
successfully. Dec 16 12:50:56.467670 containerd[1969]: time="2025-12-16T12:50:56.467619080Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:50:56.468864 containerd[1969]: time="2025-12-16T12:50:56.468661373Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=56977083" Dec 16 12:50:56.469787 containerd[1969]: time="2025-12-16T12:50:56.469738946Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:50:56.472754 containerd[1969]: time="2025-12-16T12:50:56.472722380Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:50:56.473818 containerd[1969]: time="2025-12-16T12:50:56.473791441Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 2.847267909s" Dec 16 12:50:56.473947 containerd[1969]: time="2025-12-16T12:50:56.473928687Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Dec 16 12:50:58.332769 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Dec 16 12:50:58.336179 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 12:50:58.606217 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Dec 16 12:50:58.613494 kernel: audit: type=1130 audit(1765889458.606:302): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:50:58.606000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:50:58.617257 (kubelet)[2786]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 12:50:58.697056 kubelet[2786]: E1216 12:50:58.697007 2786 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 12:50:58.702488 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 12:50:58.702823 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 12:50:58.702000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 16 12:50:58.708939 kernel: audit: type=1131 audit(1765889458.702:303): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 16 12:50:58.703650 systemd[1]: kubelet.service: Consumed 217ms CPU time, 111.1M memory peak. Dec 16 12:50:59.525443 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Dec 16 12:50:59.525789 systemd[1]: kubelet.service: Consumed 217ms CPU time, 111.1M memory peak. Dec 16 12:50:59.524000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:50:59.535360 kernel: audit: type=1130 audit(1765889459.524:304): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:50:59.535464 kernel: audit: type=1131 audit(1765889459.524:305): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:50:59.524000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:50:59.532256 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 12:50:59.568982 systemd[1]: Reload requested from client PID 2801 ('systemctl') (unit session-8.scope)... Dec 16 12:50:59.569006 systemd[1]: Reloading... Dec 16 12:50:59.741940 zram_generator::config[2849]: No configuration found. Dec 16 12:51:00.030568 systemd[1]: Reloading finished in 460 ms. 
Dec 16 12:51:00.114157 kernel: audit: type=1334 audit(1765889460.110:306): prog-id=70 op=LOAD Dec 16 12:51:00.110000 audit: BPF prog-id=70 op=LOAD Dec 16 12:51:00.113000 audit: BPF prog-id=66 op=UNLOAD Dec 16 12:51:00.116028 kernel: audit: type=1334 audit(1765889460.113:307): prog-id=66 op=UNLOAD Dec 16 12:51:00.133565 kernel: audit: type=1334 audit(1765889460.113:308): prog-id=71 op=LOAD Dec 16 12:51:00.133691 kernel: audit: type=1334 audit(1765889460.113:309): prog-id=72 op=LOAD Dec 16 12:51:00.113000 audit: BPF prog-id=71 op=LOAD Dec 16 12:51:00.113000 audit: BPF prog-id=72 op=LOAD Dec 16 12:51:00.113000 audit: BPF prog-id=67 op=UNLOAD Dec 16 12:51:00.136991 kernel: audit: type=1334 audit(1765889460.113:310): prog-id=67 op=UNLOAD Dec 16 12:51:00.113000 audit: BPF prog-id=68 op=UNLOAD Dec 16 12:51:00.139938 kernel: audit: type=1334 audit(1765889460.113:311): prog-id=68 op=UNLOAD Dec 16 12:51:00.114000 audit: BPF prog-id=73 op=LOAD Dec 16 12:51:00.114000 audit: BPF prog-id=56 op=UNLOAD Dec 16 12:51:00.116000 audit: BPF prog-id=74 op=LOAD Dec 16 12:51:00.116000 audit: BPF prog-id=59 op=UNLOAD Dec 16 12:51:00.124000 audit: BPF prog-id=75 op=LOAD Dec 16 12:51:00.124000 audit: BPF prog-id=69 op=UNLOAD Dec 16 12:51:00.126000 audit: BPF prog-id=76 op=LOAD Dec 16 12:51:00.133000 audit: BPF prog-id=63 op=UNLOAD Dec 16 12:51:00.133000 audit: BPF prog-id=77 op=LOAD Dec 16 12:51:00.133000 audit: BPF prog-id=78 op=LOAD Dec 16 12:51:00.133000 audit: BPF prog-id=64 op=UNLOAD Dec 16 12:51:00.133000 audit: BPF prog-id=65 op=UNLOAD Dec 16 12:51:00.133000 audit: BPF prog-id=79 op=LOAD Dec 16 12:51:00.133000 audit: BPF prog-id=50 op=UNLOAD Dec 16 12:51:00.134000 audit: BPF prog-id=80 op=LOAD Dec 16 12:51:00.134000 audit: BPF prog-id=81 op=LOAD Dec 16 12:51:00.134000 audit: BPF prog-id=51 op=UNLOAD Dec 16 12:51:00.134000 audit: BPF prog-id=52 op=UNLOAD Dec 16 12:51:00.136000 audit: BPF prog-id=82 op=LOAD Dec 16 12:51:00.136000 audit: BPF prog-id=47 op=UNLOAD Dec 16 12:51:00.136000 
audit: BPF prog-id=83 op=LOAD Dec 16 12:51:00.136000 audit: BPF prog-id=84 op=LOAD Dec 16 12:51:00.136000 audit: BPF prog-id=48 op=UNLOAD Dec 16 12:51:00.136000 audit: BPF prog-id=49 op=UNLOAD Dec 16 12:51:00.137000 audit: BPF prog-id=85 op=LOAD Dec 16 12:51:00.137000 audit: BPF prog-id=53 op=UNLOAD Dec 16 12:51:00.137000 audit: BPF prog-id=86 op=LOAD Dec 16 12:51:00.137000 audit: BPF prog-id=87 op=LOAD Dec 16 12:51:00.138000 audit: BPF prog-id=54 op=UNLOAD Dec 16 12:51:00.138000 audit: BPF prog-id=55 op=UNLOAD Dec 16 12:51:00.139000 audit: BPF prog-id=88 op=LOAD Dec 16 12:51:00.139000 audit: BPF prog-id=89 op=LOAD Dec 16 12:51:00.139000 audit: BPF prog-id=57 op=UNLOAD Dec 16 12:51:00.139000 audit: BPF prog-id=58 op=UNLOAD Dec 16 12:51:00.140000 audit: BPF prog-id=90 op=LOAD Dec 16 12:51:00.140000 audit: BPF prog-id=60 op=UNLOAD Dec 16 12:51:00.140000 audit: BPF prog-id=91 op=LOAD Dec 16 12:51:00.140000 audit: BPF prog-id=92 op=LOAD Dec 16 12:51:00.140000 audit: BPF prog-id=61 op=UNLOAD Dec 16 12:51:00.140000 audit: BPF prog-id=62 op=UNLOAD Dec 16 12:51:00.170534 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Dec 16 12:51:00.170628 systemd[1]: kubelet.service: Failed with result 'signal'. Dec 16 12:51:00.170000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 16 12:51:00.170949 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 12:51:00.171011 systemd[1]: kubelet.service: Consumed 146ms CPU time, 98.5M memory peak. Dec 16 12:51:00.172735 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 12:51:00.463459 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Dec 16 12:51:00.462000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:51:00.474425 (kubelet)[2912]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 16 12:51:00.520420 kubelet[2912]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 12:51:00.520420 kubelet[2912]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 16 12:51:00.520420 kubelet[2912]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Dec 16 12:51:00.520774 kubelet[2912]: I1216 12:51:00.520462 2912 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 16 12:51:01.275827 kubelet[2912]: I1216 12:51:01.275774 2912 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Dec 16 12:51:01.275827 kubelet[2912]: I1216 12:51:01.275809 2912 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 16 12:51:01.276151 kubelet[2912]: I1216 12:51:01.276127 2912 server.go:956] "Client rotation is on, will bootstrap in background" Dec 16 12:51:01.350371 kubelet[2912]: I1216 12:51:01.350326 2912 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 16 12:51:01.359611 kubelet[2912]: E1216 12:51:01.359565 2912 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.17.11:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.17.11:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Dec 16 12:51:01.386151 kubelet[2912]: I1216 12:51:01.386115 2912 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 16 12:51:01.410050 kubelet[2912]: I1216 12:51:01.409999 2912 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 16 12:51:01.416152 kubelet[2912]: I1216 12:51:01.415997 2912 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 16 12:51:01.420918 kubelet[2912]: I1216 12:51:01.416139 2912 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-17-11","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 16 12:51:01.425137 kubelet[2912]: I1216 12:51:01.425020 2912 topology_manager.go:138] "Creating topology manager with none policy" Dec 16 
12:51:01.425137 kubelet[2912]: I1216 12:51:01.425134 2912 container_manager_linux.go:303] "Creating device plugin manager" Dec 16 12:51:01.425683 kubelet[2912]: I1216 12:51:01.425662 2912 state_mem.go:36] "Initialized new in-memory state store" Dec 16 12:51:01.433925 kubelet[2912]: I1216 12:51:01.433871 2912 kubelet.go:480] "Attempting to sync node with API server" Dec 16 12:51:01.433925 kubelet[2912]: I1216 12:51:01.433938 2912 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 16 12:51:01.438706 kubelet[2912]: I1216 12:51:01.438667 2912 kubelet.go:386] "Adding apiserver pod source" Dec 16 12:51:01.438818 kubelet[2912]: I1216 12:51:01.438725 2912 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 16 12:51:01.462642 kubelet[2912]: E1216 12:51:01.462559 2912 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.17.11:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-17-11&limit=500&resourceVersion=0\": dial tcp 172.31.17.11:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 16 12:51:01.467099 kubelet[2912]: I1216 12:51:01.463379 2912 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Dec 16 12:51:01.470229 kubelet[2912]: I1216 12:51:01.470187 2912 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Dec 16 12:51:01.471789 kubelet[2912]: W1216 12:51:01.471761 2912 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Dec 16 12:51:01.477633 kubelet[2912]: E1216 12:51:01.477524 2912 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.17.11:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.17.11:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 16 12:51:01.491028 kubelet[2912]: I1216 12:51:01.490625 2912 watchdog_linux.go:99] "Systemd watchdog is not enabled" Dec 16 12:51:01.491028 kubelet[2912]: I1216 12:51:01.490737 2912 server.go:1289] "Started kubelet" Dec 16 12:51:01.547384 kubelet[2912]: I1216 12:51:01.544347 2912 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Dec 16 12:51:01.557004 kubelet[2912]: I1216 12:51:01.556963 2912 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 16 12:51:01.562082 kubelet[2912]: I1216 12:51:01.562033 2912 server.go:317] "Adding debug handlers to kubelet server" Dec 16 12:51:01.571473 kubelet[2912]: I1216 12:51:01.569267 2912 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 16 12:51:01.574852 kubelet[2912]: E1216 12:51:01.569405 2912 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.17.11:6443/api/v1/namespaces/default/events\": dial tcp 172.31.17.11:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-17-11.1881b31bf0c19d79 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-17-11,UID:ip-172-31-17-11,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-17-11,},FirstTimestamp:2025-12-16 12:51:01.490654585 +0000 UTC m=+1.010435523,LastTimestamp:2025-12-16 12:51:01.490654585 +0000 UTC 
m=+1.010435523,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-17-11,}" Dec 16 12:51:01.577457 kubelet[2912]: I1216 12:51:01.577389 2912 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 16 12:51:01.581072 kubelet[2912]: I1216 12:51:01.581035 2912 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 16 12:51:01.582009 kubelet[2912]: E1216 12:51:01.581975 2912 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-17-11\" not found" Dec 16 12:51:01.584239 kubelet[2912]: I1216 12:51:01.584193 2912 volume_manager.go:297] "Starting Kubelet Volume Manager" Dec 16 12:51:01.587462 kubelet[2912]: I1216 12:51:01.584483 2912 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Dec 16 12:51:01.587462 kubelet[2912]: I1216 12:51:01.584555 2912 reconciler.go:26] "Reconciler: start to sync state" Dec 16 12:51:01.588706 kubelet[2912]: E1216 12:51:01.588665 2912 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.17.11:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.17.11:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 16 12:51:01.589161 kubelet[2912]: E1216 12:51:01.588998 2912 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.17.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-17-11?timeout=10s\": dial tcp 172.31.17.11:6443: connect: connection refused" interval="200ms" Dec 16 12:51:01.591361 kubelet[2912]: I1216 12:51:01.591330 2912 factory.go:223] Registration of the systemd container factory successfully Dec 16 12:51:01.592740 kubelet[2912]: I1216 
12:51:01.592709 2912 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 16 12:51:01.599672 kubelet[2912]: I1216 12:51:01.599353 2912 factory.go:223] Registration of the containerd container factory successfully Dec 16 12:51:01.603000 audit[2928]: NETFILTER_CFG table=mangle:42 family=2 entries=2 op=nft_register_chain pid=2928 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:51:01.603000 audit[2928]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffe49f66340 a2=0 a3=0 items=0 ppid=2912 pid=2928 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:51:01.603000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Dec 16 12:51:01.611000 audit[2931]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_chain pid=2931 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:51:01.611000 audit[2931]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe96915d70 a2=0 a3=0 items=0 ppid=2912 pid=2931 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:51:01.611000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Dec 16 12:51:01.617000 audit[2933]: NETFILTER_CFG table=filter:44 family=2 entries=2 op=nft_register_chain pid=2933 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:51:01.617000 audit[2933]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffd87d40fe0 a2=0 a3=0 items=0 ppid=2912 pid=2933 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:51:01.617000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 16 12:51:01.623000 audit[2935]: NETFILTER_CFG table=filter:45 family=2 entries=2 op=nft_register_chain pid=2935 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:51:01.623000 audit[2935]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7fffbac34480 a2=0 a3=0 items=0 ppid=2912 pid=2935 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:51:01.623000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 16 12:51:01.625236 kubelet[2912]: I1216 12:51:01.625201 2912 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 16 12:51:01.625236 kubelet[2912]: I1216 12:51:01.625225 2912 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 16 12:51:01.625437 kubelet[2912]: I1216 12:51:01.625244 2912 state_mem.go:36] "Initialized new in-memory state store" Dec 16 12:51:01.627573 kubelet[2912]: I1216 12:51:01.627541 2912 policy_none.go:49] "None policy: Start" Dec 16 12:51:01.627573 kubelet[2912]: I1216 12:51:01.627571 2912 memory_manager.go:186] "Starting memorymanager" policy="None" Dec 16 12:51:01.627838 kubelet[2912]: I1216 12:51:01.627587 2912 state_mem.go:35] "Initializing new in-memory state store" Dec 16 12:51:01.648527 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Dec 16 12:51:01.669000 audit[2938]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2938 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:51:01.669000 audit[2938]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7ffd7a747400 a2=0 a3=0 items=0 ppid=2912 pid=2938 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:51:01.669000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Dec 16 12:51:01.672584 kubelet[2912]: I1216 12:51:01.671481 2912 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Dec 16 12:51:01.672000 audit[2939]: NETFILTER_CFG table=mangle:47 family=10 entries=2 op=nft_register_chain pid=2939 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:51:01.672000 audit[2939]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffc5deb5430 a2=0 a3=0 items=0 ppid=2912 pid=2939 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:51:01.672000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Dec 16 12:51:01.674709 kubelet[2912]: I1216 12:51:01.674682 2912 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Dec 16 12:51:01.674709 kubelet[2912]: I1216 12:51:01.674712 2912 status_manager.go:230] "Starting to sync pod status with apiserver" Dec 16 12:51:01.674844 kubelet[2912]: I1216 12:51:01.674759 2912 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Dec 16 12:51:01.674844 kubelet[2912]: I1216 12:51:01.674768 2912 kubelet.go:2436] "Starting kubelet main sync loop" Dec 16 12:51:01.674844 kubelet[2912]: E1216 12:51:01.674826 2912 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 16 12:51:01.675872 kubelet[2912]: E1216 12:51:01.675844 2912 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.17.11:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.17.11:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 16 12:51:01.675000 audit[2940]: NETFILTER_CFG table=mangle:48 family=2 entries=1 op=nft_register_chain pid=2940 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:51:01.676000 audit[2943]: NETFILTER_CFG table=mangle:49 family=10 entries=1 op=nft_register_chain pid=2943 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:51:01.675000 audit[2940]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc893ba140 a2=0 a3=0 items=0 ppid=2912 pid=2940 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:51:01.675000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Dec 16 12:51:01.676000 audit[2943]: SYSCALL arch=c000003e syscall=46 
success=yes exit=104 a0=3 a1=7ffe56282dd0 a2=0 a3=0 items=0 ppid=2912 pid=2943 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:51:01.676000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Dec 16 12:51:01.679000 audit[2945]: NETFILTER_CFG table=nat:50 family=10 entries=1 op=nft_register_chain pid=2945 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:51:01.679000 audit[2944]: NETFILTER_CFG table=nat:51 family=2 entries=1 op=nft_register_chain pid=2944 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:51:01.679000 audit[2945]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffceff195a0 a2=0 a3=0 items=0 ppid=2912 pid=2945 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:51:01.679000 audit[2944]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffb8cf7780 a2=0 a3=0 items=0 ppid=2912 pid=2944 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:51:01.679000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Dec 16 12:51:01.679000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Dec 16 12:51:01.687307 kubelet[2912]: E1216 12:51:01.687269 2912 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-17-11\" not found" Dec 16 12:51:01.687000 audit[2947]: NETFILTER_CFG 
table=filter:52 family=10 entries=1 op=nft_register_chain pid=2947 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:51:01.687000 audit[2947]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc3ce3c6c0 a2=0 a3=0 items=0 ppid=2912 pid=2947 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:51:01.687000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Dec 16 12:51:01.689000 audit[2946]: NETFILTER_CFG table=filter:53 family=2 entries=1 op=nft_register_chain pid=2946 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:51:01.689000 audit[2946]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffdb4d17010 a2=0 a3=0 items=0 ppid=2912 pid=2946 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:51:01.689000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Dec 16 12:51:01.690718 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Dec 16 12:51:01.705067 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Dec 16 12:51:01.731590 kubelet[2912]: E1216 12:51:01.731547 2912 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Dec 16 12:51:01.732091 kubelet[2912]: I1216 12:51:01.731929 2912 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 16 12:51:01.732091 kubelet[2912]: I1216 12:51:01.731950 2912 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 16 12:51:01.733765 kubelet[2912]: I1216 12:51:01.732994 2912 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 16 12:51:01.734920 kubelet[2912]: E1216 12:51:01.734886 2912 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Dec 16 12:51:01.735107 kubelet[2912]: E1216 12:51:01.735080 2912 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-17-11\" not found" Dec 16 12:51:01.790410 kubelet[2912]: I1216 12:51:01.788489 2912 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ee5d4f242643d8dc0642d18806c477af-ca-certs\") pod \"kube-apiserver-ip-172-31-17-11\" (UID: \"ee5d4f242643d8dc0642d18806c477af\") " pod="kube-system/kube-apiserver-ip-172-31-17-11" Dec 16 12:51:01.790410 kubelet[2912]: I1216 12:51:01.788672 2912 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ee5d4f242643d8dc0642d18806c477af-k8s-certs\") pod \"kube-apiserver-ip-172-31-17-11\" (UID: \"ee5d4f242643d8dc0642d18806c477af\") " pod="kube-system/kube-apiserver-ip-172-31-17-11" Dec 16 12:51:01.790410 kubelet[2912]: I1216 12:51:01.789178 2912 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ee5d4f242643d8dc0642d18806c477af-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-17-11\" (UID: \"ee5d4f242643d8dc0642d18806c477af\") " pod="kube-system/kube-apiserver-ip-172-31-17-11" Dec 16 12:51:01.792246 kubelet[2912]: E1216 12:51:01.791546 2912 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.17.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-17-11?timeout=10s\": dial tcp 172.31.17.11:6443: connect: connection refused" interval="400ms" Dec 16 12:51:01.837000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hostnamed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:51:01.840771 kubelet[2912]: I1216 12:51:01.837898 2912 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-17-11" Dec 16 12:51:01.840771 kubelet[2912]: E1216 12:51:01.838374 2912 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.17.11:6443/api/v1/nodes\": dial tcp 172.31.17.11:6443: connect: connection refused" node="ip-172-31-17-11" Dec 16 12:51:01.834670 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Dec 16 12:51:01.862000 audit: BPF prog-id=70 op=UNLOAD Dec 16 12:51:01.870197 systemd[1]: Created slice kubepods-burstable-podee5d4f242643d8dc0642d18806c477af.slice - libcontainer container kubepods-burstable-podee5d4f242643d8dc0642d18806c477af.slice. 
Dec 16 12:51:01.890573 kubelet[2912]: I1216 12:51:01.890532 2912 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/48b5d68e38ac2798f613c6b3d1901cc2-ca-certs\") pod \"kube-controller-manager-ip-172-31-17-11\" (UID: \"48b5d68e38ac2798f613c6b3d1901cc2\") " pod="kube-system/kube-controller-manager-ip-172-31-17-11" Dec 16 12:51:01.891689 kubelet[2912]: I1216 12:51:01.890591 2912 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/48b5d68e38ac2798f613c6b3d1901cc2-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-17-11\" (UID: \"48b5d68e38ac2798f613c6b3d1901cc2\") " pod="kube-system/kube-controller-manager-ip-172-31-17-11" Dec 16 12:51:01.891689 kubelet[2912]: I1216 12:51:01.890619 2912 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/48b5d68e38ac2798f613c6b3d1901cc2-k8s-certs\") pod \"kube-controller-manager-ip-172-31-17-11\" (UID: \"48b5d68e38ac2798f613c6b3d1901cc2\") " pod="kube-system/kube-controller-manager-ip-172-31-17-11" Dec 16 12:51:01.891689 kubelet[2912]: I1216 12:51:01.890639 2912 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/48b5d68e38ac2798f613c6b3d1901cc2-kubeconfig\") pod \"kube-controller-manager-ip-172-31-17-11\" (UID: \"48b5d68e38ac2798f613c6b3d1901cc2\") " pod="kube-system/kube-controller-manager-ip-172-31-17-11" Dec 16 12:51:01.891689 kubelet[2912]: I1216 12:51:01.890660 2912 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/48b5d68e38ac2798f613c6b3d1901cc2-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-17-11\" (UID: 
\"48b5d68e38ac2798f613c6b3d1901cc2\") " pod="kube-system/kube-controller-manager-ip-172-31-17-11" Dec 16 12:51:01.891689 kubelet[2912]: I1216 12:51:01.890697 2912 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9b96b9d4630ace3041ba5aed5b1ed71d-kubeconfig\") pod \"kube-scheduler-ip-172-31-17-11\" (UID: \"9b96b9d4630ace3041ba5aed5b1ed71d\") " pod="kube-system/kube-scheduler-ip-172-31-17-11" Dec 16 12:51:01.891964 kubelet[2912]: E1216 12:51:01.891373 2912 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-17-11\" not found" node="ip-172-31-17-11" Dec 16 12:51:01.898184 containerd[1969]: time="2025-12-16T12:51:01.896976838Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-17-11,Uid:ee5d4f242643d8dc0642d18806c477af,Namespace:kube-system,Attempt:0,}" Dec 16 12:51:01.911971 systemd[1]: Created slice kubepods-burstable-pod48b5d68e38ac2798f613c6b3d1901cc2.slice - libcontainer container kubepods-burstable-pod48b5d68e38ac2798f613c6b3d1901cc2.slice. Dec 16 12:51:01.925860 kubelet[2912]: E1216 12:51:01.925813 2912 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-17-11\" not found" node="ip-172-31-17-11" Dec 16 12:51:01.938144 systemd[1]: Created slice kubepods-burstable-pod9b96b9d4630ace3041ba5aed5b1ed71d.slice - libcontainer container kubepods-burstable-pod9b96b9d4630ace3041ba5aed5b1ed71d.slice. 
Dec 16 12:51:01.949231 kubelet[2912]: E1216 12:51:01.949199 2912 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-17-11\" not found" node="ip-172-31-17-11" Dec 16 12:51:02.045388 kubelet[2912]: I1216 12:51:02.045348 2912 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-17-11" Dec 16 12:51:02.046511 kubelet[2912]: E1216 12:51:02.046457 2912 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.17.11:6443/api/v1/nodes\": dial tcp 172.31.17.11:6443: connect: connection refused" node="ip-172-31-17-11" Dec 16 12:51:02.180480 containerd[1969]: time="2025-12-16T12:51:02.180282414Z" level=info msg="connecting to shim d0b85cfa061bc9c7965f23f622b14c1e5b957df4edb5d3275bd4d1df2e31e61a" address="unix:///run/containerd/s/08628c4ec0ed5992f092ca9d4ca4cf49015e2f5d63a4fca059f3ed6cb02bea45" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:51:02.192462 kubelet[2912]: E1216 12:51:02.192394 2912 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.17.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-17-11?timeout=10s\": dial tcp 172.31.17.11:6443: connect: connection refused" interval="800ms" Dec 16 12:51:02.237666 containerd[1969]: time="2025-12-16T12:51:02.237617113Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-17-11,Uid:48b5d68e38ac2798f613c6b3d1901cc2,Namespace:kube-system,Attempt:0,}" Dec 16 12:51:02.251237 containerd[1969]: time="2025-12-16T12:51:02.251191620Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-17-11,Uid:9b96b9d4630ace3041ba5aed5b1ed71d,Namespace:kube-system,Attempt:0,}" Dec 16 12:51:02.386167 containerd[1969]: time="2025-12-16T12:51:02.386109969Z" level=info msg="connecting to shim 51dc2255cb111bd00a51931e0e52a571c44b1a7d2d8918b53e9ccc83b2cf16af" 
address="unix:///run/containerd/s/2e21a8e396de733680d4baae5d95f7bf589b8ad9391f9d79495d569123797f28" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:51:02.398459 containerd[1969]: time="2025-12-16T12:51:02.398402384Z" level=info msg="connecting to shim eb3deb725e374506e5f16ba373f63b1530bddda23191f54d75d8f7fc6b7e98cf" address="unix:///run/containerd/s/67e5560b8eabc64d1d8fda1e469d862b9c7c952b00a0e59467b162f3f3bbec4e" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:51:02.451672 kubelet[2912]: I1216 12:51:02.450754 2912 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-17-11" Dec 16 12:51:02.453599 kubelet[2912]: E1216 12:51:02.452155 2912 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.17.11:6443/api/v1/nodes\": dial tcp 172.31.17.11:6443: connect: connection refused" node="ip-172-31-17-11" Dec 16 12:51:02.489579 systemd[1]: Started cri-containerd-d0b85cfa061bc9c7965f23f622b14c1e5b957df4edb5d3275bd4d1df2e31e61a.scope - libcontainer container d0b85cfa061bc9c7965f23f622b14c1e5b957df4edb5d3275bd4d1df2e31e61a. Dec 16 12:51:02.517519 systemd[1]: Started cri-containerd-51dc2255cb111bd00a51931e0e52a571c44b1a7d2d8918b53e9ccc83b2cf16af.scope - libcontainer container 51dc2255cb111bd00a51931e0e52a571c44b1a7d2d8918b53e9ccc83b2cf16af. Dec 16 12:51:02.534213 systemd[1]: Started cri-containerd-eb3deb725e374506e5f16ba373f63b1530bddda23191f54d75d8f7fc6b7e98cf.scope - libcontainer container eb3deb725e374506e5f16ba373f63b1530bddda23191f54d75d8f7fc6b7e98cf. 
Dec 16 12:51:02.555000 audit: BPF prog-id=93 op=LOAD Dec 16 12:51:02.555000 audit: BPF prog-id=94 op=LOAD Dec 16 12:51:02.555000 audit[3018]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2986 pid=3018 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:51:02.555000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3531646332323535636231313162643030613531393331653065353261 Dec 16 12:51:02.555000 audit: BPF prog-id=94 op=UNLOAD Dec 16 12:51:02.555000 audit[3018]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2986 pid=3018 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:51:02.555000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3531646332323535636231313162643030613531393331653065353261 Dec 16 12:51:02.558000 audit: BPF prog-id=95 op=LOAD Dec 16 12:51:02.559000 audit: BPF prog-id=96 op=LOAD Dec 16 12:51:02.559000 audit[2970]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2960 pid=2970 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:51:02.559000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6430623835636661303631626339633739363566323366363232623134 Dec 16 12:51:02.559000 audit: BPF prog-id=96 op=UNLOAD Dec 16 12:51:02.559000 audit[2970]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2960 pid=2970 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:51:02.559000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6430623835636661303631626339633739363566323366363232623134 Dec 16 12:51:02.562000 audit: BPF prog-id=97 op=LOAD Dec 16 12:51:02.562000 audit[2970]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2960 pid=2970 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:51:02.562000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6430623835636661303631626339633739363566323366363232623134 Dec 16 12:51:02.562000 audit: BPF prog-id=98 op=LOAD Dec 16 12:51:02.562000 audit[2970]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=2960 pid=2970 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) 
Dec 16 12:51:02.562000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6430623835636661303631626339633739363566323366363232623134 Dec 16 12:51:02.562000 audit: BPF prog-id=98 op=UNLOAD Dec 16 12:51:02.562000 audit[2970]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2960 pid=2970 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:51:02.562000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6430623835636661303631626339633739363566323366363232623134 Dec 16 12:51:02.562000 audit: BPF prog-id=97 op=UNLOAD Dec 16 12:51:02.562000 audit[2970]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2960 pid=2970 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:51:02.562000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6430623835636661303631626339633739363566323366363232623134 Dec 16 12:51:02.562000 audit: BPF prog-id=99 op=LOAD Dec 16 12:51:02.562000 audit[2970]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=2960 pid=2970 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:51:02.563000 audit: BPF prog-id=100 op=LOAD Dec 16 12:51:02.563000 audit[3018]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2986 pid=3018 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:51:02.563000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3531646332323535636231313162643030613531393331653065353261 Dec 16 12:51:02.563000 audit: BPF prog-id=101 op=LOAD Dec 16 12:51:02.563000 audit[3018]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2986 pid=3018 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:51:02.563000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3531646332323535636231313162643030613531393331653065353261 Dec 16 12:51:02.563000 audit: BPF prog-id=101 op=UNLOAD Dec 16 12:51:02.563000 audit[3018]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2986 pid=3018 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:51:02.563000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3531646332323535636231313162643030613531393331653065353261 Dec 16 12:51:02.563000 audit: BPF prog-id=100 op=UNLOAD Dec 16 12:51:02.563000 audit[3018]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2986 pid=3018 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:51:02.563000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3531646332323535636231313162643030613531393331653065353261 Dec 16 12:51:02.563000 audit: BPF prog-id=102 op=LOAD Dec 16 12:51:02.563000 audit[3018]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2986 pid=3018 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:51:02.563000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3531646332323535636231313162643030613531393331653065353261 Dec 16 12:51:02.562000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6430623835636661303631626339633739363566323366363232623134 Dec 16 12:51:02.572000 audit: BPF prog-id=103 op=LOAD Dec 16 
12:51:02.574000 audit: BPF prog-id=104 op=LOAD Dec 16 12:51:02.574000 audit[3020]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2993 pid=3020 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:51:02.574000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6562336465623732356533373435303665356631366261333733663633 Dec 16 12:51:02.574000 audit: BPF prog-id=104 op=UNLOAD Dec 16 12:51:02.574000 audit[3020]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2993 pid=3020 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:51:02.574000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6562336465623732356533373435303665356631366261333733663633 Dec 16 12:51:02.574000 audit: BPF prog-id=105 op=LOAD Dec 16 12:51:02.574000 audit[3020]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2993 pid=3020 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:51:02.574000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6562336465623732356533373435303665356631366261333733663633 Dec 16 12:51:02.574000 audit: BPF prog-id=106 op=LOAD Dec 16 12:51:02.574000 audit[3020]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2993 pid=3020 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:51:02.574000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6562336465623732356533373435303665356631366261333733663633 Dec 16 12:51:02.574000 audit: BPF prog-id=106 op=UNLOAD Dec 16 12:51:02.574000 audit[3020]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2993 pid=3020 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:51:02.574000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6562336465623732356533373435303665356631366261333733663633 Dec 16 12:51:02.575000 audit: BPF prog-id=105 op=UNLOAD Dec 16 12:51:02.575000 audit[3020]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2993 pid=3020 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 
12:51:02.575000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6562336465623732356533373435303665356631366261333733663633 Dec 16 12:51:02.575000 audit: BPF prog-id=107 op=LOAD Dec 16 12:51:02.575000 audit[3020]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2993 pid=3020 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:51:02.575000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6562336465623732356533373435303665356631366261333733663633 Dec 16 12:51:02.659926 containerd[1969]: time="2025-12-16T12:51:02.659766901Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-17-11,Uid:ee5d4f242643d8dc0642d18806c477af,Namespace:kube-system,Attempt:0,} returns sandbox id \"d0b85cfa061bc9c7965f23f622b14c1e5b957df4edb5d3275bd4d1df2e31e61a\"" Dec 16 12:51:02.660272 containerd[1969]: time="2025-12-16T12:51:02.660245807Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-17-11,Uid:9b96b9d4630ace3041ba5aed5b1ed71d,Namespace:kube-system,Attempt:0,} returns sandbox id \"51dc2255cb111bd00a51931e0e52a571c44b1a7d2d8918b53e9ccc83b2cf16af\"" Dec 16 12:51:02.668944 containerd[1969]: time="2025-12-16T12:51:02.668605793Z" level=info msg="CreateContainer within sandbox \"d0b85cfa061bc9c7965f23f622b14c1e5b957df4edb5d3275bd4d1df2e31e61a\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 16 12:51:02.672803 containerd[1969]: time="2025-12-16T12:51:02.672435793Z" level=info 
msg="CreateContainer within sandbox \"51dc2255cb111bd00a51931e0e52a571c44b1a7d2d8918b53e9ccc83b2cf16af\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 16 12:51:02.679744 containerd[1969]: time="2025-12-16T12:51:02.679702641Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-17-11,Uid:48b5d68e38ac2798f613c6b3d1901cc2,Namespace:kube-system,Attempt:0,} returns sandbox id \"eb3deb725e374506e5f16ba373f63b1530bddda23191f54d75d8f7fc6b7e98cf\"" Dec 16 12:51:02.687288 containerd[1969]: time="2025-12-16T12:51:02.685998790Z" level=info msg="Container 3c47bd76520a8ac1866f6c9f633d2738c9e593dded74147bf80c115efccb6db7: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:51:02.687288 containerd[1969]: time="2025-12-16T12:51:02.686068530Z" level=info msg="CreateContainer within sandbox \"eb3deb725e374506e5f16ba373f63b1530bddda23191f54d75d8f7fc6b7e98cf\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 16 12:51:02.689243 containerd[1969]: time="2025-12-16T12:51:02.689194156Z" level=info msg="Container b9c63722573e7d4cbe093ec6202dc534de2c4d2c6ec3bb61cc68b43bc832d846: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:51:02.702717 containerd[1969]: time="2025-12-16T12:51:02.702574877Z" level=info msg="Container be075226b52f53a519e004eaee22d1b7024b2bcb30af6be32b97739aa3f67068: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:51:02.718101 containerd[1969]: time="2025-12-16T12:51:02.718004714Z" level=info msg="CreateContainer within sandbox \"51dc2255cb111bd00a51931e0e52a571c44b1a7d2d8918b53e9ccc83b2cf16af\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"b9c63722573e7d4cbe093ec6202dc534de2c4d2c6ec3bb61cc68b43bc832d846\"" Dec 16 12:51:02.719121 containerd[1969]: time="2025-12-16T12:51:02.719091175Z" level=info msg="StartContainer for \"b9c63722573e7d4cbe093ec6202dc534de2c4d2c6ec3bb61cc68b43bc832d846\"" Dec 16 12:51:02.719312 containerd[1969]: 
time="2025-12-16T12:51:02.719104011Z" level=info msg="CreateContainer within sandbox \"d0b85cfa061bc9c7965f23f622b14c1e5b957df4edb5d3275bd4d1df2e31e61a\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"3c47bd76520a8ac1866f6c9f633d2738c9e593dded74147bf80c115efccb6db7\"" Dec 16 12:51:02.725953 containerd[1969]: time="2025-12-16T12:51:02.725566171Z" level=info msg="StartContainer for \"3c47bd76520a8ac1866f6c9f633d2738c9e593dded74147bf80c115efccb6db7\"" Dec 16 12:51:02.727262 containerd[1969]: time="2025-12-16T12:51:02.727216421Z" level=info msg="connecting to shim 3c47bd76520a8ac1866f6c9f633d2738c9e593dded74147bf80c115efccb6db7" address="unix:///run/containerd/s/08628c4ec0ed5992f092ca9d4ca4cf49015e2f5d63a4fca059f3ed6cb02bea45" protocol=ttrpc version=3 Dec 16 12:51:02.727956 containerd[1969]: time="2025-12-16T12:51:02.727225376Z" level=info msg="connecting to shim b9c63722573e7d4cbe093ec6202dc534de2c4d2c6ec3bb61cc68b43bc832d846" address="unix:///run/containerd/s/2e21a8e396de733680d4baae5d95f7bf589b8ad9391f9d79495d569123797f28" protocol=ttrpc version=3 Dec 16 12:51:02.730353 containerd[1969]: time="2025-12-16T12:51:02.730317532Z" level=info msg="CreateContainer within sandbox \"eb3deb725e374506e5f16ba373f63b1530bddda23191f54d75d8f7fc6b7e98cf\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"be075226b52f53a519e004eaee22d1b7024b2bcb30af6be32b97739aa3f67068\"" Dec 16 12:51:02.731254 containerd[1969]: time="2025-12-16T12:51:02.731226219Z" level=info msg="StartContainer for \"be075226b52f53a519e004eaee22d1b7024b2bcb30af6be32b97739aa3f67068\"" Dec 16 12:51:02.734536 containerd[1969]: time="2025-12-16T12:51:02.734440870Z" level=info msg="connecting to shim be075226b52f53a519e004eaee22d1b7024b2bcb30af6be32b97739aa3f67068" address="unix:///run/containerd/s/67e5560b8eabc64d1d8fda1e469d862b9c7c952b00a0e59467b162f3f3bbec4e" protocol=ttrpc version=3 Dec 16 12:51:02.760143 kubelet[2912]: E1216 12:51:02.760094 2912 
reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.17.11:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-17-11&limit=500&resourceVersion=0\": dial tcp 172.31.17.11:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 16 12:51:02.761374 systemd[1]: Started cri-containerd-b9c63722573e7d4cbe093ec6202dc534de2c4d2c6ec3bb61cc68b43bc832d846.scope - libcontainer container b9c63722573e7d4cbe093ec6202dc534de2c4d2c6ec3bb61cc68b43bc832d846. Dec 16 12:51:02.775189 systemd[1]: Started cri-containerd-3c47bd76520a8ac1866f6c9f633d2738c9e593dded74147bf80c115efccb6db7.scope - libcontainer container 3c47bd76520a8ac1866f6c9f633d2738c9e593dded74147bf80c115efccb6db7. Dec 16 12:51:02.791600 systemd[1]: Started cri-containerd-be075226b52f53a519e004eaee22d1b7024b2bcb30af6be32b97739aa3f67068.scope - libcontainer container be075226b52f53a519e004eaee22d1b7024b2bcb30af6be32b97739aa3f67068. Dec 16 12:51:02.795000 audit: BPF prog-id=108 op=LOAD Dec 16 12:51:02.796000 audit: BPF prog-id=109 op=LOAD Dec 16 12:51:02.796000 audit[3092]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2986 pid=3092 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:51:02.796000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6239633633373232353733653764346362653039336563363230326463 Dec 16 12:51:02.796000 audit: BPF prog-id=109 op=UNLOAD Dec 16 12:51:02.796000 audit[3092]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2986 pid=3092 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:51:02.796000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6239633633373232353733653764346362653039336563363230326463 Dec 16 12:51:02.797000 audit: BPF prog-id=110 op=LOAD Dec 16 12:51:02.797000 audit[3092]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2986 pid=3092 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:51:02.797000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6239633633373232353733653764346362653039336563363230326463 Dec 16 12:51:02.797000 audit: BPF prog-id=111 op=LOAD Dec 16 12:51:02.797000 audit[3092]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=2986 pid=3092 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:51:02.797000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6239633633373232353733653764346362653039336563363230326463 Dec 16 12:51:02.797000 audit: BPF prog-id=111 op=UNLOAD Dec 16 12:51:02.797000 audit[3092]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2986 pid=3092 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:51:02.797000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6239633633373232353733653764346362653039336563363230326463 Dec 16 12:51:02.797000 audit: BPF prog-id=110 op=UNLOAD Dec 16 12:51:02.797000 audit[3092]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2986 pid=3092 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:51:02.797000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6239633633373232353733653764346362653039336563363230326463 Dec 16 12:51:02.800603 kubelet[2912]: E1216 12:51:02.799730 2912 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.17.11:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.17.11:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 16 12:51:02.797000 audit: BPF prog-id=112 op=LOAD Dec 16 12:51:02.797000 audit[3092]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=2986 pid=3092 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:51:02.797000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6239633633373232353733653764346362653039336563363230326463 Dec 16 12:51:02.808000 audit: BPF prog-id=113 op=LOAD Dec 16 12:51:02.809000 audit: BPF prog-id=114 op=LOAD Dec 16 12:51:02.809000 audit[3091]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2960 pid=3091 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:51:02.809000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3363343762643736353230613861633138363666366339663633336432 Dec 16 12:51:02.810000 audit: BPF prog-id=114 op=UNLOAD Dec 16 12:51:02.810000 audit[3091]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2960 pid=3091 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:51:02.810000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3363343762643736353230613861633138363666366339663633336432 Dec 16 12:51:02.810000 audit: BPF prog-id=115 op=LOAD Dec 16 12:51:02.810000 audit[3091]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2960 pid=3091 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" 
exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:51:02.810000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3363343762643736353230613861633138363666366339663633336432 Dec 16 12:51:02.810000 audit: BPF prog-id=116 op=LOAD Dec 16 12:51:02.810000 audit[3091]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2960 pid=3091 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:51:02.810000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3363343762643736353230613861633138363666366339663633336432 Dec 16 12:51:02.810000 audit: BPF prog-id=116 op=UNLOAD Dec 16 12:51:02.810000 audit[3091]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2960 pid=3091 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:51:02.810000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3363343762643736353230613861633138363666366339663633336432 Dec 16 12:51:02.810000 audit: BPF prog-id=115 op=UNLOAD Dec 16 12:51:02.810000 audit[3091]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2960 pid=3091 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:51:02.810000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3363343762643736353230613861633138363666366339663633336432 Dec 16 12:51:02.810000 audit: BPF prog-id=117 op=LOAD Dec 16 12:51:02.810000 audit[3091]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2960 pid=3091 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:51:02.810000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3363343762643736353230613861633138363666366339663633336432 Dec 16 12:51:02.823000 audit: BPF prog-id=118 op=LOAD Dec 16 12:51:02.823000 audit: BPF prog-id=119 op=LOAD Dec 16 12:51:02.823000 audit[3096]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8238 a2=98 a3=0 items=0 ppid=2993 pid=3096 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:51:02.823000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6265303735323236623532663533613531396530303465616565323264 Dec 16 12:51:02.824000 audit: BPF prog-id=119 op=UNLOAD Dec 16 12:51:02.824000 audit[3096]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 
items=0 ppid=2993 pid=3096 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:51:02.824000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6265303735323236623532663533613531396530303465616565323264 Dec 16 12:51:02.824000 audit: BPF prog-id=120 op=LOAD Dec 16 12:51:02.824000 audit[3096]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=2993 pid=3096 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:51:02.824000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6265303735323236623532663533613531396530303465616565323264 Dec 16 12:51:02.824000 audit: BPF prog-id=121 op=LOAD Dec 16 12:51:02.824000 audit[3096]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=2993 pid=3096 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:51:02.824000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6265303735323236623532663533613531396530303465616565323264 Dec 16 12:51:02.824000 audit: BPF prog-id=121 op=UNLOAD Dec 16 12:51:02.824000 audit[3096]: SYSCALL arch=c000003e 
syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2993 pid=3096 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:51:02.824000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6265303735323236623532663533613531396530303465616565323264 Dec 16 12:51:02.824000 audit: BPF prog-id=120 op=UNLOAD Dec 16 12:51:02.824000 audit[3096]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2993 pid=3096 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:51:02.824000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6265303735323236623532663533613531396530303465616565323264 Dec 16 12:51:02.824000 audit: BPF prog-id=122 op=LOAD Dec 16 12:51:02.824000 audit[3096]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 ppid=2993 pid=3096 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:51:02.824000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6265303735323236623532663533613531396530303465616565323264 Dec 16 12:51:02.908086 containerd[1969]: time="2025-12-16T12:51:02.908040056Z" 
level=info msg="StartContainer for \"b9c63722573e7d4cbe093ec6202dc534de2c4d2c6ec3bb61cc68b43bc832d846\" returns successfully" Dec 16 12:51:02.915801 containerd[1969]: time="2025-12-16T12:51:02.915761307Z" level=info msg="StartContainer for \"3c47bd76520a8ac1866f6c9f633d2738c9e593dded74147bf80c115efccb6db7\" returns successfully" Dec 16 12:51:02.937710 kubelet[2912]: E1216 12:51:02.937655 2912 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.17.11:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.17.11:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 16 12:51:02.976515 containerd[1969]: time="2025-12-16T12:51:02.976398244Z" level=info msg="StartContainer for \"be075226b52f53a519e004eaee22d1b7024b2bcb30af6be32b97739aa3f67068\" returns successfully" Dec 16 12:51:02.995161 kubelet[2912]: E1216 12:51:02.995112 2912 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.17.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-17-11?timeout=10s\": dial tcp 172.31.17.11:6443: connect: connection refused" interval="1.6s" Dec 16 12:51:03.026102 kubelet[2912]: E1216 12:51:03.026041 2912 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.17.11:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.17.11:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 16 12:51:03.255298 kubelet[2912]: I1216 12:51:03.255194 2912 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-17-11" Dec 16 12:51:03.255992 kubelet[2912]: E1216 12:51:03.255956 2912 kubelet_node_status.go:107] "Unable to register node with API server" err="Post 
\"https://172.31.17.11:6443/api/v1/nodes\": dial tcp 172.31.17.11:6443: connect: connection refused" node="ip-172-31-17-11" Dec 16 12:51:03.434267 kubelet[2912]: E1216 12:51:03.434222 2912 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.17.11:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.17.11:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Dec 16 12:51:03.700424 kubelet[2912]: E1216 12:51:03.700064 2912 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-17-11\" not found" node="ip-172-31-17-11" Dec 16 12:51:03.707589 kubelet[2912]: E1216 12:51:03.707556 2912 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-17-11\" not found" node="ip-172-31-17-11" Dec 16 12:51:03.709420 kubelet[2912]: E1216 12:51:03.709391 2912 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-17-11\" not found" node="ip-172-31-17-11" Dec 16 12:51:04.712724 kubelet[2912]: E1216 12:51:04.711577 2912 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-17-11\" not found" node="ip-172-31-17-11" Dec 16 12:51:04.713179 kubelet[2912]: E1216 12:51:04.713019 2912 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-17-11\" not found" node="ip-172-31-17-11" Dec 16 12:51:04.713318 kubelet[2912]: E1216 12:51:04.713294 2912 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-17-11\" not found" node="ip-172-31-17-11" Dec 16 12:51:04.860424 kubelet[2912]: I1216 12:51:04.860391 2912 
kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-17-11" Dec 16 12:51:05.715323 kubelet[2912]: E1216 12:51:05.715101 2912 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-17-11\" not found" node="ip-172-31-17-11" Dec 16 12:51:05.717106 kubelet[2912]: E1216 12:51:05.717076 2912 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-17-11\" not found" node="ip-172-31-17-11" Dec 16 12:51:05.718455 kubelet[2912]: E1216 12:51:05.718421 2912 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-17-11\" not found" node="ip-172-31-17-11" Dec 16 12:51:05.752930 kubelet[2912]: I1216 12:51:05.752877 2912 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-17-11" Dec 16 12:51:05.789656 kubelet[2912]: I1216 12:51:05.789564 2912 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-17-11" Dec 16 12:51:05.812674 kubelet[2912]: E1216 12:51:05.812582 2912 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-17-11\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-17-11" Dec 16 12:51:05.812674 kubelet[2912]: I1216 12:51:05.812662 2912 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-17-11" Dec 16 12:51:05.816038 kubelet[2912]: E1216 12:51:05.815882 2912 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-17-11\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-17-11" Dec 16 12:51:05.816038 kubelet[2912]: I1216 12:51:05.815978 2912 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-17-11" Dec 16 
12:51:05.821589 kubelet[2912]: E1216 12:51:05.821556 2912 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-17-11\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-17-11" Dec 16 12:51:06.473626 kubelet[2912]: I1216 12:51:06.473590 2912 apiserver.go:52] "Watching apiserver" Dec 16 12:51:06.485681 kubelet[2912]: I1216 12:51:06.485635 2912 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Dec 16 12:51:06.713146 kubelet[2912]: I1216 12:51:06.713115 2912 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-17-11" Dec 16 12:51:07.949040 systemd[1]: Reload requested from client PID 3193 ('systemctl') (unit session-8.scope)... Dec 16 12:51:07.949059 systemd[1]: Reloading... Dec 16 12:51:08.062013 zram_generator::config[3240]: No configuration found. Dec 16 12:51:08.334377 systemd[1]: Reloading finished in 384 ms. Dec 16 12:51:08.360392 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 12:51:08.380468 systemd[1]: kubelet.service: Deactivated successfully. Dec 16 12:51:08.380817 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 12:51:08.383217 kernel: kauditd_printk_skb: 212 callbacks suppressed Dec 16 12:51:08.383290 kernel: audit: type=1131 audit(1765889468.379:416): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:51:08.379000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:51:08.380940 systemd[1]: kubelet.service: Consumed 1.397s CPU time, 127.8M memory peak. 
Dec 16 12:51:08.389535 kernel: audit: type=1334 audit(1765889468.386:417): prog-id=123 op=LOAD Dec 16 12:51:08.386000 audit: BPF prog-id=123 op=LOAD Dec 16 12:51:08.386246 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 12:51:08.386000 audit: BPF prog-id=90 op=UNLOAD Dec 16 12:51:08.391783 kernel: audit: type=1334 audit(1765889468.386:418): prog-id=90 op=UNLOAD Dec 16 12:51:08.386000 audit: BPF prog-id=124 op=LOAD Dec 16 12:51:08.386000 audit: BPF prog-id=125 op=LOAD Dec 16 12:51:08.396664 kernel: audit: type=1334 audit(1765889468.386:419): prog-id=124 op=LOAD Dec 16 12:51:08.396726 kernel: audit: type=1334 audit(1765889468.386:420): prog-id=125 op=LOAD Dec 16 12:51:08.386000 audit: BPF prog-id=91 op=UNLOAD Dec 16 12:51:08.386000 audit: BPF prog-id=92 op=UNLOAD Dec 16 12:51:08.400419 kernel: audit: type=1334 audit(1765889468.386:421): prog-id=91 op=UNLOAD Dec 16 12:51:08.400483 kernel: audit: type=1334 audit(1765889468.386:422): prog-id=92 op=UNLOAD Dec 16 12:51:08.400518 kernel: audit: type=1334 audit(1765889468.387:423): prog-id=126 op=LOAD Dec 16 12:51:08.387000 audit: BPF prog-id=126 op=LOAD Dec 16 12:51:08.401580 kernel: audit: type=1334 audit(1765889468.387:424): prog-id=75 op=UNLOAD Dec 16 12:51:08.387000 audit: BPF prog-id=75 op=UNLOAD Dec 16 12:51:08.403974 kernel: audit: type=1334 audit(1765889468.389:425): prog-id=127 op=LOAD Dec 16 12:51:08.389000 audit: BPF prog-id=127 op=LOAD Dec 16 12:51:08.389000 audit: BPF prog-id=128 op=LOAD Dec 16 12:51:08.389000 audit: BPF prog-id=88 op=UNLOAD Dec 16 12:51:08.389000 audit: BPF prog-id=89 op=UNLOAD Dec 16 12:51:08.392000 audit: BPF prog-id=129 op=LOAD Dec 16 12:51:08.392000 audit: BPF prog-id=76 op=UNLOAD Dec 16 12:51:08.392000 audit: BPF prog-id=130 op=LOAD Dec 16 12:51:08.392000 audit: BPF prog-id=131 op=LOAD Dec 16 12:51:08.392000 audit: BPF prog-id=77 op=UNLOAD Dec 16 12:51:08.392000 audit: BPF prog-id=78 op=UNLOAD Dec 16 12:51:08.393000 audit: BPF prog-id=132 op=LOAD Dec 
16 12:51:08.393000 audit: BPF prog-id=73 op=UNLOAD Dec 16 12:51:08.395000 audit: BPF prog-id=133 op=LOAD Dec 16 12:51:08.395000 audit: BPF prog-id=79 op=UNLOAD Dec 16 12:51:08.395000 audit: BPF prog-id=134 op=LOAD Dec 16 12:51:08.395000 audit: BPF prog-id=135 op=LOAD Dec 16 12:51:08.395000 audit: BPF prog-id=80 op=UNLOAD Dec 16 12:51:08.395000 audit: BPF prog-id=81 op=UNLOAD Dec 16 12:51:08.396000 audit: BPF prog-id=136 op=LOAD Dec 16 12:51:08.396000 audit: BPF prog-id=74 op=UNLOAD Dec 16 12:51:08.404000 audit: BPF prog-id=137 op=LOAD Dec 16 12:51:08.404000 audit: BPF prog-id=85 op=UNLOAD Dec 16 12:51:08.404000 audit: BPF prog-id=138 op=LOAD Dec 16 12:51:08.404000 audit: BPF prog-id=139 op=LOAD Dec 16 12:51:08.404000 audit: BPF prog-id=86 op=UNLOAD Dec 16 12:51:08.404000 audit: BPF prog-id=87 op=UNLOAD Dec 16 12:51:08.405000 audit: BPF prog-id=140 op=LOAD Dec 16 12:51:08.405000 audit: BPF prog-id=82 op=UNLOAD Dec 16 12:51:08.405000 audit: BPF prog-id=141 op=LOAD Dec 16 12:51:08.405000 audit: BPF prog-id=142 op=LOAD Dec 16 12:51:08.405000 audit: BPF prog-id=83 op=UNLOAD Dec 16 12:51:08.405000 audit: BPF prog-id=84 op=UNLOAD Dec 16 12:51:08.743000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:51:08.743718 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 12:51:08.755290 (kubelet)[3300]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 16 12:51:08.817775 kubelet[3300]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Dec 16 12:51:08.817775 kubelet[3300]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 16 12:51:08.817775 kubelet[3300]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 12:51:08.818135 kubelet[3300]: I1216 12:51:08.817844 3300 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 16 12:51:08.831591 kubelet[3300]: I1216 12:51:08.831556 3300 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Dec 16 12:51:08.832143 kubelet[3300]: I1216 12:51:08.831770 3300 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 16 12:51:08.832477 kubelet[3300]: I1216 12:51:08.832463 3300 server.go:956] "Client rotation is on, will bootstrap in background" Dec 16 12:51:08.837899 kubelet[3300]: I1216 12:51:08.837872 3300 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Dec 16 12:51:08.844540 kubelet[3300]: I1216 12:51:08.844500 3300 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 16 12:51:08.857188 kubelet[3300]: I1216 12:51:08.857151 3300 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 16 12:51:08.861466 kubelet[3300]: I1216 12:51:08.861420 3300 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 16 12:51:08.861752 kubelet[3300]: I1216 12:51:08.861712 3300 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 16 12:51:08.861933 kubelet[3300]: I1216 12:51:08.861743 3300 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-17-11","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 16 12:51:08.861933 kubelet[3300]: I1216 12:51:08.861896 3300 topology_manager.go:138] "Creating topology manager with none policy" Dec 16 
12:51:08.862124 kubelet[3300]: I1216 12:51:08.861947 3300 container_manager_linux.go:303] "Creating device plugin manager" Dec 16 12:51:08.863734 kubelet[3300]: I1216 12:51:08.863548 3300 state_mem.go:36] "Initialized new in-memory state store" Dec 16 12:51:08.865373 kubelet[3300]: I1216 12:51:08.865340 3300 kubelet.go:480] "Attempting to sync node with API server" Dec 16 12:51:08.865466 kubelet[3300]: I1216 12:51:08.865391 3300 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 16 12:51:08.865466 kubelet[3300]: I1216 12:51:08.865415 3300 kubelet.go:386] "Adding apiserver pod source" Dec 16 12:51:08.865466 kubelet[3300]: I1216 12:51:08.865429 3300 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 16 12:51:08.869944 kubelet[3300]: I1216 12:51:08.869631 3300 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Dec 16 12:51:08.871334 kubelet[3300]: I1216 12:51:08.871133 3300 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Dec 16 12:51:08.886663 kubelet[3300]: I1216 12:51:08.886643 3300 watchdog_linux.go:99] "Systemd watchdog is not enabled" Dec 16 12:51:08.886817 kubelet[3300]: I1216 12:51:08.886810 3300 server.go:1289] "Started kubelet" Dec 16 12:51:08.889532 kubelet[3300]: I1216 12:51:08.889495 3300 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Dec 16 12:51:08.892930 kubelet[3300]: I1216 12:51:08.892890 3300 server.go:317] "Adding debug handlers to kubelet server" Dec 16 12:51:08.899808 kubelet[3300]: I1216 12:51:08.899031 3300 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 16 12:51:08.900275 kubelet[3300]: I1216 12:51:08.900260 3300 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 16 
12:51:08.900372 kubelet[3300]: I1216 12:51:08.899614 3300 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 16 12:51:08.901497 kubelet[3300]: I1216 12:51:08.899430 3300 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 16 12:51:08.904786 kubelet[3300]: I1216 12:51:08.904768 3300 volume_manager.go:297] "Starting Kubelet Volume Manager" Dec 16 12:51:08.906419 kubelet[3300]: I1216 12:51:08.906399 3300 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Dec 16 12:51:08.906611 kubelet[3300]: I1216 12:51:08.906602 3300 reconciler.go:26] "Reconciler: start to sync state" Dec 16 12:51:08.912226 kubelet[3300]: I1216 12:51:08.911688 3300 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 16 12:51:08.916199 kubelet[3300]: I1216 12:51:08.916156 3300 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Dec 16 12:51:08.918937 kubelet[3300]: E1216 12:51:08.917431 3300 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 16 12:51:08.920394 kubelet[3300]: I1216 12:51:08.920319 3300 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Dec 16 12:51:08.920394 kubelet[3300]: I1216 12:51:08.920339 3300 status_manager.go:230] "Starting to sync pod status with apiserver" Dec 16 12:51:08.920394 kubelet[3300]: I1216 12:51:08.920372 3300 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Dec 16 12:51:08.920394 kubelet[3300]: I1216 12:51:08.920379 3300 kubelet.go:2436] "Starting kubelet main sync loop"
Dec 16 12:51:08.920564 kubelet[3300]: E1216 12:51:08.920435 3300 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Dec 16 12:51:08.927568 kubelet[3300]: I1216 12:51:08.927541 3300 factory.go:223] Registration of the containerd container factory successfully
Dec 16 12:51:08.927675 kubelet[3300]: I1216 12:51:08.927652 3300 factory.go:223] Registration of the systemd container factory successfully
Dec 16 12:51:08.997202 kubelet[3300]: I1216 12:51:08.997100 3300 cpu_manager.go:221] "Starting CPU manager" policy="none"
Dec 16 12:51:08.997202 kubelet[3300]: I1216 12:51:08.997121 3300 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Dec 16 12:51:08.997202 kubelet[3300]: I1216 12:51:08.997145 3300 state_mem.go:36] "Initialized new in-memory state store"
Dec 16 12:51:08.998756 kubelet[3300]: I1216 12:51:08.997437 3300 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Dec 16 12:51:08.998756 kubelet[3300]: I1216 12:51:08.997451 3300 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Dec 16 12:51:08.998756 kubelet[3300]: I1216 12:51:08.997474 3300 policy_none.go:49] "None policy: Start"
Dec 16 12:51:08.998756 kubelet[3300]: I1216 12:51:08.997487 3300 memory_manager.go:186] "Starting memorymanager" policy="None"
Dec 16 12:51:08.998756 kubelet[3300]: I1216 12:51:08.997500 3300 state_mem.go:35] "Initializing new in-memory state store"
Dec 16 12:51:08.998756 kubelet[3300]: I1216 12:51:08.997622 3300 state_mem.go:75] "Updated machine memory state"
Dec 16 12:51:09.006739 kubelet[3300]: E1216 12:51:09.005325 3300 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Dec 16 12:51:09.006739 kubelet[3300]: I1216 12:51:09.005527 3300 eviction_manager.go:189] "Eviction manager: starting control loop"
Dec 16 12:51:09.006739 kubelet[3300]: I1216 12:51:09.005540 3300 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Dec 16 12:51:09.008603 kubelet[3300]: I1216 12:51:09.008425 3300 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Dec 16 12:51:09.012668 kubelet[3300]: E1216 12:51:09.010072 3300 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Dec 16 12:51:09.023654 kubelet[3300]: I1216 12:51:09.023630 3300 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-17-11"
Dec 16 12:51:09.024181 kubelet[3300]: I1216 12:51:09.023727 3300 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-17-11"
Dec 16 12:51:09.024378 kubelet[3300]: I1216 12:51:09.023841 3300 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-17-11"
Dec 16 12:51:09.036097 kubelet[3300]: E1216 12:51:09.035958 3300 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-17-11\" already exists" pod="kube-system/kube-scheduler-ip-172-31-17-11"
Dec 16 12:51:09.107809 kubelet[3300]: I1216 12:51:09.107544 3300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/48b5d68e38ac2798f613c6b3d1901cc2-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-17-11\" (UID: \"48b5d68e38ac2798f613c6b3d1901cc2\") " pod="kube-system/kube-controller-manager-ip-172-31-17-11"
Dec 16 12:51:09.107809 kubelet[3300]: I1216 12:51:09.107583 3300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/48b5d68e38ac2798f613c6b3d1901cc2-k8s-certs\") pod \"kube-controller-manager-ip-172-31-17-11\" (UID: \"48b5d68e38ac2798f613c6b3d1901cc2\") " pod="kube-system/kube-controller-manager-ip-172-31-17-11"
Dec 16 12:51:09.107809 kubelet[3300]: I1216 12:51:09.107601 3300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9b96b9d4630ace3041ba5aed5b1ed71d-kubeconfig\") pod \"kube-scheduler-ip-172-31-17-11\" (UID: \"9b96b9d4630ace3041ba5aed5b1ed71d\") " pod="kube-system/kube-scheduler-ip-172-31-17-11"
Dec 16 12:51:09.107809 kubelet[3300]: I1216 12:51:09.107634 3300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ee5d4f242643d8dc0642d18806c477af-ca-certs\") pod \"kube-apiserver-ip-172-31-17-11\" (UID: \"ee5d4f242643d8dc0642d18806c477af\") " pod="kube-system/kube-apiserver-ip-172-31-17-11"
Dec 16 12:51:09.107809 kubelet[3300]: I1216 12:51:09.107649 3300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ee5d4f242643d8dc0642d18806c477af-k8s-certs\") pod \"kube-apiserver-ip-172-31-17-11\" (UID: \"ee5d4f242643d8dc0642d18806c477af\") " pod="kube-system/kube-apiserver-ip-172-31-17-11"
Dec 16 12:51:09.108063 kubelet[3300]: I1216 12:51:09.107663 3300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ee5d4f242643d8dc0642d18806c477af-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-17-11\" (UID: \"ee5d4f242643d8dc0642d18806c477af\") " pod="kube-system/kube-apiserver-ip-172-31-17-11"
Dec 16 12:51:09.108063 kubelet[3300]: I1216 12:51:09.107679 3300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/48b5d68e38ac2798f613c6b3d1901cc2-kubeconfig\") pod \"kube-controller-manager-ip-172-31-17-11\" (UID: \"48b5d68e38ac2798f613c6b3d1901cc2\") " pod="kube-system/kube-controller-manager-ip-172-31-17-11"
Dec 16 12:51:09.108063 kubelet[3300]: I1216 12:51:09.107693 3300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/48b5d68e38ac2798f613c6b3d1901cc2-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-17-11\" (UID: \"48b5d68e38ac2798f613c6b3d1901cc2\") " pod="kube-system/kube-controller-manager-ip-172-31-17-11"
Dec 16 12:51:09.108063 kubelet[3300]: I1216 12:51:09.107711 3300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/48b5d68e38ac2798f613c6b3d1901cc2-ca-certs\") pod \"kube-controller-manager-ip-172-31-17-11\" (UID: \"48b5d68e38ac2798f613c6b3d1901cc2\") " pod="kube-system/kube-controller-manager-ip-172-31-17-11"
Dec 16 12:51:09.118991 kubelet[3300]: I1216 12:51:09.118560 3300 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-17-11"
Dec 16 12:51:09.128925 kubelet[3300]: I1216 12:51:09.128880 3300 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-17-11"
Dec 16 12:51:09.129422 kubelet[3300]: I1216 12:51:09.128987 3300 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-17-11"
Dec 16 12:51:09.867924 kubelet[3300]: I1216 12:51:09.867821 3300 apiserver.go:52] "Watching apiserver"
Dec 16 12:51:09.907118 kubelet[3300]: I1216 12:51:09.907078 3300 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Dec 16 12:51:09.972626 kubelet[3300]: I1216 12:51:09.970538 3300 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-17-11"
Dec 16 12:51:09.973574 kubelet[3300]: I1216 12:51:09.973557 3300 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-17-11"
Dec 16 12:51:09.985881 kubelet[3300]: E1216 12:51:09.985845 3300 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-17-11\" already exists" pod="kube-system/kube-apiserver-ip-172-31-17-11"
Dec 16 12:51:09.987354 kubelet[3300]: E1216 12:51:09.987300 3300 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-17-11\" already exists" pod="kube-system/kube-scheduler-ip-172-31-17-11"
Dec 16 12:51:10.005928 kubelet[3300]: I1216 12:51:10.005492 3300 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-17-11" podStartSLOduration=1.005479188 podStartE2EDuration="1.005479188s" podCreationTimestamp="2025-12-16 12:51:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:51:10.002312132 +0000 UTC m=+1.239414087" watchObservedRunningTime="2025-12-16 12:51:10.005479188 +0000 UTC m=+1.242581111"
Dec 16 12:51:10.016668 kubelet[3300]: I1216 12:51:10.016609 3300 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-17-11" podStartSLOduration=4.016594828 podStartE2EDuration="4.016594828s" podCreationTimestamp="2025-12-16 12:51:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:51:10.015830151 +0000 UTC m=+1.252932089" watchObservedRunningTime="2025-12-16 12:51:10.016594828 +0000 UTC m=+1.253696766"
Dec 16 12:51:10.025851 kubelet[3300]: I1216 12:51:10.025780 3300 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-17-11" podStartSLOduration=1.025763673 podStartE2EDuration="1.025763673s" podCreationTimestamp="2025-12-16 12:51:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:51:10.02557551 +0000 UTC m=+1.262677454" watchObservedRunningTime="2025-12-16 12:51:10.025763673 +0000 UTC m=+1.262865599"
Dec 16 12:51:14.807073 kubelet[3300]: I1216 12:51:14.807023 3300 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Dec 16 12:51:14.852697 containerd[1969]: time="2025-12-16T12:51:14.852637810Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Dec 16 12:51:14.853083 kubelet[3300]: I1216 12:51:14.853012 3300 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Dec 16 12:51:15.900698 systemd[1]: Created slice kubepods-besteffort-pod7759dca1_8e28_495b_9f0d_a8cc3a376e89.slice - libcontainer container kubepods-besteffort-pod7759dca1_8e28_495b_9f0d_a8cc3a376e89.slice.
Dec 16 12:51:15.953545 kubelet[3300]: I1216 12:51:15.953409 3300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/7759dca1-8e28-495b-9f0d-a8cc3a376e89-kube-proxy\") pod \"kube-proxy-pdr5c\" (UID: \"7759dca1-8e28-495b-9f0d-a8cc3a376e89\") " pod="kube-system/kube-proxy-pdr5c"
Dec 16 12:51:15.953545 kubelet[3300]: I1216 12:51:15.953452 3300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7759dca1-8e28-495b-9f0d-a8cc3a376e89-lib-modules\") pod \"kube-proxy-pdr5c\" (UID: \"7759dca1-8e28-495b-9f0d-a8cc3a376e89\") " pod="kube-system/kube-proxy-pdr5c"
Dec 16 12:51:15.953545 kubelet[3300]: I1216 12:51:15.953472 3300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7759dca1-8e28-495b-9f0d-a8cc3a376e89-xtables-lock\") pod \"kube-proxy-pdr5c\" (UID: \"7759dca1-8e28-495b-9f0d-a8cc3a376e89\") " pod="kube-system/kube-proxy-pdr5c"
Dec 16 12:51:15.953545 kubelet[3300]: I1216 12:51:15.953488 3300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sw4f5\" (UniqueName: \"kubernetes.io/projected/7759dca1-8e28-495b-9f0d-a8cc3a376e89-kube-api-access-sw4f5\") pod \"kube-proxy-pdr5c\" (UID: \"7759dca1-8e28-495b-9f0d-a8cc3a376e89\") " pod="kube-system/kube-proxy-pdr5c"
Dec 16 12:51:16.056814 update_engine[1934]: I20251216 12:51:16.056195 1934 update_attempter.cc:509] Updating boot flags...
Dec 16 12:51:16.072679 systemd[1]: Created slice kubepods-besteffort-pod43406cbd_716e_49b7_a43e_8de582acff72.slice - libcontainer container kubepods-besteffort-pod43406cbd_716e_49b7_a43e_8de582acff72.slice.
Dec 16 12:51:16.155109 kubelet[3300]: I1216 12:51:16.154957 3300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/43406cbd-716e-49b7-a43e-8de582acff72-var-lib-calico\") pod \"tigera-operator-7dcd859c48-jfjh9\" (UID: \"43406cbd-716e-49b7-a43e-8de582acff72\") " pod="tigera-operator/tigera-operator-7dcd859c48-jfjh9"
Dec 16 12:51:16.155109 kubelet[3300]: I1216 12:51:16.154995 3300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4g8jw\" (UniqueName: \"kubernetes.io/projected/43406cbd-716e-49b7-a43e-8de582acff72-kube-api-access-4g8jw\") pod \"tigera-operator-7dcd859c48-jfjh9\" (UID: \"43406cbd-716e-49b7-a43e-8de582acff72\") " pod="tigera-operator/tigera-operator-7dcd859c48-jfjh9"
Dec 16 12:51:16.208552 containerd[1969]: time="2025-12-16T12:51:16.208200503Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pdr5c,Uid:7759dca1-8e28-495b-9f0d-a8cc3a376e89,Namespace:kube-system,Attempt:0,}"
Dec 16 12:51:16.311165 containerd[1969]: time="2025-12-16T12:51:16.311066084Z" level=info msg="connecting to shim 2b22e44805869632237e3c50627f74c6dcc8e3d198fd5a23cb1e08416775c2a9" address="unix:///run/containerd/s/6a9f2fa9d723239588c85b0afce7637ffe96e3dc81c57e49a5e38b06ff515e6c" namespace=k8s.io protocol=ttrpc version=3
Dec 16 12:51:16.373178 systemd[1]: Started cri-containerd-2b22e44805869632237e3c50627f74c6dcc8e3d198fd5a23cb1e08416775c2a9.scope - libcontainer container 2b22e44805869632237e3c50627f74c6dcc8e3d198fd5a23cb1e08416775c2a9.
Dec 16 12:51:16.390776 containerd[1969]: time="2025-12-16T12:51:16.389961991Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-jfjh9,Uid:43406cbd-716e-49b7-a43e-8de582acff72,Namespace:tigera-operator,Attempt:0,}"
Dec 16 12:51:16.393000 audit: BPF prog-id=143 op=LOAD
Dec 16 12:51:16.396986 kernel: kauditd_printk_skb: 32 callbacks suppressed
Dec 16 12:51:16.397070 kernel: audit: type=1334 audit(1765889476.393:458): prog-id=143 op=LOAD
Dec 16 12:51:16.397000 audit: BPF prog-id=144 op=LOAD
Dec 16 12:51:16.400094 kernel: audit: type=1334 audit(1765889476.397:459): prog-id=144 op=LOAD
Dec 16 12:51:16.397000 audit[3473]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8238 a2=98 a3=0 items=0 ppid=3457 pid=3473 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:51:16.405986 kernel: audit: type=1300 audit(1765889476.397:459): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8238 a2=98 a3=0 items=0 ppid=3457 pid=3473 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:51:16.413123 kernel: audit: type=1327 audit(1765889476.397:459): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3262323265343438303538363936333232333765336335303632376637
Dec 16 12:51:16.397000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3262323265343438303538363936333232333765336335303632376637
Dec 16 12:51:16.428229 kernel: audit: type=1334 audit(1765889476.397:460): prog-id=144 op=UNLOAD
Dec 16 12:51:16.428315 kernel: audit: type=1300 audit(1765889476.397:460): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3457 pid=3473 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:51:16.428340 kernel: audit: type=1327 audit(1765889476.397:460): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3262323265343438303538363936333232333765336335303632376637
Dec 16 12:51:16.397000 audit: BPF prog-id=144 op=UNLOAD
Dec 16 12:51:16.397000 audit[3473]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3457 pid=3473 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:51:16.397000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3262323265343438303538363936333232333765336335303632376637
Dec 16 12:51:16.397000 audit: BPF prog-id=145 op=LOAD
Dec 16 12:51:16.443831 kernel: audit: type=1334 audit(1765889476.397:461): prog-id=145 op=LOAD
Dec 16 12:51:16.443973 kernel: audit: type=1300 audit(1765889476.397:461): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=3457 pid=3473 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:51:16.397000 audit[3473]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=3457 pid=3473 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:51:16.453022 kernel: audit: type=1327 audit(1765889476.397:461): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3262323265343438303538363936333232333765336335303632376637
Dec 16 12:51:16.397000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3262323265343438303538363936333232333765336335303632376637
Dec 16 12:51:16.397000 audit: BPF prog-id=146 op=LOAD
Dec 16 12:51:16.397000 audit[3473]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=3457 pid=3473 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:51:16.397000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3262323265343438303538363936333232333765336335303632376637
Dec 16 12:51:16.397000 audit: BPF prog-id=146 op=UNLOAD
Dec 16 12:51:16.397000 audit[3473]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3457 pid=3473 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:51:16.397000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3262323265343438303538363936333232333765336335303632376637
Dec 16 12:51:16.397000 audit: BPF prog-id=145 op=UNLOAD
Dec 16 12:51:16.397000 audit[3473]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3457 pid=3473 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:51:16.397000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3262323265343438303538363936333232333765336335303632376637
Dec 16 12:51:16.397000 audit: BPF prog-id=147 op=LOAD
Dec 16 12:51:16.397000 audit[3473]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 ppid=3457 pid=3473 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:51:16.397000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3262323265343438303538363936333232333765336335303632376637
Dec 16 12:51:16.538705 containerd[1969]: time="2025-12-16T12:51:16.538337947Z" level=info msg="connecting to shim 325c2f49796928a1ca5639ec88c425726fa0c14ba951cb845ce27bf321c2bdc9" address="unix:///run/containerd/s/aef6c923143fb985842ee0d4f7790b100270ece5d6261f5c2b1ead42be0bf4b5" namespace=k8s.io protocol=ttrpc version=3
Dec 16 12:51:16.550812 containerd[1969]: time="2025-12-16T12:51:16.550745304Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pdr5c,Uid:7759dca1-8e28-495b-9f0d-a8cc3a376e89,Namespace:kube-system,Attempt:0,} returns sandbox id \"2b22e44805869632237e3c50627f74c6dcc8e3d198fd5a23cb1e08416775c2a9\""
Dec 16 12:51:16.569143 containerd[1969]: time="2025-12-16T12:51:16.569095777Z" level=info msg="CreateContainer within sandbox \"2b22e44805869632237e3c50627f74c6dcc8e3d198fd5a23cb1e08416775c2a9\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Dec 16 12:51:16.593829 containerd[1969]: time="2025-12-16T12:51:16.593787816Z" level=info msg="Container 4979a32c5a0b61c0b4d385d56393b42525b615a1fc7034ad60182668fbd72ce0: CDI devices from CRI Config.CDIDevices: []"
Dec 16 12:51:16.605809 systemd[1]: Started cri-containerd-325c2f49796928a1ca5639ec88c425726fa0c14ba951cb845ce27bf321c2bdc9.scope - libcontainer container 325c2f49796928a1ca5639ec88c425726fa0c14ba951cb845ce27bf321c2bdc9.
Dec 16 12:51:16.631000 audit: BPF prog-id=148 op=LOAD
Dec 16 12:51:16.632000 audit: BPF prog-id=149 op=LOAD
Dec 16 12:51:16.632000 audit[3611]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8238 a2=98 a3=0 items=0 ppid=3586 pid=3611 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:51:16.632000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3332356332663439373936393238613163613536333965633838633432
Dec 16 12:51:16.632000 audit: BPF prog-id=149 op=UNLOAD
Dec 16 12:51:16.632000 audit[3611]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3586 pid=3611 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:51:16.632000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3332356332663439373936393238613163613536333965633838633432
Dec 16 12:51:16.632000 audit: BPF prog-id=150 op=LOAD
Dec 16 12:51:16.632000 audit[3611]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=3586 pid=3611 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:51:16.632000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3332356332663439373936393238613163613536333965633838633432
Dec 16 12:51:16.632000 audit: BPF prog-id=151 op=LOAD
Dec 16 12:51:16.632000 audit[3611]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=3586 pid=3611 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:51:16.632000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3332356332663439373936393238613163613536333965633838633432
Dec 16 12:51:16.632000 audit: BPF prog-id=151 op=UNLOAD
Dec 16 12:51:16.632000 audit[3611]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3586 pid=3611 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:51:16.632000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3332356332663439373936393238613163613536333965633838633432
Dec 16 12:51:16.632000 audit: BPF prog-id=150 op=UNLOAD
Dec 16 12:51:16.632000 audit[3611]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3586 pid=3611 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:51:16.632000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3332356332663439373936393238613163613536333965633838633432
Dec 16 12:51:16.633000 audit: BPF prog-id=152 op=LOAD
Dec 16 12:51:16.633000 audit[3611]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 ppid=3586 pid=3611 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:51:16.633000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3332356332663439373936393238613163613536333965633838633432
Dec 16 12:51:16.717971 containerd[1969]: time="2025-12-16T12:51:16.717837146Z" level=info msg="CreateContainer within sandbox \"2b22e44805869632237e3c50627f74c6dcc8e3d198fd5a23cb1e08416775c2a9\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"4979a32c5a0b61c0b4d385d56393b42525b615a1fc7034ad60182668fbd72ce0\""
Dec 16 12:51:16.724874 containerd[1969]: time="2025-12-16T12:51:16.724823608Z" level=info msg="StartContainer for \"4979a32c5a0b61c0b4d385d56393b42525b615a1fc7034ad60182668fbd72ce0\""
Dec 16 12:51:16.733084 containerd[1969]: time="2025-12-16T12:51:16.733043220Z" level=info msg="connecting to shim 4979a32c5a0b61c0b4d385d56393b42525b615a1fc7034ad60182668fbd72ce0" address="unix:///run/containerd/s/6a9f2fa9d723239588c85b0afce7637ffe96e3dc81c57e49a5e38b06ff515e6c" protocol=ttrpc version=3
Dec 16 12:51:16.802936 containerd[1969]: time="2025-12-16T12:51:16.801192306Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-jfjh9,Uid:43406cbd-716e-49b7-a43e-8de582acff72,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"325c2f49796928a1ca5639ec88c425726fa0c14ba951cb845ce27bf321c2bdc9\""
Dec 16 12:51:16.806551 containerd[1969]: time="2025-12-16T12:51:16.806420586Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\""
Dec 16 12:51:16.832195 systemd[1]: Started cri-containerd-4979a32c5a0b61c0b4d385d56393b42525b615a1fc7034ad60182668fbd72ce0.scope - libcontainer container 4979a32c5a0b61c0b4d385d56393b42525b615a1fc7034ad60182668fbd72ce0.
Dec 16 12:51:16.879000 audit: BPF prog-id=153 op=LOAD
Dec 16 12:51:16.879000 audit[3694]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3457 pid=3694 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:51:16.879000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3439373961333263356130623631633062346433383564353633393362
Dec 16 12:51:16.879000 audit: BPF prog-id=154 op=LOAD
Dec 16 12:51:16.879000 audit[3694]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=3457 pid=3694 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:51:16.879000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3439373961333263356130623631633062346433383564353633393362
Dec 16 12:51:16.879000 audit: BPF prog-id=154 op=UNLOAD
Dec 16 12:51:16.879000 audit[3694]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3457 pid=3694 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:51:16.879000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3439373961333263356130623631633062346433383564353633393362
Dec 16 12:51:16.879000 audit: BPF prog-id=153 op=UNLOAD
Dec 16 12:51:16.879000 audit[3694]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3457 pid=3694 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:51:16.879000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3439373961333263356130623631633062346433383564353633393362
Dec 16 12:51:16.879000 audit: BPF prog-id=155 op=LOAD
Dec 16 12:51:16.879000 audit[3694]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=3457 pid=3694 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:51:16.879000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3439373961333263356130623631633062346433383564353633393362
Dec 16 12:51:16.900415 containerd[1969]: time="2025-12-16T12:51:16.900369592Z" level=info msg="StartContainer for \"4979a32c5a0b61c0b4d385d56393b42525b615a1fc7034ad60182668fbd72ce0\" returns successfully" Dec 16 12:51:17.002216 kubelet[3300]: I1216 12:51:17.001809 3300 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-pdr5c" podStartSLOduration=2.001793403 podStartE2EDuration="2.001793403s" podCreationTimestamp="2025-12-16 12:51:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:51:17.001754929 +0000 UTC m=+8.238856870" watchObservedRunningTime="2025-12-16 12:51:17.001793403 +0000 UTC m=+8.238895338" Dec 16 12:51:17.561000 audit[3774]: NETFILTER_CFG table=mangle:54 family=2 entries=1 op=nft_register_chain pid=3774 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:51:17.561000 audit[3774]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffefa7e0d10 a2=0 a3=7ffefa7e0cfc items=0 ppid=3722 pid=3774 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:51:17.561000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Dec 16 12:51:17.563000 audit[3775]: NETFILTER_CFG table=nat:55 family=2 entries=1 op=nft_register_chain pid=3775 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:51:17.563000 audit[3775]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcc410e960 a2=0 a3=7ffcc410e94c items=0 ppid=3722 pid=3775 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:51:17.563000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Dec 16 12:51:17.564000 audit[3776]: NETFILTER_CFG table=filter:56 family=2 entries=1 op=nft_register_chain pid=3776 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:51:17.564000 audit[3776]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd73b214f0 a2=0 a3=7ffd73b214dc items=0 ppid=3722 pid=3776 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:51:17.564000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Dec 16 12:51:17.566000 audit[3777]: NETFILTER_CFG table=mangle:57 family=10 entries=1 op=nft_register_chain pid=3777 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:51:17.566000 audit[3777]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffff3a19340 a2=0 a3=7ffff3a1932c items=0 ppid=3722 pid=3777 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:51:17.566000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Dec 16 12:51:17.567000 audit[3778]: NETFILTER_CFG table=nat:58 family=10 entries=1 op=nft_register_chain pid=3778 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:51:17.567000 audit[3778]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe44a0d240 a2=0 a3=7ffe44a0d22c items=0 ppid=3722 pid=3778 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 
16 12:51:17.567000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Dec 16 12:51:17.569000 audit[3779]: NETFILTER_CFG table=filter:59 family=10 entries=1 op=nft_register_chain pid=3779 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:51:17.569000 audit[3779]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffeab249e20 a2=0 a3=7ffeab249e0c items=0 ppid=3722 pid=3779 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:51:17.569000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Dec 16 12:51:17.681000 audit[3782]: NETFILTER_CFG table=filter:60 family=2 entries=1 op=nft_register_chain pid=3782 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:51:17.681000 audit[3782]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7fff75405dc0 a2=0 a3=7fff75405dac items=0 ppid=3722 pid=3782 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:51:17.681000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Dec 16 12:51:17.687000 audit[3784]: NETFILTER_CFG table=filter:61 family=2 entries=1 op=nft_register_rule pid=3784 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:51:17.687000 audit[3784]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffc5f987b10 a2=0 a3=7ffc5f987afc items=0 ppid=3722 pid=3784 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:51:17.687000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Dec 16 12:51:17.692000 audit[3787]: NETFILTER_CFG table=filter:62 family=2 entries=1 op=nft_register_rule pid=3787 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:51:17.692000 audit[3787]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7fffe53d7f50 a2=0 a3=7fffe53d7f3c items=0 ppid=3722 pid=3787 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:51:17.692000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Dec 16 12:51:17.693000 audit[3788]: NETFILTER_CFG table=filter:63 family=2 entries=1 op=nft_register_chain pid=3788 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:51:17.693000 audit[3788]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffed85fef20 a2=0 a3=7ffed85fef0c items=0 ppid=3722 pid=3788 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:51:17.693000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Dec 16 12:51:17.696000 audit[3790]: NETFILTER_CFG table=filter:64 family=2 entries=1 op=nft_register_rule pid=3790 
subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:51:17.696000 audit[3790]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fffc381f230 a2=0 a3=7fffc381f21c items=0 ppid=3722 pid=3790 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:51:17.696000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Dec 16 12:51:17.698000 audit[3791]: NETFILTER_CFG table=filter:65 family=2 entries=1 op=nft_register_chain pid=3791 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:51:17.698000 audit[3791]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc991464f0 a2=0 a3=7ffc991464dc items=0 ppid=3722 pid=3791 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:51:17.698000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Dec 16 12:51:17.702000 audit[3793]: NETFILTER_CFG table=filter:66 family=2 entries=1 op=nft_register_rule pid=3793 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:51:17.702000 audit[3793]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffe797cef40 a2=0 a3=7ffe797cef2c items=0 ppid=3722 pid=3793 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:51:17.702000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Dec 16 12:51:17.706000 audit[3796]: NETFILTER_CFG table=filter:67 family=2 entries=1 op=nft_register_rule pid=3796 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:51:17.706000 audit[3796]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffef791d5b0 a2=0 a3=7ffef791d59c items=0 ppid=3722 pid=3796 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:51:17.706000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Dec 16 12:51:17.707000 audit[3797]: NETFILTER_CFG table=filter:68 family=2 entries=1 op=nft_register_chain pid=3797 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:51:17.707000 audit[3797]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdb989b230 a2=0 a3=7ffdb989b21c items=0 ppid=3722 pid=3797 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:51:17.707000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Dec 16 12:51:17.710000 audit[3799]: NETFILTER_CFG table=filter:69 family=2 entries=1 op=nft_register_rule pid=3799 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:51:17.710000 audit[3799]: SYSCALL arch=c000003e syscall=46 
success=yes exit=528 a0=3 a1=7ffe07cda6d0 a2=0 a3=7ffe07cda6bc items=0 ppid=3722 pid=3799 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:51:17.710000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Dec 16 12:51:17.711000 audit[3800]: NETFILTER_CFG table=filter:70 family=2 entries=1 op=nft_register_chain pid=3800 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:51:17.711000 audit[3800]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd33d4b1d0 a2=0 a3=7ffd33d4b1bc items=0 ppid=3722 pid=3800 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:51:17.711000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Dec 16 12:51:17.714000 audit[3802]: NETFILTER_CFG table=filter:71 family=2 entries=1 op=nft_register_rule pid=3802 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:51:17.714000 audit[3802]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffc5ed61f90 a2=0 a3=7ffc5ed61f7c items=0 ppid=3722 pid=3802 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:51:17.714000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Dec 16 12:51:17.719000 audit[3805]: NETFILTER_CFG table=filter:72 family=2 entries=1 op=nft_register_rule pid=3805 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:51:17.719000 audit[3805]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffd318b12d0 a2=0 a3=7ffd318b12bc items=0 ppid=3722 pid=3805 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:51:17.719000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Dec 16 12:51:17.723000 audit[3808]: NETFILTER_CFG table=filter:73 family=2 entries=1 op=nft_register_rule pid=3808 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:51:17.723000 audit[3808]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffdd54f9d40 a2=0 a3=7ffdd54f9d2c items=0 ppid=3722 pid=3808 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:51:17.723000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Dec 16 12:51:17.725000 audit[3809]: NETFILTER_CFG table=nat:74 family=2 entries=1 
op=nft_register_chain pid=3809 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:51:17.725000 audit[3809]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffcf125f100 a2=0 a3=7ffcf125f0ec items=0 ppid=3722 pid=3809 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:51:17.725000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Dec 16 12:51:17.729000 audit[3811]: NETFILTER_CFG table=nat:75 family=2 entries=1 op=nft_register_rule pid=3811 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:51:17.729000 audit[3811]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffda62a6300 a2=0 a3=7ffda62a62ec items=0 ppid=3722 pid=3811 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:51:17.729000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 16 12:51:17.733000 audit[3814]: NETFILTER_CFG table=nat:76 family=2 entries=1 op=nft_register_rule pid=3814 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:51:17.733000 audit[3814]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffdcae48500 a2=0 a3=7ffdcae484ec items=0 ppid=3722 pid=3814 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:51:17.733000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 16 12:51:17.735000 audit[3815]: NETFILTER_CFG table=nat:77 family=2 entries=1 op=nft_register_chain pid=3815 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:51:17.735000 audit[3815]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd64f967a0 a2=0 a3=7ffd64f9678c items=0 ppid=3722 pid=3815 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:51:17.735000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Dec 16 12:51:17.738000 audit[3817]: NETFILTER_CFG table=nat:78 family=2 entries=1 op=nft_register_rule pid=3817 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:51:17.738000 audit[3817]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7ffdfe6dcf80 a2=0 a3=7ffdfe6dcf6c items=0 ppid=3722 pid=3817 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:51:17.738000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Dec 16 12:51:17.765000 audit[3823]: NETFILTER_CFG table=filter:79 family=2 entries=8 op=nft_register_rule pid=3823 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:51:17.765000 audit[3823]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffdbc1c0e60 a2=0 a3=7ffdbc1c0e4c 
items=0 ppid=3722 pid=3823 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:51:17.765000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:51:17.773000 audit[3823]: NETFILTER_CFG table=nat:80 family=2 entries=14 op=nft_register_chain pid=3823 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:51:17.773000 audit[3823]: SYSCALL arch=c000003e syscall=46 success=yes exit=5508 a0=3 a1=7ffdbc1c0e60 a2=0 a3=7ffdbc1c0e4c items=0 ppid=3722 pid=3823 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:51:17.773000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:51:17.774000 audit[3828]: NETFILTER_CFG table=filter:81 family=10 entries=1 op=nft_register_chain pid=3828 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:51:17.774000 audit[3828]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffff32193e0 a2=0 a3=7ffff32193cc items=0 ppid=3722 pid=3828 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:51:17.774000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Dec 16 12:51:17.778000 audit[3830]: NETFILTER_CFG table=filter:82 family=10 entries=2 op=nft_register_chain pid=3830 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:51:17.778000 audit[3830]: SYSCALL arch=c000003e 
syscall=46 success=yes exit=836 a0=3 a1=7fff96235800 a2=0 a3=7fff962357ec items=0 ppid=3722 pid=3830 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:51:17.778000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Dec 16 12:51:17.782000 audit[3833]: NETFILTER_CFG table=filter:83 family=10 entries=1 op=nft_register_rule pid=3833 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:51:17.782000 audit[3833]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffee6ecddc0 a2=0 a3=7ffee6ecddac items=0 ppid=3722 pid=3833 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:51:17.782000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Dec 16 12:51:17.784000 audit[3834]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=3834 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:51:17.784000 audit[3834]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffbccf1630 a2=0 a3=7fffbccf161c items=0 ppid=3722 pid=3834 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:51:17.784000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Dec 16 12:51:17.787000 audit[3836]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=3836 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:51:17.787000 audit[3836]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fffc7031560 a2=0 a3=7fffc703154c items=0 ppid=3722 pid=3836 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:51:17.787000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Dec 16 12:51:17.788000 audit[3837]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_chain pid=3837 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:51:17.788000 audit[3837]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc4a84d450 a2=0 a3=7ffc4a84d43c items=0 ppid=3722 pid=3837 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:51:17.788000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Dec 16 12:51:17.791000 audit[3839]: NETFILTER_CFG table=filter:87 family=10 entries=1 op=nft_register_rule pid=3839 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:51:17.791000 audit[3839]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffea6762640 a2=0 a3=7ffea676262c items=0 ppid=3722 pid=3839 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:51:17.791000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Dec 16 12:51:17.795000 audit[3842]: NETFILTER_CFG table=filter:88 family=10 entries=2 op=nft_register_chain pid=3842 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:51:17.795000 audit[3842]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7ffe1671c970 a2=0 a3=7ffe1671c95c items=0 ppid=3722 pid=3842 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:51:17.795000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Dec 16 12:51:17.797000 audit[3843]: NETFILTER_CFG table=filter:89 family=10 entries=1 op=nft_register_chain pid=3843 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:51:17.797000 audit[3843]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff918fbaf0 a2=0 a3=7fff918fbadc items=0 ppid=3722 pid=3843 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:51:17.797000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Dec 16 12:51:17.800000 audit[3845]: NETFILTER_CFG 
table=filter:90 family=10 entries=1 op=nft_register_rule pid=3845 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:51:17.800000 audit[3845]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffe33c48a40 a2=0 a3=7ffe33c48a2c items=0 ppid=3722 pid=3845 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:51:17.800000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Dec 16 12:51:17.802000 audit[3846]: NETFILTER_CFG table=filter:91 family=10 entries=1 op=nft_register_chain pid=3846 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:51:17.802000 audit[3846]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc1d9802d0 a2=0 a3=7ffc1d9802bc items=0 ppid=3722 pid=3846 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:51:17.802000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Dec 16 12:51:17.805000 audit[3848]: NETFILTER_CFG table=filter:92 family=10 entries=1 op=nft_register_rule pid=3848 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:51:17.805000 audit[3848]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffd4681e180 a2=0 a3=7ffd4681e16c items=0 ppid=3722 pid=3848 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:51:17.805000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Dec 16 12:51:17.810000 audit[3851]: NETFILTER_CFG table=filter:93 family=10 entries=1 op=nft_register_rule pid=3851 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:51:17.810000 audit[3851]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff6f7145a0 a2=0 a3=7fff6f71458c items=0 ppid=3722 pid=3851 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:51:17.810000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Dec 16 12:51:17.815000 audit[3854]: NETFILTER_CFG table=filter:94 family=10 entries=1 op=nft_register_rule pid=3854 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:51:17.815000 audit[3854]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffcabe38c00 a2=0 a3=7ffcabe38bec items=0 ppid=3722 pid=3854 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:51:17.815000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Dec 16 12:51:17.817000 audit[3855]: NETFILTER_CFG table=nat:95 family=10 
entries=1 op=nft_register_chain pid=3855 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:51:17.817000 audit[3855]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fff8596d530 a2=0 a3=7fff8596d51c items=0 ppid=3722 pid=3855 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:51:17.817000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Dec 16 12:51:17.820000 audit[3857]: NETFILTER_CFG table=nat:96 family=10 entries=1 op=nft_register_rule pid=3857 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:51:17.820000 audit[3857]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffdef8e97c0 a2=0 a3=7ffdef8e97ac items=0 ppid=3722 pid=3857 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:51:17.820000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 16 12:51:17.824000 audit[3860]: NETFILTER_CFG table=nat:97 family=10 entries=1 op=nft_register_rule pid=3860 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:51:17.824000 audit[3860]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffea6cbde30 a2=0 a3=7ffea6cbde1c items=0 ppid=3722 pid=3860 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:51:17.824000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 16 12:51:17.826000 audit[3861]: NETFILTER_CFG table=nat:98 family=10 entries=1 op=nft_register_chain pid=3861 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:51:17.826000 audit[3861]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc79956250 a2=0 a3=7ffc7995623c items=0 ppid=3722 pid=3861 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:51:17.826000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Dec 16 12:51:17.828000 audit[3863]: NETFILTER_CFG table=nat:99 family=10 entries=2 op=nft_register_chain pid=3863 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:51:17.828000 audit[3863]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffe33547fb0 a2=0 a3=7ffe33547f9c items=0 ppid=3722 pid=3863 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:51:17.828000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Dec 16 12:51:17.830000 audit[3864]: NETFILTER_CFG table=filter:100 family=10 entries=1 op=nft_register_chain pid=3864 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:51:17.830000 audit[3864]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdff64bbf0 a2=0 
a3=7ffdff64bbdc items=0 ppid=3722 pid=3864 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:51:17.830000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Dec 16 12:51:17.833000 audit[3866]: NETFILTER_CFG table=filter:101 family=10 entries=1 op=nft_register_rule pid=3866 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:51:17.833000 audit[3866]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffc0829def0 a2=0 a3=7ffc0829dedc items=0 ppid=3722 pid=3866 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:51:17.833000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 16 12:51:17.837000 audit[3869]: NETFILTER_CFG table=filter:102 family=10 entries=1 op=nft_register_rule pid=3869 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:51:17.837000 audit[3869]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffdbe8d6de0 a2=0 a3=7ffdbe8d6dcc items=0 ppid=3722 pid=3869 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:51:17.837000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 16 12:51:17.841000 audit[3871]: NETFILTER_CFG table=filter:103 family=10 entries=3 op=nft_register_rule pid=3871 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Dec 16 12:51:17.841000 audit[3871]: SYSCALL 
arch=c000003e syscall=46 success=yes exit=2088 a0=3 a1=7ffd53f38630 a2=0 a3=7ffd53f3861c items=0 ppid=3722 pid=3871 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:51:17.841000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:51:17.841000 audit[3871]: NETFILTER_CFG table=nat:104 family=10 entries=7 op=nft_register_chain pid=3871 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Dec 16 12:51:17.841000 audit[3871]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7ffd53f38630 a2=0 a3=7ffd53f3861c items=0 ppid=3722 pid=3871 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:51:17.841000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:51:18.236226 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1828733945.mount: Deactivated successfully. 
Dec 16 12:51:19.257504 containerd[1969]: time="2025-12-16T12:51:19.257316543Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:51:19.258812 containerd[1969]: time="2025-12-16T12:51:19.258648639Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=23558205" Dec 16 12:51:19.259958 containerd[1969]: time="2025-12-16T12:51:19.259925812Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:51:19.262259 containerd[1969]: time="2025-12-16T12:51:19.262224962Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:51:19.262967 containerd[1969]: time="2025-12-16T12:51:19.262939490Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 2.456477912s" Dec 16 12:51:19.263064 containerd[1969]: time="2025-12-16T12:51:19.263051526Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Dec 16 12:51:19.268034 containerd[1969]: time="2025-12-16T12:51:19.267997834Z" level=info msg="CreateContainer within sandbox \"325c2f49796928a1ca5639ec88c425726fa0c14ba951cb845ce27bf321c2bdc9\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Dec 16 12:51:19.278776 containerd[1969]: time="2025-12-16T12:51:19.277566873Z" level=info msg="Container 
a4bfa757c7627ebe34453ec2b20043f45a2310416daa1bd854348e411628de25: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:51:19.283071 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3035920331.mount: Deactivated successfully. Dec 16 12:51:19.293359 containerd[1969]: time="2025-12-16T12:51:19.293291129Z" level=info msg="CreateContainer within sandbox \"325c2f49796928a1ca5639ec88c425726fa0c14ba951cb845ce27bf321c2bdc9\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"a4bfa757c7627ebe34453ec2b20043f45a2310416daa1bd854348e411628de25\"" Dec 16 12:51:19.294162 containerd[1969]: time="2025-12-16T12:51:19.294133493Z" level=info msg="StartContainer for \"a4bfa757c7627ebe34453ec2b20043f45a2310416daa1bd854348e411628de25\"" Dec 16 12:51:19.295297 containerd[1969]: time="2025-12-16T12:51:19.295265840Z" level=info msg="connecting to shim a4bfa757c7627ebe34453ec2b20043f45a2310416daa1bd854348e411628de25" address="unix:///run/containerd/s/aef6c923143fb985842ee0d4f7790b100270ece5d6261f5c2b1ead42be0bf4b5" protocol=ttrpc version=3 Dec 16 12:51:19.317295 systemd[1]: Started cri-containerd-a4bfa757c7627ebe34453ec2b20043f45a2310416daa1bd854348e411628de25.scope - libcontainer container a4bfa757c7627ebe34453ec2b20043f45a2310416daa1bd854348e411628de25. 
Dec 16 12:51:19.331000 audit: BPF prog-id=156 op=LOAD Dec 16 12:51:19.332000 audit: BPF prog-id=157 op=LOAD Dec 16 12:51:19.332000 audit[3880]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=3586 pid=3880 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:51:19.332000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6134626661373537633736323765626533343435336563326232303034 Dec 16 12:51:19.332000 audit: BPF prog-id=157 op=UNLOAD Dec 16 12:51:19.332000 audit[3880]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3586 pid=3880 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:51:19.332000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6134626661373537633736323765626533343435336563326232303034 Dec 16 12:51:19.332000 audit: BPF prog-id=158 op=LOAD Dec 16 12:51:19.332000 audit[3880]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=3586 pid=3880 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:51:19.332000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6134626661373537633736323765626533343435336563326232303034 Dec 16 12:51:19.332000 audit: BPF prog-id=159 op=LOAD Dec 16 12:51:19.332000 audit[3880]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=3586 pid=3880 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:51:19.332000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6134626661373537633736323765626533343435336563326232303034 Dec 16 12:51:19.332000 audit: BPF prog-id=159 op=UNLOAD Dec 16 12:51:19.332000 audit[3880]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3586 pid=3880 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:51:19.332000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6134626661373537633736323765626533343435336563326232303034 Dec 16 12:51:19.332000 audit: BPF prog-id=158 op=UNLOAD Dec 16 12:51:19.332000 audit[3880]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3586 pid=3880 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 
12:51:19.332000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6134626661373537633736323765626533343435336563326232303034 Dec 16 12:51:19.332000 audit: BPF prog-id=160 op=LOAD Dec 16 12:51:19.332000 audit[3880]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=3586 pid=3880 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:51:19.332000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6134626661373537633736323765626533343435336563326232303034 Dec 16 12:51:19.359404 containerd[1969]: time="2025-12-16T12:51:19.359366175Z" level=info msg="StartContainer for \"a4bfa757c7627ebe34453ec2b20043f45a2310416daa1bd854348e411628de25\" returns successfully" Dec 16 12:51:20.016229 kubelet[3300]: I1216 12:51:20.016139 3300 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-jfjh9" podStartSLOduration=1.557411837 podStartE2EDuration="4.016119444s" podCreationTimestamp="2025-12-16 12:51:16 +0000 UTC" firstStartedPulling="2025-12-16 12:51:16.805279966 +0000 UTC m=+8.042381903" lastFinishedPulling="2025-12-16 12:51:19.263987585 +0000 UTC m=+10.501089510" observedRunningTime="2025-12-16 12:51:20.015203708 +0000 UTC m=+11.252305654" watchObservedRunningTime="2025-12-16 12:51:20.016119444 +0000 UTC m=+11.253221391" Dec 16 12:51:26.055234 sudo[2323]: pam_unix(sudo:session): session closed for user root Dec 16 12:51:26.060700 kernel: kauditd_printk_skb: 224 callbacks suppressed Dec 16 12:51:26.060818 
kernel: audit: type=1106 audit(1765889486.054:538): pid=2323 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 12:51:26.054000 audit[2323]: USER_END pid=2323 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 12:51:26.054000 audit[2323]: CRED_DISP pid=2323 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 12:51:26.067933 kernel: audit: type=1104 audit(1765889486.054:539): pid=2323 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Dec 16 12:51:26.077703 sshd[2322]: Connection closed by 147.75.109.163 port 42784 Dec 16 12:51:26.078237 sshd-session[2318]: pam_unix(sshd:session): session closed for user core Dec 16 12:51:26.090936 kernel: audit: type=1106 audit(1765889486.082:540): pid=2318 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:51:26.082000 audit[2318]: USER_END pid=2318 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:51:26.092596 systemd[1]: sshd@6-172.31.17.11:22-147.75.109.163:42784.service: Deactivated successfully. Dec 16 12:51:26.097560 systemd[1]: session-8.scope: Deactivated successfully. Dec 16 12:51:26.082000 audit[2318]: CRED_DISP pid=2318 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:51:26.102950 kernel: audit: type=1104 audit(1765889486.082:541): pid=2318 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:51:26.102864 systemd[1]: session-8.scope: Consumed 5.045s CPU time, 152M memory peak. Dec 16 12:51:26.107291 systemd-logind[1926]: Session 8 logged out. Waiting for processes to exit. 
Dec 16 12:51:26.091000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-172.31.17.11:22-147.75.109.163:42784 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:51:26.112961 kernel: audit: type=1131 audit(1765889486.091:542): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-172.31.17.11:22-147.75.109.163:42784 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:51:26.113721 systemd-logind[1926]: Removed session 8. Dec 16 12:51:27.009000 audit[3963]: NETFILTER_CFG table=filter:105 family=2 entries=14 op=nft_register_rule pid=3963 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:51:27.009000 audit[3963]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffe28811b30 a2=0 a3=7ffe28811b1c items=0 ppid=3722 pid=3963 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:51:27.016744 kernel: audit: type=1325 audit(1765889487.009:543): table=filter:105 family=2 entries=14 op=nft_register_rule pid=3963 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:51:27.016829 kernel: audit: type=1300 audit(1765889487.009:543): arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffe28811b30 a2=0 a3=7ffe28811b1c items=0 ppid=3722 pid=3963 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:51:27.009000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:51:27.024941 kernel: audit: type=1327 audit(1765889487.009:543): 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:51:27.026000 audit[3963]: NETFILTER_CFG table=nat:106 family=2 entries=12 op=nft_register_rule pid=3963 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:51:27.030923 kernel: audit: type=1325 audit(1765889487.026:544): table=nat:106 family=2 entries=12 op=nft_register_rule pid=3963 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:51:27.026000 audit[3963]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe28811b30 a2=0 a3=0 items=0 ppid=3722 pid=3963 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:51:27.063935 kernel: audit: type=1300 audit(1765889487.026:544): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe28811b30 a2=0 a3=0 items=0 ppid=3722 pid=3963 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:51:27.026000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:51:27.099000 audit[3965]: NETFILTER_CFG table=filter:107 family=2 entries=15 op=nft_register_rule pid=3965 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:51:27.099000 audit[3965]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffce85192f0 a2=0 a3=7ffce85192dc items=0 ppid=3722 pid=3965 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:51:27.099000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:51:27.104000 audit[3965]: NETFILTER_CFG table=nat:108 family=2 entries=12 op=nft_register_rule pid=3965 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:51:27.104000 audit[3965]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffce85192f0 a2=0 a3=0 items=0 ppid=3722 pid=3965 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:51:27.104000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:51:32.562000 audit[3969]: NETFILTER_CFG table=filter:109 family=2 entries=17 op=nft_register_rule pid=3969 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:51:32.564411 kernel: kauditd_printk_skb: 7 callbacks suppressed Dec 16 12:51:32.564506 kernel: audit: type=1325 audit(1765889492.562:547): table=filter:109 family=2 entries=17 op=nft_register_rule pid=3969 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:51:32.562000 audit[3969]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffd634ab7c0 a2=0 a3=7ffd634ab7ac items=0 ppid=3722 pid=3969 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:51:32.569323 kernel: audit: type=1300 audit(1765889492.562:547): arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffd634ab7c0 a2=0 a3=7ffd634ab7ac items=0 ppid=3722 pid=3969 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 
16 12:51:32.562000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:51:32.574885 kernel: audit: type=1327 audit(1765889492.562:547): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:51:32.579000 audit[3969]: NETFILTER_CFG table=nat:110 family=2 entries=12 op=nft_register_rule pid=3969 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:51:32.585935 kernel: audit: type=1325 audit(1765889492.579:548): table=nat:110 family=2 entries=12 op=nft_register_rule pid=3969 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:51:32.586050 kernel: audit: type=1300 audit(1765889492.579:548): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd634ab7c0 a2=0 a3=0 items=0 ppid=3722 pid=3969 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:51:32.579000 audit[3969]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd634ab7c0 a2=0 a3=0 items=0 ppid=3722 pid=3969 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:51:32.579000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:51:32.596968 kernel: audit: type=1327 audit(1765889492.579:548): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:51:32.614000 audit[3971]: NETFILTER_CFG table=filter:111 family=2 entries=18 op=nft_register_rule pid=3971 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:51:32.614000 audit[3971]: 
SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffe1cd4f570 a2=0 a3=7ffe1cd4f55c items=0 ppid=3722 pid=3971 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:51:32.620843 kernel: audit: type=1325 audit(1765889492.614:549): table=filter:111 family=2 entries=18 op=nft_register_rule pid=3971 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:51:32.622035 kernel: audit: type=1300 audit(1765889492.614:549): arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffe1cd4f570 a2=0 a3=7ffe1cd4f55c items=0 ppid=3722 pid=3971 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:51:32.614000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:51:32.629955 kernel: audit: type=1327 audit(1765889492.614:549): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:51:32.625000 audit[3971]: NETFILTER_CFG table=nat:112 family=2 entries=12 op=nft_register_rule pid=3971 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:51:32.625000 audit[3971]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe1cd4f570 a2=0 a3=0 items=0 ppid=3722 pid=3971 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:51:32.625000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:51:32.633925 kernel: audit: type=1325 
audit(1765889492.625:550): table=nat:112 family=2 entries=12 op=nft_register_rule pid=3971 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:51:33.642000 audit[3973]: NETFILTER_CFG table=filter:113 family=2 entries=19 op=nft_register_rule pid=3973 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:51:33.642000 audit[3973]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7fff6b8f9f00 a2=0 a3=7fff6b8f9eec items=0 ppid=3722 pid=3973 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:51:33.642000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:51:33.650000 audit[3973]: NETFILTER_CFG table=nat:114 family=2 entries=12 op=nft_register_rule pid=3973 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:51:33.650000 audit[3973]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff6b8f9f00 a2=0 a3=0 items=0 ppid=3722 pid=3973 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:51:33.650000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:51:34.800000 audit[3976]: NETFILTER_CFG table=filter:115 family=2 entries=21 op=nft_register_rule pid=3976 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:51:34.800000 audit[3976]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffd53db9de0 a2=0 a3=7ffd53db9dcc items=0 ppid=3722 pid=3976 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:51:34.800000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:51:34.807000 audit[3976]: NETFILTER_CFG table=nat:116 family=2 entries=12 op=nft_register_rule pid=3976 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:51:34.807000 audit[3976]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd53db9de0 a2=0 a3=0 items=0 ppid=3722 pid=3976 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:51:34.807000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:51:34.859300 systemd[1]: Created slice kubepods-besteffort-pod3da16c2a_9fde_44a1_9f48_173bf0524018.slice - libcontainer container kubepods-besteffort-pod3da16c2a_9fde_44a1_9f48_173bf0524018.slice. 
Dec 16 12:51:34.882053 kubelet[3300]: I1216 12:51:34.881937 3300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c8c42\" (UniqueName: \"kubernetes.io/projected/3da16c2a-9fde-44a1-9f48-173bf0524018-kube-api-access-c8c42\") pod \"calico-typha-67fb684979-m6hdn\" (UID: \"3da16c2a-9fde-44a1-9f48-173bf0524018\") " pod="calico-system/calico-typha-67fb684979-m6hdn" Dec 16 12:51:34.882053 kubelet[3300]: I1216 12:51:34.881978 3300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3da16c2a-9fde-44a1-9f48-173bf0524018-tigera-ca-bundle\") pod \"calico-typha-67fb684979-m6hdn\" (UID: \"3da16c2a-9fde-44a1-9f48-173bf0524018\") " pod="calico-system/calico-typha-67fb684979-m6hdn" Dec 16 12:51:34.882053 kubelet[3300]: I1216 12:51:34.881996 3300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/3da16c2a-9fde-44a1-9f48-173bf0524018-typha-certs\") pod \"calico-typha-67fb684979-m6hdn\" (UID: \"3da16c2a-9fde-44a1-9f48-173bf0524018\") " pod="calico-system/calico-typha-67fb684979-m6hdn" Dec 16 12:51:35.079687 systemd[1]: Created slice kubepods-besteffort-pod07b6e20c_0806_4c86_818d_0abb5ca37e27.slice - libcontainer container kubepods-besteffort-pod07b6e20c_0806_4c86_818d_0abb5ca37e27.slice. 
Dec 16 12:51:35.084285 kubelet[3300]: I1216 12:51:35.084252 3300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/07b6e20c-0806-4c86-818d-0abb5ca37e27-node-certs\") pod \"calico-node-qsttg\" (UID: \"07b6e20c-0806-4c86-818d-0abb5ca37e27\") " pod="calico-system/calico-node-qsttg" Dec 16 12:51:35.085035 kubelet[3300]: I1216 12:51:35.084297 3300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/07b6e20c-0806-4c86-818d-0abb5ca37e27-tigera-ca-bundle\") pod \"calico-node-qsttg\" (UID: \"07b6e20c-0806-4c86-818d-0abb5ca37e27\") " pod="calico-system/calico-node-qsttg" Dec 16 12:51:35.085035 kubelet[3300]: I1216 12:51:35.084324 3300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/07b6e20c-0806-4c86-818d-0abb5ca37e27-cni-bin-dir\") pod \"calico-node-qsttg\" (UID: \"07b6e20c-0806-4c86-818d-0abb5ca37e27\") " pod="calico-system/calico-node-qsttg" Dec 16 12:51:35.085035 kubelet[3300]: I1216 12:51:35.084348 3300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/07b6e20c-0806-4c86-818d-0abb5ca37e27-policysync\") pod \"calico-node-qsttg\" (UID: \"07b6e20c-0806-4c86-818d-0abb5ca37e27\") " pod="calico-system/calico-node-qsttg" Dec 16 12:51:35.085035 kubelet[3300]: I1216 12:51:35.084369 3300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7fjnr\" (UniqueName: \"kubernetes.io/projected/07b6e20c-0806-4c86-818d-0abb5ca37e27-kube-api-access-7fjnr\") pod \"calico-node-qsttg\" (UID: \"07b6e20c-0806-4c86-818d-0abb5ca37e27\") " pod="calico-system/calico-node-qsttg" Dec 16 12:51:35.085035 kubelet[3300]: I1216 12:51:35.084395 3300 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/07b6e20c-0806-4c86-818d-0abb5ca37e27-cni-log-dir\") pod \"calico-node-qsttg\" (UID: \"07b6e20c-0806-4c86-818d-0abb5ca37e27\") " pod="calico-system/calico-node-qsttg" Dec 16 12:51:35.085428 kubelet[3300]: I1216 12:51:35.084422 3300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/07b6e20c-0806-4c86-818d-0abb5ca37e27-cni-net-dir\") pod \"calico-node-qsttg\" (UID: \"07b6e20c-0806-4c86-818d-0abb5ca37e27\") " pod="calico-system/calico-node-qsttg" Dec 16 12:51:35.085428 kubelet[3300]: I1216 12:51:35.084451 3300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/07b6e20c-0806-4c86-818d-0abb5ca37e27-var-run-calico\") pod \"calico-node-qsttg\" (UID: \"07b6e20c-0806-4c86-818d-0abb5ca37e27\") " pod="calico-system/calico-node-qsttg" Dec 16 12:51:35.085428 kubelet[3300]: I1216 12:51:35.084477 3300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/07b6e20c-0806-4c86-818d-0abb5ca37e27-flexvol-driver-host\") pod \"calico-node-qsttg\" (UID: \"07b6e20c-0806-4c86-818d-0abb5ca37e27\") " pod="calico-system/calico-node-qsttg" Dec 16 12:51:35.085428 kubelet[3300]: I1216 12:51:35.084500 3300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/07b6e20c-0806-4c86-818d-0abb5ca37e27-lib-modules\") pod \"calico-node-qsttg\" (UID: \"07b6e20c-0806-4c86-818d-0abb5ca37e27\") " pod="calico-system/calico-node-qsttg" Dec 16 12:51:35.085428 kubelet[3300]: I1216 12:51:35.084524 3300 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/07b6e20c-0806-4c86-818d-0abb5ca37e27-var-lib-calico\") pod \"calico-node-qsttg\" (UID: \"07b6e20c-0806-4c86-818d-0abb5ca37e27\") " pod="calico-system/calico-node-qsttg" Dec 16 12:51:35.085578 kubelet[3300]: I1216 12:51:35.084548 3300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/07b6e20c-0806-4c86-818d-0abb5ca37e27-xtables-lock\") pod \"calico-node-qsttg\" (UID: \"07b6e20c-0806-4c86-818d-0abb5ca37e27\") " pod="calico-system/calico-node-qsttg" Dec 16 12:51:35.165504 containerd[1969]: time="2025-12-16T12:51:35.165322560Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-67fb684979-m6hdn,Uid:3da16c2a-9fde-44a1-9f48-173bf0524018,Namespace:calico-system,Attempt:0,}" Dec 16 12:51:35.195361 containerd[1969]: time="2025-12-16T12:51:35.195279138Z" level=info msg="connecting to shim 26e1ce466d3bcb46b09140bc93ca5cefc2bfa2e97419920539fc053eef994f23" address="unix:///run/containerd/s/46f91da131fe0673a610337815ef403fb0480e3949d987a24c2fe2fb2322fe1b" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:51:35.239249 systemd[1]: Started cri-containerd-26e1ce466d3bcb46b09140bc93ca5cefc2bfa2e97419920539fc053eef994f23.scope - libcontainer container 26e1ce466d3bcb46b09140bc93ca5cefc2bfa2e97419920539fc053eef994f23. 
Dec 16 12:51:35.278000 audit: BPF prog-id=161 op=LOAD Dec 16 12:51:35.279000 audit: BPF prog-id=162 op=LOAD Dec 16 12:51:35.279000 audit[4002]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000128238 a2=98 a3=0 items=0 ppid=3990 pid=4002 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:51:35.279000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3236653163653436366433626362343662303931343062633933636135 Dec 16 12:51:35.279000 audit: BPF prog-id=162 op=UNLOAD Dec 16 12:51:35.279000 audit[4002]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3990 pid=4002 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:51:35.279000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3236653163653436366433626362343662303931343062633933636135 Dec 16 12:51:35.280000 audit: BPF prog-id=163 op=LOAD Dec 16 12:51:35.280000 audit[4002]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000128488 a2=98 a3=0 items=0 ppid=3990 pid=4002 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:51:35.280000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3236653163653436366433626362343662303931343062633933636135 Dec 16 12:51:35.280000 audit: BPF prog-id=164 op=LOAD Dec 16 12:51:35.280000 audit[4002]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000128218 a2=98 a3=0 items=0 ppid=3990 pid=4002 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:51:35.280000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3236653163653436366433626362343662303931343062633933636135 Dec 16 12:51:35.280000 audit: BPF prog-id=164 op=UNLOAD Dec 16 12:51:35.280000 audit[4002]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3990 pid=4002 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:51:35.280000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3236653163653436366433626362343662303931343062633933636135 Dec 16 12:51:35.280000 audit: BPF prog-id=163 op=UNLOAD Dec 16 12:51:35.280000 audit[4002]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3990 pid=4002 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 
12:51:35.280000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3236653163653436366433626362343662303931343062633933636135 Dec 16 12:51:35.280000 audit: BPF prog-id=165 op=LOAD Dec 16 12:51:35.280000 audit[4002]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001286e8 a2=98 a3=0 items=0 ppid=3990 pid=4002 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:51:35.280000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3236653163653436366433626362343662303931343062633933636135 Dec 16 12:51:35.288674 kubelet[3300]: E1216 12:51:35.287877 3300 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-l2s79" podUID="a2c9a6a7-124f-4bab-89b2-0cbb2fd935f0" Dec 16 12:51:35.355291 containerd[1969]: time="2025-12-16T12:51:35.353415807Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-67fb684979-m6hdn,Uid:3da16c2a-9fde-44a1-9f48-173bf0524018,Namespace:calico-system,Attempt:0,} returns sandbox id \"26e1ce466d3bcb46b09140bc93ca5cefc2bfa2e97419920539fc053eef994f23\"" Dec 16 12:51:35.357830 containerd[1969]: time="2025-12-16T12:51:35.357663441Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Dec 16 12:51:35.376810 kubelet[3300]: E1216 12:51:35.376707 3300 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: 
unexpected end of JSON input Dec 16 12:51:35.376810 kubelet[3300]: W1216 12:51:35.376737 3300 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:51:35.378729 kubelet[3300]: E1216 12:51:35.378015 3300 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:51:35.378973 kubelet[3300]: E1216 12:51:35.378958 3300 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:51:35.379054 kubelet[3300]: W1216 12:51:35.379043 3300 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:51:35.379108 kubelet[3300]: E1216 12:51:35.379100 3300 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:51:35.379659 kubelet[3300]: E1216 12:51:35.379402 3300 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:51:35.379659 kubelet[3300]: W1216 12:51:35.379418 3300 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:51:35.379659 kubelet[3300]: E1216 12:51:35.379431 3300 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:51:35.379896 kubelet[3300]: E1216 12:51:35.379885 3300 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:51:35.379984 kubelet[3300]: W1216 12:51:35.379974 3300 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:51:35.380182 kubelet[3300]: E1216 12:51:35.380049 3300 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:51:35.380452 kubelet[3300]: E1216 12:51:35.380404 3300 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:51:35.380452 kubelet[3300]: W1216 12:51:35.380414 3300 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:51:35.380617 kubelet[3300]: E1216 12:51:35.380532 3300 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:51:35.380844 kubelet[3300]: E1216 12:51:35.380794 3300 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:51:35.380844 kubelet[3300]: W1216 12:51:35.380803 3300 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:51:35.380844 kubelet[3300]: E1216 12:51:35.380812 3300 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:51:35.381430 kubelet[3300]: E1216 12:51:35.381218 3300 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:51:35.381430 kubelet[3300]: W1216 12:51:35.381226 3300 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:51:35.381430 kubelet[3300]: E1216 12:51:35.381235 3300 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:51:35.381773 kubelet[3300]: E1216 12:51:35.381737 3300 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:51:35.381936 kubelet[3300]: W1216 12:51:35.381887 3300 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:51:35.382006 kubelet[3300]: E1216 12:51:35.381996 3300 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:51:35.382272 kubelet[3300]: E1216 12:51:35.382225 3300 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:51:35.382272 kubelet[3300]: W1216 12:51:35.382235 3300 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:51:35.382272 kubelet[3300]: E1216 12:51:35.382243 3300 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:51:35.382567 kubelet[3300]: E1216 12:51:35.382558 3300 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:51:35.382680 kubelet[3300]: W1216 12:51:35.382628 3300 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:51:35.382680 kubelet[3300]: E1216 12:51:35.382640 3300 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:51:35.383214 kubelet[3300]: E1216 12:51:35.382881 3300 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:51:35.383214 kubelet[3300]: W1216 12:51:35.382889 3300 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:51:35.383214 kubelet[3300]: E1216 12:51:35.382924 3300 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:51:35.383504 kubelet[3300]: E1216 12:51:35.383321 3300 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:51:35.383504 kubelet[3300]: W1216 12:51:35.383331 3300 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:51:35.383504 kubelet[3300]: E1216 12:51:35.383341 3300 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:51:35.383796 kubelet[3300]: E1216 12:51:35.383712 3300 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:51:35.383796 kubelet[3300]: W1216 12:51:35.383720 3300 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:51:35.383796 kubelet[3300]: E1216 12:51:35.383729 3300 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:51:35.384155 kubelet[3300]: E1216 12:51:35.384088 3300 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:51:35.384155 kubelet[3300]: W1216 12:51:35.384112 3300 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:51:35.384155 kubelet[3300]: E1216 12:51:35.384122 3300 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:51:35.384426 kubelet[3300]: E1216 12:51:35.384403 3300 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:51:35.384594 kubelet[3300]: W1216 12:51:35.384534 3300 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:51:35.384594 kubelet[3300]: E1216 12:51:35.384545 3300 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:51:35.384946 kubelet[3300]: E1216 12:51:35.384836 3300 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:51:35.384946 kubelet[3300]: W1216 12:51:35.384845 3300 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:51:35.384946 kubelet[3300]: E1216 12:51:35.384855 3300 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:51:35.385329 kubelet[3300]: E1216 12:51:35.385309 3300 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:51:35.385427 kubelet[3300]: W1216 12:51:35.385382 3300 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:51:35.385514 kubelet[3300]: E1216 12:51:35.385470 3300 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:51:35.385793 kubelet[3300]: E1216 12:51:35.385742 3300 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:51:35.385977 kubelet[3300]: W1216 12:51:35.385926 3300 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:51:35.385977 kubelet[3300]: E1216 12:51:35.385941 3300 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:51:35.386400 kubelet[3300]: E1216 12:51:35.386236 3300 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:51:35.386400 kubelet[3300]: W1216 12:51:35.386245 3300 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:51:35.386400 kubelet[3300]: E1216 12:51:35.386256 3300 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:51:35.386742 kubelet[3300]: E1216 12:51:35.386649 3300 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:51:35.386742 kubelet[3300]: W1216 12:51:35.386711 3300 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:51:35.386742 kubelet[3300]: E1216 12:51:35.386722 3300 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:51:35.388529 kubelet[3300]: E1216 12:51:35.388490 3300 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:51:35.388529 kubelet[3300]: W1216 12:51:35.388502 3300 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:51:35.388529 kubelet[3300]: E1216 12:51:35.388512 3300 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:51:35.388770 kubelet[3300]: I1216 12:51:35.388655 3300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/a2c9a6a7-124f-4bab-89b2-0cbb2fd935f0-registration-dir\") pod \"csi-node-driver-l2s79\" (UID: \"a2c9a6a7-124f-4bab-89b2-0cbb2fd935f0\") " pod="calico-system/csi-node-driver-l2s79" Dec 16 12:51:35.388922 kubelet[3300]: E1216 12:51:35.388867 3300 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:51:35.388922 kubelet[3300]: W1216 12:51:35.388895 3300 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:51:35.389074 kubelet[3300]: E1216 12:51:35.388999 3300 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:51:35.389074 kubelet[3300]: I1216 12:51:35.389031 3300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a2c9a6a7-124f-4bab-89b2-0cbb2fd935f0-kubelet-dir\") pod \"csi-node-driver-l2s79\" (UID: \"a2c9a6a7-124f-4bab-89b2-0cbb2fd935f0\") " pod="calico-system/csi-node-driver-l2s79" Dec 16 12:51:35.389326 kubelet[3300]: E1216 12:51:35.389296 3300 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:51:35.389326 kubelet[3300]: W1216 12:51:35.389306 3300 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:51:35.389326 kubelet[3300]: E1216 12:51:35.389316 3300 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:51:35.389510 kubelet[3300]: I1216 12:51:35.389445 3300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/a2c9a6a7-124f-4bab-89b2-0cbb2fd935f0-varrun\") pod \"csi-node-driver-l2s79\" (UID: \"a2c9a6a7-124f-4bab-89b2-0cbb2fd935f0\") " pod="calico-system/csi-node-driver-l2s79" Dec 16 12:51:35.389729 kubelet[3300]: E1216 12:51:35.389698 3300 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:51:35.389729 kubelet[3300]: W1216 12:51:35.389709 3300 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:51:35.389729 kubelet[3300]: E1216 12:51:35.389718 3300 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:51:35.389902 kubelet[3300]: I1216 12:51:35.389842 3300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zn6r2\" (UniqueName: \"kubernetes.io/projected/a2c9a6a7-124f-4bab-89b2-0cbb2fd935f0-kube-api-access-zn6r2\") pod \"csi-node-driver-l2s79\" (UID: \"a2c9a6a7-124f-4bab-89b2-0cbb2fd935f0\") " pod="calico-system/csi-node-driver-l2s79" Dec 16 12:51:35.390158 kubelet[3300]: E1216 12:51:35.390126 3300 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:51:35.390158 kubelet[3300]: W1216 12:51:35.390136 3300 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:51:35.390158 kubelet[3300]: E1216 12:51:35.390148 3300 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:51:35.390344 kubelet[3300]: I1216 12:51:35.390320 3300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/a2c9a6a7-124f-4bab-89b2-0cbb2fd935f0-socket-dir\") pod \"csi-node-driver-l2s79\" (UID: \"a2c9a6a7-124f-4bab-89b2-0cbb2fd935f0\") " pod="calico-system/csi-node-driver-l2s79" Dec 16 12:51:35.390533 kubelet[3300]: E1216 12:51:35.390507 3300 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:51:35.390533 kubelet[3300]: W1216 12:51:35.390515 3300 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:51:35.390533 kubelet[3300]: E1216 12:51:35.390524 3300 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:51:35.390945 kubelet[3300]: E1216 12:51:35.390901 3300 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:51:35.390945 kubelet[3300]: W1216 12:51:35.390924 3300 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:51:35.390945 kubelet[3300]: E1216 12:51:35.390934 3300 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:51:35.391444 kubelet[3300]: E1216 12:51:35.391411 3300 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:51:35.391444 kubelet[3300]: W1216 12:51:35.391421 3300 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:51:35.391444 kubelet[3300]: E1216 12:51:35.391431 3300 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:51:35.391866 kubelet[3300]: E1216 12:51:35.391833 3300 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:51:35.391866 kubelet[3300]: W1216 12:51:35.391844 3300 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:51:35.391866 kubelet[3300]: E1216 12:51:35.391855 3300 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:51:35.392323 kubelet[3300]: E1216 12:51:35.392288 3300 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:51:35.392323 kubelet[3300]: W1216 12:51:35.392301 3300 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:51:35.392323 kubelet[3300]: E1216 12:51:35.392311 3300 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:51:35.392787 kubelet[3300]: E1216 12:51:35.392696 3300 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:51:35.392787 kubelet[3300]: W1216 12:51:35.392717 3300 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:51:35.392787 kubelet[3300]: E1216 12:51:35.392728 3300 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:51:35.393646 containerd[1969]: time="2025-12-16T12:51:35.393324944Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-qsttg,Uid:07b6e20c-0806-4c86-818d-0abb5ca37e27,Namespace:calico-system,Attempt:0,}" Dec 16 12:51:35.393713 kubelet[3300]: E1216 12:51:35.393579 3300 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:51:35.393713 kubelet[3300]: W1216 12:51:35.393588 3300 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:51:35.393713 kubelet[3300]: E1216 12:51:35.393600 3300 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:51:35.394749 kubelet[3300]: E1216 12:51:35.394614 3300 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:51:35.394749 kubelet[3300]: W1216 12:51:35.394626 3300 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:51:35.394749 kubelet[3300]: E1216 12:51:35.394637 3300 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:51:35.395051 kubelet[3300]: E1216 12:51:35.395026 3300 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:51:35.395226 kubelet[3300]: W1216 12:51:35.395127 3300 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:51:35.395226 kubelet[3300]: E1216 12:51:35.395158 3300 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:51:35.396015 kubelet[3300]: E1216 12:51:35.395959 3300 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:51:35.396015 kubelet[3300]: W1216 12:51:35.395972 3300 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:51:35.396015 kubelet[3300]: E1216 12:51:35.395983 3300 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:51:35.429516 containerd[1969]: time="2025-12-16T12:51:35.429470634Z" level=info msg="connecting to shim 97f944ae25ac7930e09a78f3a9f32ef0866d46a776e8e4e2c79678199a147abe" address="unix:///run/containerd/s/bda7b1d247d5aa4c1e6f3107d03d8d230ba8ee7e9f7b872557e87d600b5a4e6f" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:51:35.459132 systemd[1]: Started cri-containerd-97f944ae25ac7930e09a78f3a9f32ef0866d46a776e8e4e2c79678199a147abe.scope - libcontainer container 97f944ae25ac7930e09a78f3a9f32ef0866d46a776e8e4e2c79678199a147abe. 
Dec 16 12:51:35.474000 audit: BPF prog-id=166 op=LOAD Dec 16 12:51:35.474000 audit: BPF prog-id=167 op=LOAD Dec 16 12:51:35.474000 audit[4092]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4081 pid=4092 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:51:35.474000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3937663934346165323561633739333065303961373866336139663332 Dec 16 12:51:35.475000 audit: BPF prog-id=167 op=UNLOAD Dec 16 12:51:35.475000 audit[4092]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4081 pid=4092 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:51:35.475000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3937663934346165323561633739333065303961373866336139663332 Dec 16 12:51:35.475000 audit: BPF prog-id=168 op=LOAD Dec 16 12:51:35.475000 audit[4092]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4081 pid=4092 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:51:35.475000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3937663934346165323561633739333065303961373866336139663332 Dec 16 12:51:35.475000 audit: BPF prog-id=169 op=LOAD Dec 16 12:51:35.475000 audit[4092]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=4081 pid=4092 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:51:35.475000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3937663934346165323561633739333065303961373866336139663332 Dec 16 12:51:35.475000 audit: BPF prog-id=169 op=UNLOAD Dec 16 12:51:35.475000 audit[4092]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4081 pid=4092 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:51:35.475000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3937663934346165323561633739333065303961373866336139663332 Dec 16 12:51:35.475000 audit: BPF prog-id=168 op=UNLOAD Dec 16 12:51:35.475000 audit[4092]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4081 pid=4092 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 
12:51:35.475000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3937663934346165323561633739333065303961373866336139663332 Dec 16 12:51:35.475000 audit: BPF prog-id=170 op=LOAD Dec 16 12:51:35.475000 audit[4092]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=4081 pid=4092 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:51:35.475000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3937663934346165323561633739333065303961373866336139663332 Dec 16 12:51:35.491443 kubelet[3300]: E1216 12:51:35.491401 3300 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:51:35.491443 kubelet[3300]: W1216 12:51:35.491425 3300 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:51:35.491443 kubelet[3300]: E1216 12:51:35.491444 3300 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:51:35.492004 kubelet[3300]: E1216 12:51:35.491984 3300 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:51:35.492004 kubelet[3300]: W1216 12:51:35.492001 3300 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:51:35.492084 kubelet[3300]: E1216 12:51:35.492013 3300 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:51:35.493332 kubelet[3300]: E1216 12:51:35.493273 3300 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:51:35.493332 kubelet[3300]: W1216 12:51:35.493287 3300 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:51:35.493332 kubelet[3300]: E1216 12:51:35.493297 3300 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:51:35.493588 kubelet[3300]: E1216 12:51:35.493566 3300 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:51:35.493588 kubelet[3300]: W1216 12:51:35.493578 3300 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:51:35.493588 kubelet[3300]: E1216 12:51:35.493588 3300 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:51:35.493928 kubelet[3300]: E1216 12:51:35.493884 3300 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:51:35.493974 kubelet[3300]: W1216 12:51:35.493933 3300 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:51:35.493974 kubelet[3300]: E1216 12:51:35.493944 3300 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:51:35.494455 kubelet[3300]: E1216 12:51:35.494438 3300 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:51:35.494455 kubelet[3300]: W1216 12:51:35.494452 3300 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:51:35.494535 kubelet[3300]: E1216 12:51:35.494462 3300 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:51:35.494892 kubelet[3300]: E1216 12:51:35.494870 3300 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:51:35.494892 kubelet[3300]: W1216 12:51:35.494885 3300 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:51:35.494892 kubelet[3300]: E1216 12:51:35.494895 3300 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:51:35.495400 kubelet[3300]: E1216 12:51:35.495375 3300 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:51:35.495400 kubelet[3300]: W1216 12:51:35.495387 3300 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:51:35.495400 kubelet[3300]: E1216 12:51:35.495399 3300 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:51:35.496077 kubelet[3300]: E1216 12:51:35.496066 3300 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:51:35.496077 kubelet[3300]: W1216 12:51:35.496077 3300 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:51:35.496145 kubelet[3300]: E1216 12:51:35.496087 3300 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:51:35.496471 kubelet[3300]: E1216 12:51:35.496385 3300 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:51:35.496471 kubelet[3300]: W1216 12:51:35.496396 3300 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:51:35.496471 kubelet[3300]: E1216 12:51:35.496405 3300 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:51:35.496828 kubelet[3300]: E1216 12:51:35.496809 3300 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:51:35.496828 kubelet[3300]: W1216 12:51:35.496826 3300 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:51:35.496900 kubelet[3300]: E1216 12:51:35.496836 3300 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:51:35.497357 kubelet[3300]: E1216 12:51:35.497304 3300 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:51:35.497357 kubelet[3300]: W1216 12:51:35.497321 3300 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:51:35.497490 kubelet[3300]: E1216 12:51:35.497420 3300 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:51:35.497872 kubelet[3300]: E1216 12:51:35.497853 3300 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:51:35.497872 kubelet[3300]: W1216 12:51:35.497868 3300 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:51:35.498064 kubelet[3300]: E1216 12:51:35.497877 3300 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:51:35.498287 kubelet[3300]: E1216 12:51:35.498268 3300 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:51:35.498287 kubelet[3300]: W1216 12:51:35.498281 3300 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:51:35.498352 kubelet[3300]: E1216 12:51:35.498291 3300 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:51:35.498617 kubelet[3300]: E1216 12:51:35.498600 3300 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:51:35.498617 kubelet[3300]: W1216 12:51:35.498613 3300 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:51:35.498683 kubelet[3300]: E1216 12:51:35.498623 3300 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:51:35.499113 kubelet[3300]: E1216 12:51:35.499097 3300 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:51:35.499113 kubelet[3300]: W1216 12:51:35.499109 3300 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:51:35.499194 kubelet[3300]: E1216 12:51:35.499119 3300 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:51:35.499568 kubelet[3300]: E1216 12:51:35.499551 3300 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:51:35.499568 kubelet[3300]: W1216 12:51:35.499565 3300 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:51:35.499638 kubelet[3300]: E1216 12:51:35.499575 3300 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:51:35.499777 kubelet[3300]: E1216 12:51:35.499762 3300 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:51:35.499777 kubelet[3300]: W1216 12:51:35.499773 3300 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:51:35.499866 kubelet[3300]: E1216 12:51:35.499780 3300 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:51:35.500065 kubelet[3300]: E1216 12:51:35.500047 3300 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:51:35.500118 kubelet[3300]: W1216 12:51:35.500110 3300 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:51:35.500145 kubelet[3300]: E1216 12:51:35.500122 3300 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:51:35.500773 kubelet[3300]: E1216 12:51:35.500752 3300 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:51:35.500773 kubelet[3300]: W1216 12:51:35.500767 3300 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:51:35.500773 kubelet[3300]: E1216 12:51:35.500778 3300 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:51:35.501417 kubelet[3300]: E1216 12:51:35.501397 3300 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:51:35.501417 kubelet[3300]: W1216 12:51:35.501413 3300 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:51:35.501490 kubelet[3300]: E1216 12:51:35.501424 3300 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:51:35.501981 containerd[1969]: time="2025-12-16T12:51:35.501852189Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-qsttg,Uid:07b6e20c-0806-4c86-818d-0abb5ca37e27,Namespace:calico-system,Attempt:0,} returns sandbox id \"97f944ae25ac7930e09a78f3a9f32ef0866d46a776e8e4e2c79678199a147abe\"" Dec 16 12:51:35.502480 kubelet[3300]: E1216 12:51:35.502458 3300 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:51:35.502480 kubelet[3300]: W1216 12:51:35.502475 3300 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:51:35.502651 kubelet[3300]: E1216 12:51:35.502489 3300 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:51:35.504100 kubelet[3300]: E1216 12:51:35.504078 3300 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:51:35.504100 kubelet[3300]: W1216 12:51:35.504093 3300 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:51:35.504272 kubelet[3300]: E1216 12:51:35.504107 3300 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:51:35.506774 kubelet[3300]: E1216 12:51:35.506745 3300 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:51:35.506774 kubelet[3300]: W1216 12:51:35.506761 3300 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:51:35.506947 kubelet[3300]: E1216 12:51:35.506777 3300 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:51:35.507510 kubelet[3300]: E1216 12:51:35.507484 3300 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:51:35.507510 kubelet[3300]: W1216 12:51:35.507500 3300 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:51:35.507651 kubelet[3300]: E1216 12:51:35.507515 3300 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:51:35.518362 kubelet[3300]: E1216 12:51:35.518331 3300 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:51:35.518362 kubelet[3300]: W1216 12:51:35.518352 3300 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:51:35.518527 kubelet[3300]: E1216 12:51:35.518374 3300 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:51:35.823000 audit[4146]: NETFILTER_CFG table=filter:117 family=2 entries=22 op=nft_register_rule pid=4146 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:51:35.823000 audit[4146]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffde817bf90 a2=0 a3=7ffde817bf7c items=0 ppid=3722 pid=4146 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:51:35.823000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:51:35.828000 audit[4146]: NETFILTER_CFG table=nat:118 family=2 entries=12 op=nft_register_rule pid=4146 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:51:35.828000 audit[4146]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffde817bf90 a2=0 a3=0 items=0 ppid=3722 pid=4146 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:51:35.828000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:51:36.812143 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2100759971.mount: Deactivated successfully. Dec 16 12:51:36.921390 kubelet[3300]: E1216 12:51:36.921347 3300 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-l2s79" podUID="a2c9a6a7-124f-4bab-89b2-0cbb2fd935f0" Dec 16 12:51:38.677070 containerd[1969]: time="2025-12-16T12:51:38.677024557Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:51:38.679309 containerd[1969]: time="2025-12-16T12:51:38.679259365Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33735893" Dec 16 12:51:38.681570 containerd[1969]: time="2025-12-16T12:51:38.681510402Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:51:38.684826 containerd[1969]: time="2025-12-16T12:51:38.684772315Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:51:38.685839 containerd[1969]: time="2025-12-16T12:51:38.685325082Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" 
in 3.327499244s" Dec 16 12:51:38.685839 containerd[1969]: time="2025-12-16T12:51:38.685357283Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Dec 16 12:51:38.686478 containerd[1969]: time="2025-12-16T12:51:38.686457004Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Dec 16 12:51:38.717539 containerd[1969]: time="2025-12-16T12:51:38.717233028Z" level=info msg="CreateContainer within sandbox \"26e1ce466d3bcb46b09140bc93ca5cefc2bfa2e97419920539fc053eef994f23\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Dec 16 12:51:38.734927 containerd[1969]: time="2025-12-16T12:51:38.733098276Z" level=info msg="Container 5ac3cb83762ddbaa188a7a510f2772c1963dfc1d379b5f298041136f32464860: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:51:38.748218 containerd[1969]: time="2025-12-16T12:51:38.748161868Z" level=info msg="CreateContainer within sandbox \"26e1ce466d3bcb46b09140bc93ca5cefc2bfa2e97419920539fc053eef994f23\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"5ac3cb83762ddbaa188a7a510f2772c1963dfc1d379b5f298041136f32464860\"" Dec 16 12:51:38.749093 containerd[1969]: time="2025-12-16T12:51:38.749035043Z" level=info msg="StartContainer for \"5ac3cb83762ddbaa188a7a510f2772c1963dfc1d379b5f298041136f32464860\"" Dec 16 12:51:38.750508 containerd[1969]: time="2025-12-16T12:51:38.750440730Z" level=info msg="connecting to shim 5ac3cb83762ddbaa188a7a510f2772c1963dfc1d379b5f298041136f32464860" address="unix:///run/containerd/s/46f91da131fe0673a610337815ef403fb0480e3949d987a24c2fe2fb2322fe1b" protocol=ttrpc version=3 Dec 16 12:51:38.794167 systemd[1]: Started cri-containerd-5ac3cb83762ddbaa188a7a510f2772c1963dfc1d379b5f298041136f32464860.scope - libcontainer container 5ac3cb83762ddbaa188a7a510f2772c1963dfc1d379b5f298041136f32464860. 
Dec 16 12:51:38.806000 audit: BPF prog-id=171 op=LOAD Dec 16 12:51:38.808120 kernel: kauditd_printk_skb: 64 callbacks suppressed Dec 16 12:51:38.808162 kernel: audit: type=1334 audit(1765889498.806:573): prog-id=171 op=LOAD Dec 16 12:51:38.809000 audit: BPF prog-id=172 op=LOAD Dec 16 12:51:38.816999 kernel: audit: type=1334 audit(1765889498.809:574): prog-id=172 op=LOAD Dec 16 12:51:38.817101 kernel: audit: type=1300 audit(1765889498.809:574): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000138238 a2=98 a3=0 items=0 ppid=3990 pid=4157 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:51:38.809000 audit[4157]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000138238 a2=98 a3=0 items=0 ppid=3990 pid=4157 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:51:38.809000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3561633363623833373632646462616131383861376135313066323737 Dec 16 12:51:38.823933 kernel: audit: type=1327 audit(1765889498.809:574): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3561633363623833373632646462616131383861376135313066323737 Dec 16 12:51:38.809000 audit: BPF prog-id=172 op=UNLOAD Dec 16 12:51:38.832944 kernel: audit: type=1334 audit(1765889498.809:575): prog-id=172 op=UNLOAD Dec 16 12:51:38.833038 kernel: audit: type=1300 audit(1765889498.809:575): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 
ppid=3990 pid=4157 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:51:38.809000 audit[4157]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3990 pid=4157 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:51:38.809000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3561633363623833373632646462616131383861376135313066323737 Dec 16 12:51:38.839535 kernel: audit: type=1327 audit(1765889498.809:575): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3561633363623833373632646462616131383861376135313066323737 Dec 16 12:51:38.839608 kernel: audit: type=1334 audit(1765889498.809:576): prog-id=173 op=LOAD Dec 16 12:51:38.809000 audit: BPF prog-id=173 op=LOAD Dec 16 12:51:38.842958 kernel: audit: type=1300 audit(1765889498.809:576): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000138488 a2=98 a3=0 items=0 ppid=3990 pid=4157 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:51:38.809000 audit[4157]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000138488 a2=98 a3=0 items=0 ppid=3990 pid=4157 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 
12:51:38.849788 kernel: audit: type=1327 audit(1765889498.809:576): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3561633363623833373632646462616131383861376135313066323737 Dec 16 12:51:38.809000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3561633363623833373632646462616131383861376135313066323737 Dec 16 12:51:38.809000 audit: BPF prog-id=174 op=LOAD Dec 16 12:51:38.809000 audit[4157]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000138218 a2=98 a3=0 items=0 ppid=3990 pid=4157 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:51:38.809000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3561633363623833373632646462616131383861376135313066323737 Dec 16 12:51:38.809000 audit: BPF prog-id=174 op=UNLOAD Dec 16 12:51:38.809000 audit[4157]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3990 pid=4157 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:51:38.809000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3561633363623833373632646462616131383861376135313066323737 
Dec 16 12:51:38.809000 audit: BPF prog-id=173 op=UNLOAD Dec 16 12:51:38.809000 audit[4157]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3990 pid=4157 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:51:38.809000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3561633363623833373632646462616131383861376135313066323737 Dec 16 12:51:38.809000 audit: BPF prog-id=175 op=LOAD Dec 16 12:51:38.809000 audit[4157]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001386e8 a2=98 a3=0 items=0 ppid=3990 pid=4157 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:51:38.809000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3561633363623833373632646462616131383861376135313066323737 Dec 16 12:51:38.865635 containerd[1969]: time="2025-12-16T12:51:38.865600262Z" level=info msg="StartContainer for \"5ac3cb83762ddbaa188a7a510f2772c1963dfc1d379b5f298041136f32464860\" returns successfully" Dec 16 12:51:38.924583 kubelet[3300]: E1216 12:51:38.924539 3300 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-l2s79" podUID="a2c9a6a7-124f-4bab-89b2-0cbb2fd935f0" Dec 16 12:51:39.119579 kubelet[3300]: E1216 
12:51:39.118593 3300 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:51:39.119579 kubelet[3300]: W1216 12:51:39.118616 3300 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:51:39.121255 kubelet[3300]: E1216 12:51:39.121231 3300 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:51:39.121937 kubelet[3300]: E1216 12:51:39.121598 3300 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:51:39.122026 kubelet[3300]: W1216 12:51:39.122014 3300 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:51:39.122081 kubelet[3300]: E1216 12:51:39.122073 3300 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:51:39.122283 kubelet[3300]: E1216 12:51:39.122276 3300 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:51:39.122334 kubelet[3300]: W1216 12:51:39.122327 3300 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:51:39.122379 kubelet[3300]: E1216 12:51:39.122372 3300 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:51:39.123064 kubelet[3300]: E1216 12:51:39.123020 3300 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:51:39.123064 kubelet[3300]: W1216 12:51:39.123032 3300 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:51:39.123064 kubelet[3300]: E1216 12:51:39.123042 3300 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:51:39.123621 kubelet[3300]: E1216 12:51:39.123526 3300 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:51:39.123621 kubelet[3300]: W1216 12:51:39.123536 3300 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:51:39.123621 kubelet[3300]: E1216 12:51:39.123548 3300 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:51:39.123986 kubelet[3300]: E1216 12:51:39.123975 3300 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:51:39.124129 kubelet[3300]: W1216 12:51:39.124041 3300 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:51:39.124129 kubelet[3300]: E1216 12:51:39.124053 3300 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:51:39.125466 kubelet[3300]: E1216 12:51:39.125373 3300 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:51:39.125466 kubelet[3300]: W1216 12:51:39.125385 3300 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:51:39.125466 kubelet[3300]: E1216 12:51:39.125397 3300 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:51:39.126159 kubelet[3300]: E1216 12:51:39.126031 3300 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:51:39.126159 kubelet[3300]: W1216 12:51:39.126042 3300 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:51:39.126159 kubelet[3300]: E1216 12:51:39.126052 3300 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:51:39.128052 kubelet[3300]: E1216 12:51:39.128039 3300 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:51:39.128118 kubelet[3300]: W1216 12:51:39.128110 3300 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:51:39.128186 kubelet[3300]: E1216 12:51:39.128178 3300 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:51:39.128440 kubelet[3300]: E1216 12:51:39.128366 3300 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:51:39.128440 kubelet[3300]: W1216 12:51:39.128374 3300 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:51:39.128440 kubelet[3300]: E1216 12:51:39.128382 3300 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:51:39.128936 kubelet[3300]: E1216 12:51:39.128925 3300 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:51:39.129072 kubelet[3300]: W1216 12:51:39.128986 3300 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:51:39.129072 kubelet[3300]: E1216 12:51:39.128998 3300 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:51:39.129927 kubelet[3300]: E1216 12:51:39.129215 3300 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:51:39.130007 kubelet[3300]: W1216 12:51:39.129995 3300 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:51:39.130056 kubelet[3300]: E1216 12:51:39.130049 3300 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:51:39.130318 kubelet[3300]: E1216 12:51:39.130239 3300 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:51:39.130318 kubelet[3300]: W1216 12:51:39.130247 3300 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:51:39.130318 kubelet[3300]: E1216 12:51:39.130255 3300 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:51:39.130500 kubelet[3300]: E1216 12:51:39.130491 3300 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:51:39.130561 kubelet[3300]: W1216 12:51:39.130554 3300 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:51:39.130606 kubelet[3300]: E1216 12:51:39.130599 3300 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:51:39.130925 kubelet[3300]: E1216 12:51:39.130783 3300 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:51:39.130925 kubelet[3300]: W1216 12:51:39.130791 3300 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:51:39.130925 kubelet[3300]: E1216 12:51:39.130799 3300 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:51:39.131210 kubelet[3300]: E1216 12:51:39.131201 3300 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:51:39.131283 kubelet[3300]: W1216 12:51:39.131274 3300 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:51:39.131356 kubelet[3300]: E1216 12:51:39.131347 3300 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:51:39.133058 kubelet[3300]: E1216 12:51:39.132956 3300 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:51:39.133058 kubelet[3300]: W1216 12:51:39.132968 3300 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:51:39.133058 kubelet[3300]: E1216 12:51:39.132978 3300 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:51:39.133342 kubelet[3300]: E1216 12:51:39.133332 3300 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:51:39.133390 kubelet[3300]: W1216 12:51:39.133383 3300 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:51:39.133447 kubelet[3300]: E1216 12:51:39.133437 3300 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:51:39.134133 kubelet[3300]: E1216 12:51:39.134115 3300 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:51:39.134133 kubelet[3300]: W1216 12:51:39.134131 3300 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:51:39.134221 kubelet[3300]: E1216 12:51:39.134142 3300 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:51:39.134604 kubelet[3300]: E1216 12:51:39.134590 3300 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:51:39.134604 kubelet[3300]: W1216 12:51:39.134603 3300 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:51:39.134687 kubelet[3300]: E1216 12:51:39.134613 3300 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:51:39.134792 kubelet[3300]: E1216 12:51:39.134782 3300 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:51:39.134792 kubelet[3300]: W1216 12:51:39.134792 3300 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:51:39.134848 kubelet[3300]: E1216 12:51:39.134800 3300 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:51:39.136013 kubelet[3300]: E1216 12:51:39.135993 3300 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:51:39.136013 kubelet[3300]: W1216 12:51:39.136010 3300 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:51:39.136097 kubelet[3300]: E1216 12:51:39.136021 3300 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:51:39.136183 kubelet[3300]: E1216 12:51:39.136173 3300 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:51:39.136183 kubelet[3300]: W1216 12:51:39.136182 3300 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:51:39.136237 kubelet[3300]: E1216 12:51:39.136190 3300 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:51:39.136362 kubelet[3300]: E1216 12:51:39.136351 3300 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:51:39.136362 kubelet[3300]: W1216 12:51:39.136361 3300 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:51:39.136432 kubelet[3300]: E1216 12:51:39.136368 3300 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:51:39.136515 kubelet[3300]: E1216 12:51:39.136504 3300 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:51:39.136515 kubelet[3300]: W1216 12:51:39.136514 3300 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:51:39.137081 kubelet[3300]: E1216 12:51:39.136521 3300 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:51:39.137081 kubelet[3300]: E1216 12:51:39.136642 3300 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:51:39.137081 kubelet[3300]: W1216 12:51:39.136649 3300 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:51:39.137081 kubelet[3300]: E1216 12:51:39.136655 3300 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:51:39.137081 kubelet[3300]: E1216 12:51:39.136788 3300 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:51:39.137081 kubelet[3300]: W1216 12:51:39.136794 3300 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:51:39.137081 kubelet[3300]: E1216 12:51:39.136800 3300 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:51:39.137685 kubelet[3300]: E1216 12:51:39.137612 3300 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:51:39.137685 kubelet[3300]: W1216 12:51:39.137624 3300 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:51:39.137685 kubelet[3300]: E1216 12:51:39.137635 3300 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:51:39.140037 kubelet[3300]: E1216 12:51:39.139991 3300 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:51:39.140037 kubelet[3300]: W1216 12:51:39.140035 3300 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:51:39.140160 kubelet[3300]: E1216 12:51:39.140049 3300 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:51:39.141048 kubelet[3300]: E1216 12:51:39.141031 3300 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:51:39.141048 kubelet[3300]: W1216 12:51:39.141045 3300 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:51:39.141152 kubelet[3300]: E1216 12:51:39.141057 3300 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:51:39.141267 kubelet[3300]: E1216 12:51:39.141255 3300 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:51:39.141267 kubelet[3300]: W1216 12:51:39.141265 3300 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:51:39.141329 kubelet[3300]: E1216 12:51:39.141274 3300 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:51:39.141581 kubelet[3300]: E1216 12:51:39.141569 3300 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:51:39.141581 kubelet[3300]: W1216 12:51:39.141579 3300 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:51:39.141664 kubelet[3300]: E1216 12:51:39.141587 3300 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:51:39.141765 kubelet[3300]: E1216 12:51:39.141753 3300 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:51:39.141765 kubelet[3300]: W1216 12:51:39.141764 3300 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:51:39.141821 kubelet[3300]: E1216 12:51:39.141772 3300 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:51:40.080333 kubelet[3300]: I1216 12:51:40.080297 3300 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 16 12:51:40.135070 kubelet[3300]: E1216 12:51:40.135038 3300 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:51:40.135070 kubelet[3300]: W1216 12:51:40.135061 3300 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:51:40.135739 kubelet[3300]: E1216 12:51:40.135083 3300 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:51:40.135739 kubelet[3300]: E1216 12:51:40.135282 3300 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:51:40.135739 kubelet[3300]: W1216 12:51:40.135288 3300 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:51:40.135739 kubelet[3300]: E1216 12:51:40.135298 3300 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:51:40.135739 kubelet[3300]: E1216 12:51:40.135519 3300 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:51:40.135739 kubelet[3300]: W1216 12:51:40.135534 3300 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:51:40.135739 kubelet[3300]: E1216 12:51:40.135548 3300 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:51:40.136123 kubelet[3300]: E1216 12:51:40.135762 3300 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:51:40.136123 kubelet[3300]: W1216 12:51:40.135769 3300 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:51:40.136123 kubelet[3300]: E1216 12:51:40.135778 3300 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:51:40.136123 kubelet[3300]: E1216 12:51:40.136001 3300 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:51:40.136123 kubelet[3300]: W1216 12:51:40.136014 3300 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:51:40.136123 kubelet[3300]: E1216 12:51:40.136024 3300 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:51:40.136345 kubelet[3300]: E1216 12:51:40.136167 3300 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:51:40.136345 kubelet[3300]: W1216 12:51:40.136173 3300 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:51:40.136345 kubelet[3300]: E1216 12:51:40.136180 3300 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:51:40.136345 kubelet[3300]: E1216 12:51:40.136306 3300 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:51:40.136345 kubelet[3300]: W1216 12:51:40.136312 3300 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:51:40.136345 kubelet[3300]: E1216 12:51:40.136319 3300 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:51:40.136552 kubelet[3300]: E1216 12:51:40.136449 3300 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:51:40.136552 kubelet[3300]: W1216 12:51:40.136455 3300 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:51:40.136552 kubelet[3300]: E1216 12:51:40.136461 3300 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:51:40.136677 kubelet[3300]: E1216 12:51:40.136605 3300 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:51:40.136677 kubelet[3300]: W1216 12:51:40.136611 3300 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:51:40.136677 kubelet[3300]: E1216 12:51:40.136618 3300 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:51:40.136781 kubelet[3300]: E1216 12:51:40.136771 3300 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:51:40.136781 kubelet[3300]: W1216 12:51:40.136777 3300 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:51:40.137013 kubelet[3300]: E1216 12:51:40.136783 3300 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:51:40.137084 kubelet[3300]: E1216 12:51:40.137017 3300 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:51:40.137084 kubelet[3300]: W1216 12:51:40.137025 3300 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:51:40.137084 kubelet[3300]: E1216 12:51:40.137034 3300 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:51:40.137304 kubelet[3300]: E1216 12:51:40.137280 3300 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:51:40.137304 kubelet[3300]: W1216 12:51:40.137299 3300 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:51:40.137304 kubelet[3300]: E1216 12:51:40.137308 3300 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:51:40.137465 kubelet[3300]: E1216 12:51:40.137452 3300 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:51:40.137465 kubelet[3300]: W1216 12:51:40.137462 3300 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:51:40.137565 kubelet[3300]: E1216 12:51:40.137468 3300 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:51:40.137615 kubelet[3300]: E1216 12:51:40.137598 3300 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:51:40.137615 kubelet[3300]: W1216 12:51:40.137604 3300 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:51:40.137615 kubelet[3300]: E1216 12:51:40.137610 3300 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:51:40.137742 kubelet[3300]: E1216 12:51:40.137734 3300 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:51:40.137742 kubelet[3300]: W1216 12:51:40.137742 3300 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:51:40.137824 kubelet[3300]: E1216 12:51:40.137749 3300 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:51:40.138105 kubelet[3300]: E1216 12:51:40.138061 3300 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:51:40.138105 kubelet[3300]: W1216 12:51:40.138080 3300 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:51:40.138105 kubelet[3300]: E1216 12:51:40.138091 3300 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:51:40.138375 kubelet[3300]: E1216 12:51:40.138359 3300 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:51:40.138375 kubelet[3300]: W1216 12:51:40.138370 3300 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:51:40.138453 kubelet[3300]: E1216 12:51:40.138380 3300 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:51:40.138624 kubelet[3300]: E1216 12:51:40.138611 3300 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:51:40.138704 kubelet[3300]: W1216 12:51:40.138676 3300 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:51:40.138704 kubelet[3300]: E1216 12:51:40.138695 3300 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:51:40.139021 kubelet[3300]: E1216 12:51:40.139007 3300 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:51:40.139021 kubelet[3300]: W1216 12:51:40.139019 3300 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:51:40.139127 kubelet[3300]: E1216 12:51:40.139030 3300 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:51:40.139215 kubelet[3300]: E1216 12:51:40.139201 3300 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:51:40.139215 kubelet[3300]: W1216 12:51:40.139212 3300 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:51:40.139293 kubelet[3300]: E1216 12:51:40.139219 3300 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:51:40.139388 kubelet[3300]: E1216 12:51:40.139375 3300 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:51:40.139388 kubelet[3300]: W1216 12:51:40.139385 3300 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:51:40.139388 kubelet[3300]: E1216 12:51:40.139393 3300 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:51:40.139580 kubelet[3300]: E1216 12:51:40.139568 3300 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:51:40.139580 kubelet[3300]: W1216 12:51:40.139578 3300 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:51:40.139645 kubelet[3300]: E1216 12:51:40.139585 3300 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:51:40.139825 kubelet[3300]: E1216 12:51:40.139811 3300 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:51:40.139825 kubelet[3300]: W1216 12:51:40.139822 3300 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:51:40.139899 kubelet[3300]: E1216 12:51:40.139829 3300 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:51:40.140056 kubelet[3300]: E1216 12:51:40.140043 3300 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:51:40.140056 kubelet[3300]: W1216 12:51:40.140055 3300 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:51:40.140121 kubelet[3300]: E1216 12:51:40.140064 3300 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:51:40.140235 kubelet[3300]: E1216 12:51:40.140231 3300 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:51:40.140308 kubelet[3300]: W1216 12:51:40.140237 3300 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:51:40.140308 kubelet[3300]: E1216 12:51:40.140244 3300 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:51:40.140420 kubelet[3300]: E1216 12:51:40.140391 3300 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:51:40.140420 kubelet[3300]: W1216 12:51:40.140405 3300 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:51:40.140420 kubelet[3300]: E1216 12:51:40.140412 3300 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:51:40.141163 kubelet[3300]: E1216 12:51:40.141145 3300 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:51:40.141163 kubelet[3300]: W1216 12:51:40.141157 3300 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:51:40.141163 kubelet[3300]: E1216 12:51:40.141166 3300 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:51:40.141345 kubelet[3300]: E1216 12:51:40.141335 3300 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:51:40.141345 kubelet[3300]: W1216 12:51:40.141344 3300 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:51:40.141423 kubelet[3300]: E1216 12:51:40.141352 3300 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:51:40.141532 kubelet[3300]: E1216 12:51:40.141510 3300 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:51:40.141532 kubelet[3300]: W1216 12:51:40.141526 3300 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:51:40.141624 kubelet[3300]: E1216 12:51:40.141538 3300 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:51:40.141726 kubelet[3300]: E1216 12:51:40.141712 3300 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:51:40.141726 kubelet[3300]: W1216 12:51:40.141722 3300 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:51:40.141787 kubelet[3300]: E1216 12:51:40.141730 3300 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:51:40.142000 kubelet[3300]: E1216 12:51:40.141985 3300 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:51:40.142093 kubelet[3300]: W1216 12:51:40.142007 3300 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:51:40.142093 kubelet[3300]: E1216 12:51:40.142016 3300 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:51:40.142435 kubelet[3300]: E1216 12:51:40.142419 3300 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:51:40.142435 kubelet[3300]: W1216 12:51:40.142430 3300 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:51:40.142502 kubelet[3300]: E1216 12:51:40.142441 3300 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:51:40.142621 kubelet[3300]: E1216 12:51:40.142611 3300 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:51:40.142621 kubelet[3300]: W1216 12:51:40.142620 3300 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:51:40.142699 kubelet[3300]: E1216 12:51:40.142628 3300 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:51:40.477708 containerd[1969]: time="2025-12-16T12:51:40.477653489Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:51:40.479653 containerd[1969]: time="2025-12-16T12:51:40.479444861Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4442579" Dec 16 12:51:40.481697 containerd[1969]: time="2025-12-16T12:51:40.481660659Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:51:40.486235 containerd[1969]: time="2025-12-16T12:51:40.485517292Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:51:40.486235 containerd[1969]: time="2025-12-16T12:51:40.486100495Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.799613174s" Dec 16 12:51:40.486235 containerd[1969]: time="2025-12-16T12:51:40.486133485Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Dec 16 12:51:40.492781 containerd[1969]: time="2025-12-16T12:51:40.492291040Z" level=info msg="CreateContainer within sandbox \"97f944ae25ac7930e09a78f3a9f32ef0866d46a776e8e4e2c79678199a147abe\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Dec 16 12:51:40.508104 containerd[1969]: time="2025-12-16T12:51:40.508064607Z" level=info msg="Container 24b2d042bdaba123468f4799d10277cabd7df8ad0fec39cdd10b4711841fc552: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:51:40.522805 containerd[1969]: time="2025-12-16T12:51:40.522761528Z" level=info msg="CreateContainer within sandbox \"97f944ae25ac7930e09a78f3a9f32ef0866d46a776e8e4e2c79678199a147abe\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"24b2d042bdaba123468f4799d10277cabd7df8ad0fec39cdd10b4711841fc552\"" Dec 16 12:51:40.524039 containerd[1969]: time="2025-12-16T12:51:40.524010176Z" level=info msg="StartContainer for \"24b2d042bdaba123468f4799d10277cabd7df8ad0fec39cdd10b4711841fc552\"" Dec 16 12:51:40.525595 containerd[1969]: time="2025-12-16T12:51:40.525565191Z" level=info msg="connecting to shim 24b2d042bdaba123468f4799d10277cabd7df8ad0fec39cdd10b4711841fc552" address="unix:///run/containerd/s/bda7b1d247d5aa4c1e6f3107d03d8d230ba8ee7e9f7b872557e87d600b5a4e6f" protocol=ttrpc version=3 Dec 16 12:51:40.549175 systemd[1]: Started cri-containerd-24b2d042bdaba123468f4799d10277cabd7df8ad0fec39cdd10b4711841fc552.scope - libcontainer container 
24b2d042bdaba123468f4799d10277cabd7df8ad0fec39cdd10b4711841fc552. Dec 16 12:51:40.596000 audit: BPF prog-id=176 op=LOAD Dec 16 12:51:40.596000 audit[4266]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=4081 pid=4266 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:51:40.596000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3234623264303432626461626131323334363866343739396431303237 Dec 16 12:51:40.596000 audit: BPF prog-id=177 op=LOAD Dec 16 12:51:40.596000 audit[4266]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=4081 pid=4266 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:51:40.596000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3234623264303432626461626131323334363866343739396431303237 Dec 16 12:51:40.596000 audit: BPF prog-id=177 op=UNLOAD Dec 16 12:51:40.596000 audit[4266]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4081 pid=4266 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:51:40.596000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3234623264303432626461626131323334363866343739396431303237 Dec 16 12:51:40.596000 audit: BPF prog-id=176 op=UNLOAD Dec 16 12:51:40.596000 audit[4266]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4081 pid=4266 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:51:40.596000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3234623264303432626461626131323334363866343739396431303237 Dec 16 12:51:40.596000 audit: BPF prog-id=178 op=LOAD Dec 16 12:51:40.596000 audit[4266]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=4081 pid=4266 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:51:40.596000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3234623264303432626461626131323334363866343739396431303237 Dec 16 12:51:40.625306 containerd[1969]: time="2025-12-16T12:51:40.624904134Z" level=info msg="StartContainer for \"24b2d042bdaba123468f4799d10277cabd7df8ad0fec39cdd10b4711841fc552\" returns successfully" Dec 16 12:51:40.635406 systemd[1]: cri-containerd-24b2d042bdaba123468f4799d10277cabd7df8ad0fec39cdd10b4711841fc552.scope: Deactivated successfully. 
Dec 16 12:51:40.637000 audit: BPF prog-id=178 op=UNLOAD Dec 16 12:51:40.651191 containerd[1969]: time="2025-12-16T12:51:40.651129685Z" level=info msg="received container exit event container_id:\"24b2d042bdaba123468f4799d10277cabd7df8ad0fec39cdd10b4711841fc552\" id:\"24b2d042bdaba123468f4799d10277cabd7df8ad0fec39cdd10b4711841fc552\" pid:4279 exited_at:{seconds:1765889500 nanos:639648477}" Dec 16 12:51:40.748731 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-24b2d042bdaba123468f4799d10277cabd7df8ad0fec39cdd10b4711841fc552-rootfs.mount: Deactivated successfully. Dec 16 12:51:40.922245 kubelet[3300]: E1216 12:51:40.921267 3300 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-l2s79" podUID="a2c9a6a7-124f-4bab-89b2-0cbb2fd935f0" Dec 16 12:51:41.084474 containerd[1969]: time="2025-12-16T12:51:41.084390884Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Dec 16 12:51:41.109305 kubelet[3300]: I1216 12:51:41.107307 3300 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-67fb684979-m6hdn" podStartSLOduration=3.776358395 podStartE2EDuration="7.105501124s" podCreationTimestamp="2025-12-16 12:51:34 +0000 UTC" firstStartedPulling="2025-12-16 12:51:35.357193055 +0000 UTC m=+26.594294992" lastFinishedPulling="2025-12-16 12:51:38.686335795 +0000 UTC m=+29.923437721" observedRunningTime="2025-12-16 12:51:39.18966137 +0000 UTC m=+30.426763309" watchObservedRunningTime="2025-12-16 12:51:41.105501124 +0000 UTC m=+32.342603110" Dec 16 12:51:42.922611 kubelet[3300]: E1216 12:51:42.922159 3300 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni 
plugin not initialized" pod="calico-system/csi-node-driver-l2s79" podUID="a2c9a6a7-124f-4bab-89b2-0cbb2fd935f0" Dec 16 12:51:44.923349 kubelet[3300]: E1216 12:51:44.923316 3300 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-l2s79" podUID="a2c9a6a7-124f-4bab-89b2-0cbb2fd935f0" Dec 16 12:51:46.922256 kubelet[3300]: E1216 12:51:46.921368 3300 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-l2s79" podUID="a2c9a6a7-124f-4bab-89b2-0cbb2fd935f0" Dec 16 12:51:47.180177 containerd[1969]: time="2025-12-16T12:51:47.179940466Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:51:47.181735 containerd[1969]: time="2025-12-16T12:51:47.181704284Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70442291" Dec 16 12:51:47.184114 containerd[1969]: time="2025-12-16T12:51:47.183956340Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:51:47.187276 containerd[1969]: time="2025-12-16T12:51:47.187228739Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:51:47.187714 containerd[1969]: time="2025-12-16T12:51:47.187691513Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id 
\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 6.103074134s" Dec 16 12:51:47.188449 containerd[1969]: time="2025-12-16T12:51:47.187719449Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Dec 16 12:51:47.194172 containerd[1969]: time="2025-12-16T12:51:47.194120012Z" level=info msg="CreateContainer within sandbox \"97f944ae25ac7930e09a78f3a9f32ef0866d46a776e8e4e2c79678199a147abe\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Dec 16 12:51:47.211680 containerd[1969]: time="2025-12-16T12:51:47.211401284Z" level=info msg="Container 2d2d301da3f016b87252d0769aaa4699e8ff336ccdb55b2a30ec509f697fc11a: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:51:47.233496 containerd[1969]: time="2025-12-16T12:51:47.233449068Z" level=info msg="CreateContainer within sandbox \"97f944ae25ac7930e09a78f3a9f32ef0866d46a776e8e4e2c79678199a147abe\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"2d2d301da3f016b87252d0769aaa4699e8ff336ccdb55b2a30ec509f697fc11a\"" Dec 16 12:51:47.234054 containerd[1969]: time="2025-12-16T12:51:47.234020877Z" level=info msg="StartContainer for \"2d2d301da3f016b87252d0769aaa4699e8ff336ccdb55b2a30ec509f697fc11a\"" Dec 16 12:51:47.236577 containerd[1969]: time="2025-12-16T12:51:47.235566875Z" level=info msg="connecting to shim 2d2d301da3f016b87252d0769aaa4699e8ff336ccdb55b2a30ec509f697fc11a" address="unix:///run/containerd/s/bda7b1d247d5aa4c1e6f3107d03d8d230ba8ee7e9f7b872557e87d600b5a4e6f" protocol=ttrpc version=3 Dec 16 12:51:47.271106 systemd[1]: Started cri-containerd-2d2d301da3f016b87252d0769aaa4699e8ff336ccdb55b2a30ec509f697fc11a.scope - libcontainer container 
2d2d301da3f016b87252d0769aaa4699e8ff336ccdb55b2a30ec509f697fc11a. Dec 16 12:51:47.317000 audit: BPF prog-id=179 op=LOAD Dec 16 12:51:47.320078 kernel: kauditd_printk_skb: 28 callbacks suppressed Dec 16 12:51:47.320124 kernel: audit: type=1334 audit(1765889507.317:587): prog-id=179 op=LOAD Dec 16 12:51:47.317000 audit[4322]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000138488 a2=98 a3=0 items=0 ppid=4081 pid=4322 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:51:47.322492 kernel: audit: type=1300 audit(1765889507.317:587): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000138488 a2=98 a3=0 items=0 ppid=4081 pid=4322 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:51:47.317000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3264326433303164613366303136623837323532643037363961616134 Dec 16 12:51:47.327596 kernel: audit: type=1327 audit(1765889507.317:587): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3264326433303164613366303136623837323532643037363961616134 Dec 16 12:51:47.318000 audit: BPF prog-id=180 op=LOAD Dec 16 12:51:47.332716 kernel: audit: type=1334 audit(1765889507.318:588): prog-id=180 op=LOAD Dec 16 12:51:47.318000 audit[4322]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000138218 a2=98 a3=0 items=0 ppid=4081 pid=4322 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:51:47.338949 kernel: audit: type=1300 audit(1765889507.318:588): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000138218 a2=98 a3=0 items=0 ppid=4081 pid=4322 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:51:47.318000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3264326433303164613366303136623837323532643037363961616134 Dec 16 12:51:47.348948 kernel: audit: type=1327 audit(1765889507.318:588): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3264326433303164613366303136623837323532643037363961616134 Dec 16 12:51:47.318000 audit: BPF prog-id=180 op=UNLOAD Dec 16 12:51:47.318000 audit[4322]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4081 pid=4322 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:51:47.352762 kernel: audit: type=1334 audit(1765889507.318:589): prog-id=180 op=UNLOAD Dec 16 12:51:47.352914 kernel: audit: type=1300 audit(1765889507.318:589): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4081 pid=4322 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:51:47.318000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3264326433303164613366303136623837323532643037363961616134 Dec 16 12:51:47.357551 kernel: audit: type=1327 audit(1765889507.318:589): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3264326433303164613366303136623837323532643037363961616134 Dec 16 12:51:47.318000 audit: BPF prog-id=179 op=UNLOAD Dec 16 12:51:47.363164 kernel: audit: type=1334 audit(1765889507.318:590): prog-id=179 op=UNLOAD Dec 16 12:51:47.318000 audit[4322]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4081 pid=4322 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:51:47.318000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3264326433303164613366303136623837323532643037363961616134 Dec 16 12:51:47.318000 audit: BPF prog-id=181 op=LOAD Dec 16 12:51:47.318000 audit[4322]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001386e8 a2=98 a3=0 items=0 ppid=4081 pid=4322 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:51:47.318000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3264326433303164613366303136623837323532643037363961616134 Dec 16 12:51:47.394003 containerd[1969]: time="2025-12-16T12:51:47.393967779Z" level=info msg="StartContainer for \"2d2d301da3f016b87252d0769aaa4699e8ff336ccdb55b2a30ec509f697fc11a\" returns successfully" Dec 16 12:51:48.324576 systemd[1]: cri-containerd-2d2d301da3f016b87252d0769aaa4699e8ff336ccdb55b2a30ec509f697fc11a.scope: Deactivated successfully. Dec 16 12:51:48.324973 systemd[1]: cri-containerd-2d2d301da3f016b87252d0769aaa4699e8ff336ccdb55b2a30ec509f697fc11a.scope: Consumed 521ms CPU time, 164.3M memory peak, 4.7M read from disk, 171.3M written to disk. Dec 16 12:51:48.327000 audit: BPF prog-id=181 op=UNLOAD Dec 16 12:51:48.348067 containerd[1969]: time="2025-12-16T12:51:48.348007330Z" level=info msg="received container exit event container_id:\"2d2d301da3f016b87252d0769aaa4699e8ff336ccdb55b2a30ec509f697fc11a\" id:\"2d2d301da3f016b87252d0769aaa4699e8ff336ccdb55b2a30ec509f697fc11a\" pid:4335 exited_at:{seconds:1765889508 nanos:327186683}" Dec 16 12:51:48.378846 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2d2d301da3f016b87252d0769aaa4699e8ff336ccdb55b2a30ec509f697fc11a-rootfs.mount: Deactivated successfully. Dec 16 12:51:48.455342 kubelet[3300]: I1216 12:51:48.455299 3300 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Dec 16 12:51:48.526037 systemd[1]: Created slice kubepods-burstable-pod6847d7c3_2098_48b8_8794_1b664366eb8c.slice - libcontainer container kubepods-burstable-pod6847d7c3_2098_48b8_8794_1b664366eb8c.slice. Dec 16 12:51:48.543330 systemd[1]: Created slice kubepods-besteffort-pod5fa2e950_afe9_46cc_8df6_b014aa90c32b.slice - libcontainer container kubepods-besteffort-pod5fa2e950_afe9_46cc_8df6_b014aa90c32b.slice. 
Dec 16 12:51:48.553958 systemd[1]: Created slice kubepods-burstable-pod677a62d7_b648_43b1_ac2f_6337ecae67c0.slice - libcontainer container kubepods-burstable-pod677a62d7_b648_43b1_ac2f_6337ecae67c0.slice. Dec 16 12:51:48.563785 systemd[1]: Created slice kubepods-besteffort-pod97ebcca4_f43d_4a70_b294_e93b18442671.slice - libcontainer container kubepods-besteffort-pod97ebcca4_f43d_4a70_b294_e93b18442671.slice. Dec 16 12:51:48.573572 systemd[1]: Created slice kubepods-besteffort-pod4f37d126_32e9_4a2f_b0c1_8df2ac964398.slice - libcontainer container kubepods-besteffort-pod4f37d126_32e9_4a2f_b0c1_8df2ac964398.slice. Dec 16 12:51:48.582022 systemd[1]: Created slice kubepods-besteffort-pod39a10cdc_f6c5_430e_ac11_7f9183d0c949.slice - libcontainer container kubepods-besteffort-pod39a10cdc_f6c5_430e_ac11_7f9183d0c949.slice. Dec 16 12:51:48.590820 systemd[1]: Created slice kubepods-besteffort-pod07dabb60_b494_485b_be91_7522183aff41.slice - libcontainer container kubepods-besteffort-pod07dabb60_b494_485b_be91_7522183aff41.slice. Dec 16 12:51:48.597897 systemd[1]: Created slice kubepods-besteffort-pod631dcf21_0085_4dd5_b7a8_35fb5c10f8ab.slice - libcontainer container kubepods-besteffort-pod631dcf21_0085_4dd5_b7a8_35fb5c10f8ab.slice. 
Dec 16 12:51:48.704177 kubelet[3300]: I1216 12:51:48.704089 3300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gm8jc\" (UniqueName: \"kubernetes.io/projected/631dcf21-0085-4dd5-b7a8-35fb5c10f8ab-kube-api-access-gm8jc\") pod \"calico-apiserver-6fb4c65f97-rg8xf\" (UID: \"631dcf21-0085-4dd5-b7a8-35fb5c10f8ab\") " pod="calico-apiserver/calico-apiserver-6fb4c65f97-rg8xf" Dec 16 12:51:48.704391 kubelet[3300]: I1216 12:51:48.704338 3300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6847d7c3-2098-48b8-8794-1b664366eb8c-config-volume\") pod \"coredns-674b8bbfcf-j62tw\" (UID: \"6847d7c3-2098-48b8-8794-1b664366eb8c\") " pod="kube-system/coredns-674b8bbfcf-j62tw" Dec 16 12:51:48.704482 kubelet[3300]: I1216 12:51:48.704390 3300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/5fa2e950-afe9-46cc-8df6-b014aa90c32b-calico-apiserver-certs\") pod \"calico-apiserver-6fb4c65f97-cxgtk\" (UID: \"5fa2e950-afe9-46cc-8df6-b014aa90c32b\") " pod="calico-apiserver/calico-apiserver-6fb4c65f97-cxgtk" Dec 16 12:51:48.704482 kubelet[3300]: I1216 12:51:48.704430 3300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z28sw\" (UniqueName: \"kubernetes.io/projected/97ebcca4-f43d-4a70-b294-e93b18442671-kube-api-access-z28sw\") pod \"calico-kube-controllers-5b97648649-sj8kb\" (UID: \"97ebcca4-f43d-4a70-b294-e93b18442671\") " pod="calico-system/calico-kube-controllers-5b97648649-sj8kb" Dec 16 12:51:48.704482 kubelet[3300]: I1216 12:51:48.704452 3300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wtqck\" (UniqueName: 
\"kubernetes.io/projected/6847d7c3-2098-48b8-8794-1b664366eb8c-kube-api-access-wtqck\") pod \"coredns-674b8bbfcf-j62tw\" (UID: \"6847d7c3-2098-48b8-8794-1b664366eb8c\") " pod="kube-system/coredns-674b8bbfcf-j62tw" Dec 16 12:51:48.704590 kubelet[3300]: I1216 12:51:48.704485 3300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/39a10cdc-f6c5-430e-ac11-7f9183d0c949-goldmane-ca-bundle\") pod \"goldmane-666569f655-8gcbc\" (UID: \"39a10cdc-f6c5-430e-ac11-7f9183d0c949\") " pod="calico-system/goldmane-666569f655-8gcbc" Dec 16 12:51:48.704590 kubelet[3300]: I1216 12:51:48.704502 3300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/39a10cdc-f6c5-430e-ac11-7f9183d0c949-goldmane-key-pair\") pod \"goldmane-666569f655-8gcbc\" (UID: \"39a10cdc-f6c5-430e-ac11-7f9183d0c949\") " pod="calico-system/goldmane-666569f655-8gcbc" Dec 16 12:51:48.704590 kubelet[3300]: I1216 12:51:48.704518 3300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/07dabb60-b494-485b-be91-7522183aff41-calico-apiserver-certs\") pod \"calico-apiserver-7b5d97d74b-j9kr5\" (UID: \"07dabb60-b494-485b-be91-7522183aff41\") " pod="calico-apiserver/calico-apiserver-7b5d97d74b-j9kr5" Dec 16 12:51:48.704590 kubelet[3300]: I1216 12:51:48.704563 3300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/677a62d7-b648-43b1-ac2f-6337ecae67c0-config-volume\") pod \"coredns-674b8bbfcf-rscvc\" (UID: \"677a62d7-b648-43b1-ac2f-6337ecae67c0\") " pod="kube-system/coredns-674b8bbfcf-rscvc" Dec 16 12:51:48.704590 kubelet[3300]: I1216 12:51:48.704581 3300 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/631dcf21-0085-4dd5-b7a8-35fb5c10f8ab-calico-apiserver-certs\") pod \"calico-apiserver-6fb4c65f97-rg8xf\" (UID: \"631dcf21-0085-4dd5-b7a8-35fb5c10f8ab\") " pod="calico-apiserver/calico-apiserver-6fb4c65f97-rg8xf" Dec 16 12:51:48.704723 kubelet[3300]: I1216 12:51:48.704597 3300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-spx94\" (UniqueName: \"kubernetes.io/projected/07dabb60-b494-485b-be91-7522183aff41-kube-api-access-spx94\") pod \"calico-apiserver-7b5d97d74b-j9kr5\" (UID: \"07dabb60-b494-485b-be91-7522183aff41\") " pod="calico-apiserver/calico-apiserver-7b5d97d74b-j9kr5" Dec 16 12:51:48.704723 kubelet[3300]: I1216 12:51:48.704616 3300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/97ebcca4-f43d-4a70-b294-e93b18442671-tigera-ca-bundle\") pod \"calico-kube-controllers-5b97648649-sj8kb\" (UID: \"97ebcca4-f43d-4a70-b294-e93b18442671\") " pod="calico-system/calico-kube-controllers-5b97648649-sj8kb" Dec 16 12:51:48.704723 kubelet[3300]: I1216 12:51:48.704644 3300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ngxfr\" (UniqueName: \"kubernetes.io/projected/4f37d126-32e9-4a2f-b0c1-8df2ac964398-kube-api-access-ngxfr\") pod \"whisker-649b454465-8f5vt\" (UID: \"4f37d126-32e9-4a2f-b0c1-8df2ac964398\") " pod="calico-system/whisker-649b454465-8f5vt" Dec 16 12:51:48.704723 kubelet[3300]: I1216 12:51:48.704661 3300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldh2g\" (UniqueName: \"kubernetes.io/projected/677a62d7-b648-43b1-ac2f-6337ecae67c0-kube-api-access-ldh2g\") pod \"coredns-674b8bbfcf-rscvc\" (UID: \"677a62d7-b648-43b1-ac2f-6337ecae67c0\") 
" pod="kube-system/coredns-674b8bbfcf-rscvc" Dec 16 12:51:48.704723 kubelet[3300]: I1216 12:51:48.704681 3300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4f37d126-32e9-4a2f-b0c1-8df2ac964398-whisker-ca-bundle\") pod \"whisker-649b454465-8f5vt\" (UID: \"4f37d126-32e9-4a2f-b0c1-8df2ac964398\") " pod="calico-system/whisker-649b454465-8f5vt" Dec 16 12:51:48.705005 kubelet[3300]: I1216 12:51:48.704697 3300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/39a10cdc-f6c5-430e-ac11-7f9183d0c949-config\") pod \"goldmane-666569f655-8gcbc\" (UID: \"39a10cdc-f6c5-430e-ac11-7f9183d0c949\") " pod="calico-system/goldmane-666569f655-8gcbc" Dec 16 12:51:48.705005 kubelet[3300]: I1216 12:51:48.704725 3300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrjjg\" (UniqueName: \"kubernetes.io/projected/39a10cdc-f6c5-430e-ac11-7f9183d0c949-kube-api-access-hrjjg\") pod \"goldmane-666569f655-8gcbc\" (UID: \"39a10cdc-f6c5-430e-ac11-7f9183d0c949\") " pod="calico-system/goldmane-666569f655-8gcbc" Dec 16 12:51:48.705005 kubelet[3300]: I1216 12:51:48.704745 3300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzbm2\" (UniqueName: \"kubernetes.io/projected/5fa2e950-afe9-46cc-8df6-b014aa90c32b-kube-api-access-hzbm2\") pod \"calico-apiserver-6fb4c65f97-cxgtk\" (UID: \"5fa2e950-afe9-46cc-8df6-b014aa90c32b\") " pod="calico-apiserver/calico-apiserver-6fb4c65f97-cxgtk" Dec 16 12:51:48.705005 kubelet[3300]: I1216 12:51:48.704888 3300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/4f37d126-32e9-4a2f-b0c1-8df2ac964398-whisker-backend-key-pair\") pod 
\"whisker-649b454465-8f5vt\" (UID: \"4f37d126-32e9-4a2f-b0c1-8df2ac964398\") " pod="calico-system/whisker-649b454465-8f5vt" Dec 16 12:51:48.857046 containerd[1969]: time="2025-12-16T12:51:48.856130753Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-j62tw,Uid:6847d7c3-2098-48b8-8794-1b664366eb8c,Namespace:kube-system,Attempt:0,}" Dec 16 12:51:48.868724 containerd[1969]: time="2025-12-16T12:51:48.868688192Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5b97648649-sj8kb,Uid:97ebcca4-f43d-4a70-b294-e93b18442671,Namespace:calico-system,Attempt:0,}" Dec 16 12:51:48.886546 containerd[1969]: time="2025-12-16T12:51:48.886502576Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-649b454465-8f5vt,Uid:4f37d126-32e9-4a2f-b0c1-8df2ac964398,Namespace:calico-system,Attempt:0,}" Dec 16 12:51:48.888830 containerd[1969]: time="2025-12-16T12:51:48.888743571Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-8gcbc,Uid:39a10cdc-f6c5-430e-ac11-7f9183d0c949,Namespace:calico-system,Attempt:0,}" Dec 16 12:51:48.930608 systemd[1]: Created slice kubepods-besteffort-poda2c9a6a7_124f_4bab_89b2_0cbb2fd935f0.slice - libcontainer container kubepods-besteffort-poda2c9a6a7_124f_4bab_89b2_0cbb2fd935f0.slice. 
Dec 16 12:51:48.932747 containerd[1969]: time="2025-12-16T12:51:48.932721957Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b5d97d74b-j9kr5,Uid:07dabb60-b494-485b-be91-7522183aff41,Namespace:calico-apiserver,Attempt:0,}" Dec 16 12:51:48.933846 containerd[1969]: time="2025-12-16T12:51:48.933505507Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6fb4c65f97-rg8xf,Uid:631dcf21-0085-4dd5-b7a8-35fb5c10f8ab,Namespace:calico-apiserver,Attempt:0,}" Dec 16 12:51:48.934587 containerd[1969]: time="2025-12-16T12:51:48.934567667Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-l2s79,Uid:a2c9a6a7-124f-4bab-89b2-0cbb2fd935f0,Namespace:calico-system,Attempt:0,}" Dec 16 12:51:49.149721 containerd[1969]: time="2025-12-16T12:51:49.149684273Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6fb4c65f97-cxgtk,Uid:5fa2e950-afe9-46cc-8df6-b014aa90c32b,Namespace:calico-apiserver,Attempt:0,}" Dec 16 12:51:49.164112 containerd[1969]: time="2025-12-16T12:51:49.164078992Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Dec 16 12:51:49.164688 containerd[1969]: time="2025-12-16T12:51:49.164659258Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-rscvc,Uid:677a62d7-b648-43b1-ac2f-6337ecae67c0,Namespace:kube-system,Attempt:0,}" Dec 16 12:51:49.223485 containerd[1969]: time="2025-12-16T12:51:49.223421434Z" level=error msg="Failed to destroy network for sandbox \"f9442e9826682712317b731981d7a488e8d7bb0915e06265e8611b9aec4954e8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:51:49.233467 containerd[1969]: time="2025-12-16T12:51:49.233113728Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:whisker-649b454465-8f5vt,Uid:4f37d126-32e9-4a2f-b0c1-8df2ac964398,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f9442e9826682712317b731981d7a488e8d7bb0915e06265e8611b9aec4954e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:51:49.236446 kubelet[3300]: E1216 12:51:49.236403 3300 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f9442e9826682712317b731981d7a488e8d7bb0915e06265e8611b9aec4954e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:51:49.236575 kubelet[3300]: E1216 12:51:49.236469 3300 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f9442e9826682712317b731981d7a488e8d7bb0915e06265e8611b9aec4954e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-649b454465-8f5vt" Dec 16 12:51:49.236575 kubelet[3300]: E1216 12:51:49.236490 3300 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f9442e9826682712317b731981d7a488e8d7bb0915e06265e8611b9aec4954e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-649b454465-8f5vt" Dec 16 12:51:49.236575 kubelet[3300]: E1216 12:51:49.236537 3300 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"CreatePodSandbox\" for \"whisker-649b454465-8f5vt_calico-system(4f37d126-32e9-4a2f-b0c1-8df2ac964398)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-649b454465-8f5vt_calico-system(4f37d126-32e9-4a2f-b0c1-8df2ac964398)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f9442e9826682712317b731981d7a488e8d7bb0915e06265e8611b9aec4954e8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-649b454465-8f5vt" podUID="4f37d126-32e9-4a2f-b0c1-8df2ac964398" Dec 16 12:51:49.279543 containerd[1969]: time="2025-12-16T12:51:49.279502930Z" level=error msg="Failed to destroy network for sandbox \"ff84c740b78f56e7d5efb5ad1288c200d50033391e134173072952b57d177797\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:51:49.280472 containerd[1969]: time="2025-12-16T12:51:49.280418879Z" level=error msg="Failed to destroy network for sandbox \"7c2162907316679577a4ea4c967e372141f885522e38baee5d25f94611decf42\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:51:49.284716 containerd[1969]: time="2025-12-16T12:51:49.284676388Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-l2s79,Uid:a2c9a6a7-124f-4bab-89b2-0cbb2fd935f0,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ff84c740b78f56e7d5efb5ad1288c200d50033391e134173072952b57d177797\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Dec 16 12:51:49.286160 kubelet[3300]: E1216 12:51:49.286071 3300 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ff84c740b78f56e7d5efb5ad1288c200d50033391e134173072952b57d177797\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:51:49.286338 kubelet[3300]: E1216 12:51:49.286268 3300 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ff84c740b78f56e7d5efb5ad1288c200d50033391e134173072952b57d177797\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-l2s79" Dec 16 12:51:49.286491 kubelet[3300]: E1216 12:51:49.286292 3300 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ff84c740b78f56e7d5efb5ad1288c200d50033391e134173072952b57d177797\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-l2s79" Dec 16 12:51:49.287493 kubelet[3300]: E1216 12:51:49.287384 3300 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-l2s79_calico-system(a2c9a6a7-124f-4bab-89b2-0cbb2fd935f0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-l2s79_calico-system(a2c9a6a7-124f-4bab-89b2-0cbb2fd935f0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ff84c740b78f56e7d5efb5ad1288c200d50033391e134173072952b57d177797\\\": 
plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-l2s79" podUID="a2c9a6a7-124f-4bab-89b2-0cbb2fd935f0" Dec 16 12:51:49.287918 containerd[1969]: time="2025-12-16T12:51:49.287829749Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6fb4c65f97-rg8xf,Uid:631dcf21-0085-4dd5-b7a8-35fb5c10f8ab,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7c2162907316679577a4ea4c967e372141f885522e38baee5d25f94611decf42\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:51:49.288368 kubelet[3300]: E1216 12:51:49.288339 3300 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7c2162907316679577a4ea4c967e372141f885522e38baee5d25f94611decf42\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:51:49.288686 kubelet[3300]: E1216 12:51:49.288377 3300 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7c2162907316679577a4ea4c967e372141f885522e38baee5d25f94611decf42\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6fb4c65f97-rg8xf" Dec 16 12:51:49.288686 kubelet[3300]: E1216 12:51:49.288397 3300 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network 
for sandbox \"7c2162907316679577a4ea4c967e372141f885522e38baee5d25f94611decf42\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6fb4c65f97-rg8xf" Dec 16 12:51:49.288686 kubelet[3300]: E1216 12:51:49.288445 3300 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6fb4c65f97-rg8xf_calico-apiserver(631dcf21-0085-4dd5-b7a8-35fb5c10f8ab)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6fb4c65f97-rg8xf_calico-apiserver(631dcf21-0085-4dd5-b7a8-35fb5c10f8ab)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7c2162907316679577a4ea4c967e372141f885522e38baee5d25f94611decf42\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6fb4c65f97-rg8xf" podUID="631dcf21-0085-4dd5-b7a8-35fb5c10f8ab" Dec 16 12:51:49.296723 containerd[1969]: time="2025-12-16T12:51:49.296683053Z" level=error msg="Failed to destroy network for sandbox \"b00a8f24a344685edef796239b92bf72014dfb699861162ba0805e8aee2e49b8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:51:49.301579 containerd[1969]: time="2025-12-16T12:51:49.301473235Z" level=error msg="Failed to destroy network for sandbox \"ec300563cd7b481714afd7230dd585d360861f1fae46111a2ae99bfb7c003d53\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:51:49.302395 containerd[1969]: 
time="2025-12-16T12:51:49.302358744Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-8gcbc,Uid:39a10cdc-f6c5-430e-ac11-7f9183d0c949,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b00a8f24a344685edef796239b92bf72014dfb699861162ba0805e8aee2e49b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:51:49.303062 kubelet[3300]: E1216 12:51:49.302536 3300 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b00a8f24a344685edef796239b92bf72014dfb699861162ba0805e8aee2e49b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:51:49.303062 kubelet[3300]: E1216 12:51:49.302590 3300 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b00a8f24a344685edef796239b92bf72014dfb699861162ba0805e8aee2e49b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-8gcbc" Dec 16 12:51:49.303062 kubelet[3300]: E1216 12:51:49.302610 3300 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b00a8f24a344685edef796239b92bf72014dfb699861162ba0805e8aee2e49b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-8gcbc" Dec 16 12:51:49.303303 
kubelet[3300]: E1216 12:51:49.302653 3300 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-8gcbc_calico-system(39a10cdc-f6c5-430e-ac11-7f9183d0c949)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-8gcbc_calico-system(39a10cdc-f6c5-430e-ac11-7f9183d0c949)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b00a8f24a344685edef796239b92bf72014dfb699861162ba0805e8aee2e49b8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-8gcbc" podUID="39a10cdc-f6c5-430e-ac11-7f9183d0c949" Dec 16 12:51:49.303449 containerd[1969]: time="2025-12-16T12:51:49.303142206Z" level=error msg="Failed to destroy network for sandbox \"5a1548589285d67cd779fb9f0c405b32e95a201badb6eb7273fd6d78f6b62a17\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:51:49.306719 containerd[1969]: time="2025-12-16T12:51:49.306683066Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b5d97d74b-j9kr5,Uid:07dabb60-b494-485b-be91-7522183aff41,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ec300563cd7b481714afd7230dd585d360861f1fae46111a2ae99bfb7c003d53\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:51:49.307198 kubelet[3300]: E1216 12:51:49.306953 3300 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"ec300563cd7b481714afd7230dd585d360861f1fae46111a2ae99bfb7c003d53\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:51:49.307198 kubelet[3300]: E1216 12:51:49.307012 3300 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ec300563cd7b481714afd7230dd585d360861f1fae46111a2ae99bfb7c003d53\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7b5d97d74b-j9kr5" Dec 16 12:51:49.307198 kubelet[3300]: E1216 12:51:49.307033 3300 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ec300563cd7b481714afd7230dd585d360861f1fae46111a2ae99bfb7c003d53\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7b5d97d74b-j9kr5" Dec 16 12:51:49.308293 kubelet[3300]: E1216 12:51:49.307083 3300 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7b5d97d74b-j9kr5_calico-apiserver(07dabb60-b494-485b-be91-7522183aff41)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7b5d97d74b-j9kr5_calico-apiserver(07dabb60-b494-485b-be91-7522183aff41)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ec300563cd7b481714afd7230dd585d360861f1fae46111a2ae99bfb7c003d53\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-apiserver/calico-apiserver-7b5d97d74b-j9kr5" podUID="07dabb60-b494-485b-be91-7522183aff41" Dec 16 12:51:49.313379 containerd[1969]: time="2025-12-16T12:51:49.313339071Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-j62tw,Uid:6847d7c3-2098-48b8-8794-1b664366eb8c,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5a1548589285d67cd779fb9f0c405b32e95a201badb6eb7273fd6d78f6b62a17\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:51:49.313544 kubelet[3300]: E1216 12:51:49.313508 3300 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5a1548589285d67cd779fb9f0c405b32e95a201badb6eb7273fd6d78f6b62a17\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:51:49.313595 kubelet[3300]: E1216 12:51:49.313553 3300 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5a1548589285d67cd779fb9f0c405b32e95a201badb6eb7273fd6d78f6b62a17\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-j62tw" Dec 16 12:51:49.313595 kubelet[3300]: E1216 12:51:49.313583 3300 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5a1548589285d67cd779fb9f0c405b32e95a201badb6eb7273fd6d78f6b62a17\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-j62tw" Dec 16 12:51:49.313670 kubelet[3300]: E1216 12:51:49.313629 3300 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-j62tw_kube-system(6847d7c3-2098-48b8-8794-1b664366eb8c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-j62tw_kube-system(6847d7c3-2098-48b8-8794-1b664366eb8c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5a1548589285d67cd779fb9f0c405b32e95a201badb6eb7273fd6d78f6b62a17\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-j62tw" podUID="6847d7c3-2098-48b8-8794-1b664366eb8c" Dec 16 12:51:49.319715 containerd[1969]: time="2025-12-16T12:51:49.319615419Z" level=error msg="Failed to destroy network for sandbox \"b86a6bbca3684731e49096b08febd7ab35f278658e9c84ce602a42821fb42bbc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:51:49.324621 containerd[1969]: time="2025-12-16T12:51:49.324579702Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5b97648649-sj8kb,Uid:97ebcca4-f43d-4a70-b294-e93b18442671,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b86a6bbca3684731e49096b08febd7ab35f278658e9c84ce602a42821fb42bbc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:51:49.325161 kubelet[3300]: E1216 12:51:49.324991 3300 log.go:32] "RunPodSandbox from runtime 
service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b86a6bbca3684731e49096b08febd7ab35f278658e9c84ce602a42821fb42bbc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:51:49.325161 kubelet[3300]: E1216 12:51:49.325042 3300 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b86a6bbca3684731e49096b08febd7ab35f278658e9c84ce602a42821fb42bbc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5b97648649-sj8kb" Dec 16 12:51:49.325161 kubelet[3300]: E1216 12:51:49.325072 3300 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b86a6bbca3684731e49096b08febd7ab35f278658e9c84ce602a42821fb42bbc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5b97648649-sj8kb" Dec 16 12:51:49.325317 kubelet[3300]: E1216 12:51:49.325124 3300 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5b97648649-sj8kb_calico-system(97ebcca4-f43d-4a70-b294-e93b18442671)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5b97648649-sj8kb_calico-system(97ebcca4-f43d-4a70-b294-e93b18442671)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b86a6bbca3684731e49096b08febd7ab35f278658e9c84ce602a42821fb42bbc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5b97648649-sj8kb" podUID="97ebcca4-f43d-4a70-b294-e93b18442671" Dec 16 12:51:49.344931 containerd[1969]: time="2025-12-16T12:51:49.344796387Z" level=error msg="Failed to destroy network for sandbox \"424580c7198337b9f950924cd1f6140ecab44d3e617e043c485ff62007d1302a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:51:49.349631 containerd[1969]: time="2025-12-16T12:51:49.349590499Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-rscvc,Uid:677a62d7-b648-43b1-ac2f-6337ecae67c0,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"424580c7198337b9f950924cd1f6140ecab44d3e617e043c485ff62007d1302a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:51:49.350634 kubelet[3300]: E1216 12:51:49.350594 3300 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"424580c7198337b9f950924cd1f6140ecab44d3e617e043c485ff62007d1302a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:51:49.350898 kubelet[3300]: E1216 12:51:49.350797 3300 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"424580c7198337b9f950924cd1f6140ecab44d3e617e043c485ff62007d1302a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-rscvc" Dec 16 12:51:49.350898 kubelet[3300]: E1216 12:51:49.350861 3300 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"424580c7198337b9f950924cd1f6140ecab44d3e617e043c485ff62007d1302a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-rscvc" Dec 16 12:51:49.351189 kubelet[3300]: E1216 12:51:49.350960 3300 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-rscvc_kube-system(677a62d7-b648-43b1-ac2f-6337ecae67c0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-rscvc_kube-system(677a62d7-b648-43b1-ac2f-6337ecae67c0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"424580c7198337b9f950924cd1f6140ecab44d3e617e043c485ff62007d1302a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-rscvc" podUID="677a62d7-b648-43b1-ac2f-6337ecae67c0" Dec 16 12:51:49.364571 containerd[1969]: time="2025-12-16T12:51:49.364518079Z" level=error msg="Failed to destroy network for sandbox \"ee992cdc9a46b33f4de72f95f7f00f0c70f83d47949c14b9406ea0257203468b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:51:49.369405 containerd[1969]: time="2025-12-16T12:51:49.369344665Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-6fb4c65f97-cxgtk,Uid:5fa2e950-afe9-46cc-8df6-b014aa90c32b,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ee992cdc9a46b33f4de72f95f7f00f0c70f83d47949c14b9406ea0257203468b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:51:49.369699 kubelet[3300]: E1216 12:51:49.369637 3300 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ee992cdc9a46b33f4de72f95f7f00f0c70f83d47949c14b9406ea0257203468b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:51:49.369779 kubelet[3300]: E1216 12:51:49.369726 3300 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ee992cdc9a46b33f4de72f95f7f00f0c70f83d47949c14b9406ea0257203468b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6fb4c65f97-cxgtk" Dec 16 12:51:49.369830 kubelet[3300]: E1216 12:51:49.369775 3300 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ee992cdc9a46b33f4de72f95f7f00f0c70f83d47949c14b9406ea0257203468b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6fb4c65f97-cxgtk" Dec 16 12:51:49.370193 kubelet[3300]: E1216 12:51:49.369873 3300 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6fb4c65f97-cxgtk_calico-apiserver(5fa2e950-afe9-46cc-8df6-b014aa90c32b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6fb4c65f97-cxgtk_calico-apiserver(5fa2e950-afe9-46cc-8df6-b014aa90c32b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ee992cdc9a46b33f4de72f95f7f00f0c70f83d47949c14b9406ea0257203468b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6fb4c65f97-cxgtk" podUID="5fa2e950-afe9-46cc-8df6-b014aa90c32b" Dec 16 12:51:55.752247 kubelet[3300]: I1216 12:51:55.752147 3300 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 16 12:51:55.948000 audit[4620]: NETFILTER_CFG table=filter:119 family=2 entries=21 op=nft_register_rule pid=4620 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:51:55.951808 kernel: kauditd_printk_skb: 6 callbacks suppressed Dec 16 12:51:55.956416 kernel: audit: type=1325 audit(1765889515.948:593): table=filter:119 family=2 entries=21 op=nft_register_rule pid=4620 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:51:55.948000 audit[4620]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffc66293630 a2=0 a3=7ffc6629361c items=0 ppid=3722 pid=4620 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:51:55.966715 kernel: audit: type=1300 audit(1765889515.948:593): arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffc66293630 a2=0 a3=7ffc6629361c items=0 ppid=3722 pid=4620 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:51:55.966817 kernel: audit: type=1327 audit(1765889515.948:593): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:51:55.948000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:51:55.956000 audit[4620]: NETFILTER_CFG table=nat:120 family=2 entries=19 op=nft_register_chain pid=4620 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:51:55.956000 audit[4620]: SYSCALL arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffc66293630 a2=0 a3=7ffc6629361c items=0 ppid=3722 pid=4620 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:51:55.975182 kernel: audit: type=1325 audit(1765889515.956:594): table=nat:120 family=2 entries=19 op=nft_register_chain pid=4620 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:51:55.975239 kernel: audit: type=1300 audit(1765889515.956:594): arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffc66293630 a2=0 a3=7ffc6629361c items=0 ppid=3722 pid=4620 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:51:55.956000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:51:55.981876 kernel: audit: type=1327 audit(1765889515.956:594): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:51:57.577863 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount3195533349.mount: Deactivated successfully. Dec 16 12:51:57.669582 containerd[1969]: time="2025-12-16T12:51:57.644000223Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:51:57.684506 containerd[1969]: time="2025-12-16T12:51:57.684436274Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156880025" Dec 16 12:51:57.753898 containerd[1969]: time="2025-12-16T12:51:57.753832892Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:51:57.757292 containerd[1969]: time="2025-12-16T12:51:57.757131680Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:51:57.757702 containerd[1969]: time="2025-12-16T12:51:57.757557609Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 8.593279933s" Dec 16 12:51:57.771237 containerd[1969]: time="2025-12-16T12:51:57.771135929Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Dec 16 12:51:57.917160 containerd[1969]: time="2025-12-16T12:51:57.917108548Z" level=info msg="CreateContainer within sandbox \"97f944ae25ac7930e09a78f3a9f32ef0866d46a776e8e4e2c79678199a147abe\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Dec 16 
12:51:58.016829 containerd[1969]: time="2025-12-16T12:51:58.016621969Z" level=info msg="Container 0876d9f8656c9a4800623e6bdbbed4c33d244f68bd97ba19e7f32b835abfcc07: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:51:58.020515 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3433465942.mount: Deactivated successfully. Dec 16 12:51:58.089948 containerd[1969]: time="2025-12-16T12:51:58.089875837Z" level=info msg="CreateContainer within sandbox \"97f944ae25ac7930e09a78f3a9f32ef0866d46a776e8e4e2c79678199a147abe\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"0876d9f8656c9a4800623e6bdbbed4c33d244f68bd97ba19e7f32b835abfcc07\"" Dec 16 12:51:58.099950 containerd[1969]: time="2025-12-16T12:51:58.099896462Z" level=info msg="StartContainer for \"0876d9f8656c9a4800623e6bdbbed4c33d244f68bd97ba19e7f32b835abfcc07\"" Dec 16 12:51:58.108431 containerd[1969]: time="2025-12-16T12:51:58.108391111Z" level=info msg="connecting to shim 0876d9f8656c9a4800623e6bdbbed4c33d244f68bd97ba19e7f32b835abfcc07" address="unix:///run/containerd/s/bda7b1d247d5aa4c1e6f3107d03d8d230ba8ee7e9f7b872557e87d600b5a4e6f" protocol=ttrpc version=3 Dec 16 12:51:58.228325 systemd[1]: Started cri-containerd-0876d9f8656c9a4800623e6bdbbed4c33d244f68bd97ba19e7f32b835abfcc07.scope - libcontainer container 0876d9f8656c9a4800623e6bdbbed4c33d244f68bd97ba19e7f32b835abfcc07. 
Dec 16 12:51:58.293000 audit: BPF prog-id=182 op=LOAD Dec 16 12:51:58.296512 kernel: audit: type=1334 audit(1765889518.293:595): prog-id=182 op=LOAD Dec 16 12:51:58.293000 audit[4623]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4081 pid=4623 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:51:58.303951 kernel: audit: type=1300 audit(1765889518.293:595): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4081 pid=4623 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:51:58.293000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3038373664396638363536633961343830303632336536626462626564 Dec 16 12:51:58.313193 kernel: audit: type=1327 audit(1765889518.293:595): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3038373664396638363536633961343830303632336536626462626564 Dec 16 12:51:58.313290 kernel: audit: type=1334 audit(1765889518.295:596): prog-id=183 op=LOAD Dec 16 12:51:58.295000 audit: BPF prog-id=183 op=LOAD Dec 16 12:51:58.295000 audit[4623]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=4081 pid=4623 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:51:58.295000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3038373664396638363536633961343830303632336536626462626564 Dec 16 12:51:58.295000 audit: BPF prog-id=183 op=UNLOAD Dec 16 12:51:58.295000 audit[4623]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4081 pid=4623 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:51:58.295000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3038373664396638363536633961343830303632336536626462626564 Dec 16 12:51:58.295000 audit: BPF prog-id=182 op=UNLOAD Dec 16 12:51:58.295000 audit[4623]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4081 pid=4623 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:51:58.295000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3038373664396638363536633961343830303632336536626462626564 Dec 16 12:51:58.295000 audit: BPF prog-id=184 op=LOAD Dec 16 12:51:58.295000 audit[4623]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=4081 pid=4623 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 
12:51:58.295000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3038373664396638363536633961343830303632336536626462626564 Dec 16 12:51:58.369274 containerd[1969]: time="2025-12-16T12:51:58.369042709Z" level=info msg="StartContainer for \"0876d9f8656c9a4800623e6bdbbed4c33d244f68bd97ba19e7f32b835abfcc07\" returns successfully" Dec 16 12:51:59.238474 kubelet[3300]: I1216 12:51:59.232398 3300 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-qsttg" podStartSLOduration=1.948608692 podStartE2EDuration="24.223874421s" podCreationTimestamp="2025-12-16 12:51:35 +0000 UTC" firstStartedPulling="2025-12-16 12:51:35.507092632 +0000 UTC m=+26.744194556" lastFinishedPulling="2025-12-16 12:51:57.782358351 +0000 UTC m=+49.019460285" observedRunningTime="2025-12-16 12:51:59.22312779 +0000 UTC m=+50.460229761" watchObservedRunningTime="2025-12-16 12:51:59.223874421 +0000 UTC m=+50.460976364" Dec 16 12:52:00.924878 containerd[1969]: time="2025-12-16T12:52:00.924820638Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-rscvc,Uid:677a62d7-b648-43b1-ac2f-6337ecae67c0,Namespace:kube-system,Attempt:0,}" Dec 16 12:52:00.925541 containerd[1969]: time="2025-12-16T12:52:00.925023143Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-8gcbc,Uid:39a10cdc-f6c5-430e-ac11-7f9183d0c949,Namespace:calico-system,Attempt:0,}" Dec 16 12:52:00.925541 containerd[1969]: time="2025-12-16T12:52:00.925137903Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6fb4c65f97-rg8xf,Uid:631dcf21-0085-4dd5-b7a8-35fb5c10f8ab,Namespace:calico-apiserver,Attempt:0,}" Dec 16 12:52:00.928237 containerd[1969]: time="2025-12-16T12:52:00.925352952Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-7b5d97d74b-j9kr5,Uid:07dabb60-b494-485b-be91-7522183aff41,Namespace:calico-apiserver,Attempt:0,}" Dec 16 12:52:01.105751 containerd[1969]: time="2025-12-16T12:52:01.105193822Z" level=error msg="Failed to destroy network for sandbox \"7cabb7a6c22e491b38f6e3f02e68235eb692044a5d8b9cfc385192c658140a44\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:52:01.109780 systemd[1]: run-netns-cni\x2d733faca8\x2d64e9\x2d925c\x2de696\x2d79e6328804d7.mount: Deactivated successfully. Dec 16 12:52:01.121692 containerd[1969]: time="2025-12-16T12:52:01.121496889Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-rscvc,Uid:677a62d7-b648-43b1-ac2f-6337ecae67c0,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7cabb7a6c22e491b38f6e3f02e68235eb692044a5d8b9cfc385192c658140a44\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:52:01.132825 kubelet[3300]: E1216 12:52:01.132396 3300 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7cabb7a6c22e491b38f6e3f02e68235eb692044a5d8b9cfc385192c658140a44\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:52:01.133733 kubelet[3300]: E1216 12:52:01.133689 3300 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7cabb7a6c22e491b38f6e3f02e68235eb692044a5d8b9cfc385192c658140a44\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-rscvc" Dec 16 12:52:01.133853 kubelet[3300]: E1216 12:52:01.133751 3300 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7cabb7a6c22e491b38f6e3f02e68235eb692044a5d8b9cfc385192c658140a44\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-rscvc" Dec 16 12:52:01.139983 kubelet[3300]: E1216 12:52:01.139915 3300 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-rscvc_kube-system(677a62d7-b648-43b1-ac2f-6337ecae67c0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-rscvc_kube-system(677a62d7-b648-43b1-ac2f-6337ecae67c0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7cabb7a6c22e491b38f6e3f02e68235eb692044a5d8b9cfc385192c658140a44\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-rscvc" podUID="677a62d7-b648-43b1-ac2f-6337ecae67c0" Dec 16 12:52:01.159184 containerd[1969]: time="2025-12-16T12:52:01.158977399Z" level=error msg="Failed to destroy network for sandbox \"b354310fe213b45d290c9b4282c8633c3c6b5fd5a3f21f8ee2601c70125bec90\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:52:01.166843 containerd[1969]: time="2025-12-16T12:52:01.166577996Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-7b5d97d74b-j9kr5,Uid:07dabb60-b494-485b-be91-7522183aff41,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b354310fe213b45d290c9b4282c8633c3c6b5fd5a3f21f8ee2601c70125bec90\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:52:01.167059 kubelet[3300]: E1216 12:52:01.166937 3300 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b354310fe213b45d290c9b4282c8633c3c6b5fd5a3f21f8ee2601c70125bec90\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:52:01.167059 kubelet[3300]: E1216 12:52:01.167023 3300 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b354310fe213b45d290c9b4282c8633c3c6b5fd5a3f21f8ee2601c70125bec90\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7b5d97d74b-j9kr5" Dec 16 12:52:01.167059 kubelet[3300]: E1216 12:52:01.167049 3300 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b354310fe213b45d290c9b4282c8633c3c6b5fd5a3f21f8ee2601c70125bec90\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7b5d97d74b-j9kr5" Dec 16 12:52:01.167220 kubelet[3300]: E1216 12:52:01.167132 3300 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7b5d97d74b-j9kr5_calico-apiserver(07dabb60-b494-485b-be91-7522183aff41)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7b5d97d74b-j9kr5_calico-apiserver(07dabb60-b494-485b-be91-7522183aff41)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b354310fe213b45d290c9b4282c8633c3c6b5fd5a3f21f8ee2601c70125bec90\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7b5d97d74b-j9kr5" podUID="07dabb60-b494-485b-be91-7522183aff41" Dec 16 12:52:01.172619 containerd[1969]: time="2025-12-16T12:52:01.172541346Z" level=error msg="Failed to destroy network for sandbox \"5456df7a7e0b244d85515fd5c35b058f06b9b4eb65656b203ac6cedf856f75f6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:52:01.173952 containerd[1969]: time="2025-12-16T12:52:01.173867081Z" level=error msg="Failed to destroy network for sandbox \"713b98d8c68f418051d8647986e76380c7a3c93c7a8e124d6da72a4497a9a523\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:52:01.177458 containerd[1969]: time="2025-12-16T12:52:01.177009394Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-8gcbc,Uid:39a10cdc-f6c5-430e-ac11-7f9183d0c949,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5456df7a7e0b244d85515fd5c35b058f06b9b4eb65656b203ac6cedf856f75f6\": plugin type=\"calico\" failed (add): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:52:01.177570 kubelet[3300]: E1216 12:52:01.177223 3300 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5456df7a7e0b244d85515fd5c35b058f06b9b4eb65656b203ac6cedf856f75f6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:52:01.177570 kubelet[3300]: E1216 12:52:01.177273 3300 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5456df7a7e0b244d85515fd5c35b058f06b9b4eb65656b203ac6cedf856f75f6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-8gcbc" Dec 16 12:52:01.177570 kubelet[3300]: E1216 12:52:01.177418 3300 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5456df7a7e0b244d85515fd5c35b058f06b9b4eb65656b203ac6cedf856f75f6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-8gcbc" Dec 16 12:52:01.177679 kubelet[3300]: E1216 12:52:01.177503 3300 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-8gcbc_calico-system(39a10cdc-f6c5-430e-ac11-7f9183d0c949)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-8gcbc_calico-system(39a10cdc-f6c5-430e-ac11-7f9183d0c949)\\\": rpc error: code = Unknown desc = failed 
to setup network for sandbox \\\"5456df7a7e0b244d85515fd5c35b058f06b9b4eb65656b203ac6cedf856f75f6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-8gcbc" podUID="39a10cdc-f6c5-430e-ac11-7f9183d0c949" Dec 16 12:52:01.181276 containerd[1969]: time="2025-12-16T12:52:01.181216327Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6fb4c65f97-rg8xf,Uid:631dcf21-0085-4dd5-b7a8-35fb5c10f8ab,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"713b98d8c68f418051d8647986e76380c7a3c93c7a8e124d6da72a4497a9a523\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:52:01.181969 kubelet[3300]: E1216 12:52:01.181471 3300 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"713b98d8c68f418051d8647986e76380c7a3c93c7a8e124d6da72a4497a9a523\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:52:01.181969 kubelet[3300]: E1216 12:52:01.181515 3300 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"713b98d8c68f418051d8647986e76380c7a3c93c7a8e124d6da72a4497a9a523\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6fb4c65f97-rg8xf" Dec 16 12:52:01.181969 kubelet[3300]: E1216 12:52:01.181534 3300 
kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"713b98d8c68f418051d8647986e76380c7a3c93c7a8e124d6da72a4497a9a523\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6fb4c65f97-rg8xf" Dec 16 12:52:01.182100 kubelet[3300]: E1216 12:52:01.181582 3300 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6fb4c65f97-rg8xf_calico-apiserver(631dcf21-0085-4dd5-b7a8-35fb5c10f8ab)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6fb4c65f97-rg8xf_calico-apiserver(631dcf21-0085-4dd5-b7a8-35fb5c10f8ab)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"713b98d8c68f418051d8647986e76380c7a3c93c7a8e124d6da72a4497a9a523\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6fb4c65f97-rg8xf" podUID="631dcf21-0085-4dd5-b7a8-35fb5c10f8ab" Dec 16 12:52:01.199056 kubelet[3300]: I1216 12:52:01.199025 3300 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 16 12:52:01.924820 containerd[1969]: time="2025-12-16T12:52:01.924136903Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-649b454465-8f5vt,Uid:4f37d126-32e9-4a2f-b0c1-8df2ac964398,Namespace:calico-system,Attempt:0,}" Dec 16 12:52:01.925199 containerd[1969]: time="2025-12-16T12:52:01.925166709Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-j62tw,Uid:6847d7c3-2098-48b8-8794-1b664366eb8c,Namespace:kube-system,Attempt:0,}" Dec 16 12:52:01.981732 systemd[1]: 
run-netns-cni\x2da07d6b16\x2dc7d0\x2dc11c\x2d516b\x2d32fd71f17eb0.mount: Deactivated successfully. Dec 16 12:52:01.985519 systemd[1]: run-netns-cni\x2d5cb56f32\x2da56d\x2d7f4c\x2ddf91\x2de420a6c9b317.mount: Deactivated successfully. Dec 16 12:52:01.985615 systemd[1]: run-netns-cni\x2d42eead99\x2d660c\x2dc7a2\x2d0e5e\x2d27e4c2b34b2b.mount: Deactivated successfully. Dec 16 12:52:02.494679 containerd[1969]: time="2025-12-16T12:52:02.494453509Z" level=error msg="Failed to destroy network for sandbox \"61cec4cc2ec3291302e548d0aa9fd15a73c64dec19a9763548b5bdf7cb69633f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:52:02.511173 systemd[1]: run-netns-cni\x2d4b3caac8\x2d8a06\x2d3651\x2d576d\x2d43d4952a72e2.mount: Deactivated successfully. Dec 16 12:52:02.533290 containerd[1969]: time="2025-12-16T12:52:02.516583027Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-j62tw,Uid:6847d7c3-2098-48b8-8794-1b664366eb8c,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"61cec4cc2ec3291302e548d0aa9fd15a73c64dec19a9763548b5bdf7cb69633f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:52:02.533490 kubelet[3300]: E1216 12:52:02.533428 3300 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"61cec4cc2ec3291302e548d0aa9fd15a73c64dec19a9763548b5bdf7cb69633f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:52:02.544325 kubelet[3300]: E1216 12:52:02.533512 3300 
kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"61cec4cc2ec3291302e548d0aa9fd15a73c64dec19a9763548b5bdf7cb69633f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-j62tw" Dec 16 12:52:02.544325 kubelet[3300]: E1216 12:52:02.533549 3300 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"61cec4cc2ec3291302e548d0aa9fd15a73c64dec19a9763548b5bdf7cb69633f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-j62tw" Dec 16 12:52:02.545192 kubelet[3300]: E1216 12:52:02.544983 3300 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-j62tw_kube-system(6847d7c3-2098-48b8-8794-1b664366eb8c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-j62tw_kube-system(6847d7c3-2098-48b8-8794-1b664366eb8c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"61cec4cc2ec3291302e548d0aa9fd15a73c64dec19a9763548b5bdf7cb69633f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-j62tw" podUID="6847d7c3-2098-48b8-8794-1b664366eb8c" Dec 16 12:52:02.548581 containerd[1969]: time="2025-12-16T12:52:02.548522153Z" level=error msg="Failed to destroy network for sandbox \"63688dcc6fa134b3c7da70704153a765effa735ab7598eacdb1104b22faa9c81\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:52:02.564034 systemd[1]: run-netns-cni\x2df79a0c1d\x2dbab0\x2d9092\x2dad95\x2d4e45c12fcbe2.mount: Deactivated successfully. Dec 16 12:52:02.565232 containerd[1969]: time="2025-12-16T12:52:02.565032859Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-649b454465-8f5vt,Uid:4f37d126-32e9-4a2f-b0c1-8df2ac964398,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"63688dcc6fa134b3c7da70704153a765effa735ab7598eacdb1104b22faa9c81\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:52:02.566880 kubelet[3300]: E1216 12:52:02.565420 3300 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"63688dcc6fa134b3c7da70704153a765effa735ab7598eacdb1104b22faa9c81\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:52:02.566880 kubelet[3300]: E1216 12:52:02.565491 3300 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"63688dcc6fa134b3c7da70704153a765effa735ab7598eacdb1104b22faa9c81\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-649b454465-8f5vt" Dec 16 12:52:02.566880 kubelet[3300]: E1216 12:52:02.565520 3300 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"63688dcc6fa134b3c7da70704153a765effa735ab7598eacdb1104b22faa9c81\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-649b454465-8f5vt" Dec 16 12:52:02.567155 kubelet[3300]: E1216 12:52:02.565578 3300 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-649b454465-8f5vt_calico-system(4f37d126-32e9-4a2f-b0c1-8df2ac964398)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-649b454465-8f5vt_calico-system(4f37d126-32e9-4a2f-b0c1-8df2ac964398)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"63688dcc6fa134b3c7da70704153a765effa735ab7598eacdb1104b22faa9c81\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-649b454465-8f5vt" podUID="4f37d126-32e9-4a2f-b0c1-8df2ac964398" Dec 16 12:52:02.922236 containerd[1969]: time="2025-12-16T12:52:02.922110784Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5b97648649-sj8kb,Uid:97ebcca4-f43d-4a70-b294-e93b18442671,Namespace:calico-system,Attempt:0,}" Dec 16 12:52:02.928708 containerd[1969]: time="2025-12-16T12:52:02.928575954Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-l2s79,Uid:a2c9a6a7-124f-4bab-89b2-0cbb2fd935f0,Namespace:calico-system,Attempt:0,}" Dec 16 12:52:03.027447 containerd[1969]: time="2025-12-16T12:52:03.018438719Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6fb4c65f97-cxgtk,Uid:5fa2e950-afe9-46cc-8df6-b014aa90c32b,Namespace:calico-apiserver,Attempt:0,}" Dec 16 12:52:03.204141 containerd[1969]: time="2025-12-16T12:52:03.203985222Z" level=error msg="Failed to destroy network for sandbox 
\"4ab222ea151b18ccc4a9088341816d0603ab2201f9584f4b2ecd6657d51d5c80\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:52:03.210249 systemd[1]: run-netns-cni\x2d61c3537d\x2d72f9\x2dd800\x2dc26a\x2daa7c18a89585.mount: Deactivated successfully. Dec 16 12:52:03.215856 containerd[1969]: time="2025-12-16T12:52:03.215794531Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6fb4c65f97-cxgtk,Uid:5fa2e950-afe9-46cc-8df6-b014aa90c32b,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4ab222ea151b18ccc4a9088341816d0603ab2201f9584f4b2ecd6657d51d5c80\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:52:03.216445 kubelet[3300]: E1216 12:52:03.216340 3300 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4ab222ea151b18ccc4a9088341816d0603ab2201f9584f4b2ecd6657d51d5c80\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:52:03.216445 kubelet[3300]: E1216 12:52:03.216420 3300 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4ab222ea151b18ccc4a9088341816d0603ab2201f9584f4b2ecd6657d51d5c80\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6fb4c65f97-cxgtk" Dec 16 12:52:03.216764 kubelet[3300]: E1216 12:52:03.216447 3300 
kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4ab222ea151b18ccc4a9088341816d0603ab2201f9584f4b2ecd6657d51d5c80\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6fb4c65f97-cxgtk" Dec 16 12:52:03.216764 kubelet[3300]: E1216 12:52:03.216512 3300 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6fb4c65f97-cxgtk_calico-apiserver(5fa2e950-afe9-46cc-8df6-b014aa90c32b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6fb4c65f97-cxgtk_calico-apiserver(5fa2e950-afe9-46cc-8df6-b014aa90c32b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4ab222ea151b18ccc4a9088341816d0603ab2201f9584f4b2ecd6657d51d5c80\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6fb4c65f97-cxgtk" podUID="5fa2e950-afe9-46cc-8df6-b014aa90c32b" Dec 16 12:52:03.245864 containerd[1969]: time="2025-12-16T12:52:03.241548602Z" level=error msg="Failed to destroy network for sandbox \"10a6f9d368a1f3d85e1754a0064b68c45b88c201d758383b455bedcf389eb8a8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:52:03.246976 containerd[1969]: time="2025-12-16T12:52:03.246270824Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5b97648649-sj8kb,Uid:97ebcca4-f43d-4a70-b294-e93b18442671,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup 
network for sandbox \"10a6f9d368a1f3d85e1754a0064b68c45b88c201d758383b455bedcf389eb8a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:52:03.246746 systemd[1]: run-netns-cni\x2de3b3f431\x2dadf2\x2d606d\x2d9d7c\x2d7570b729abc2.mount: Deactivated successfully. Dec 16 12:52:03.247564 kubelet[3300]: E1216 12:52:03.246606 3300 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"10a6f9d368a1f3d85e1754a0064b68c45b88c201d758383b455bedcf389eb8a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:52:03.247564 kubelet[3300]: E1216 12:52:03.246685 3300 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"10a6f9d368a1f3d85e1754a0064b68c45b88c201d758383b455bedcf389eb8a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5b97648649-sj8kb" Dec 16 12:52:03.247564 kubelet[3300]: E1216 12:52:03.246731 3300 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"10a6f9d368a1f3d85e1754a0064b68c45b88c201d758383b455bedcf389eb8a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5b97648649-sj8kb" Dec 16 12:52:03.250714 kubelet[3300]: E1216 12:52:03.246810 3300 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"CreatePodSandbox\" for \"calico-kube-controllers-5b97648649-sj8kb_calico-system(97ebcca4-f43d-4a70-b294-e93b18442671)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5b97648649-sj8kb_calico-system(97ebcca4-f43d-4a70-b294-e93b18442671)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"10a6f9d368a1f3d85e1754a0064b68c45b88c201d758383b455bedcf389eb8a8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5b97648649-sj8kb" podUID="97ebcca4-f43d-4a70-b294-e93b18442671" Dec 16 12:52:03.257400 containerd[1969]: time="2025-12-16T12:52:03.257352090Z" level=error msg="Failed to destroy network for sandbox \"62adc7ee30d3586420dceb94fc4bc4ffb5ae14b25f6cafc750be9f75391b1f01\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:52:03.260571 systemd[1]: run-netns-cni\x2d4cbbc1aa\x2d00db\x2dd8f8\x2d378f\x2dbd091fb572da.mount: Deactivated successfully. 
Dec 16 12:52:03.264358 containerd[1969]: time="2025-12-16T12:52:03.264064409Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-l2s79,Uid:a2c9a6a7-124f-4bab-89b2-0cbb2fd935f0,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"62adc7ee30d3586420dceb94fc4bc4ffb5ae14b25f6cafc750be9f75391b1f01\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:52:03.264815 kubelet[3300]: E1216 12:52:03.264632 3300 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"62adc7ee30d3586420dceb94fc4bc4ffb5ae14b25f6cafc750be9f75391b1f01\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:52:03.264815 kubelet[3300]: E1216 12:52:03.264791 3300 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"62adc7ee30d3586420dceb94fc4bc4ffb5ae14b25f6cafc750be9f75391b1f01\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-l2s79" Dec 16 12:52:03.264990 kubelet[3300]: E1216 12:52:03.264824 3300 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"62adc7ee30d3586420dceb94fc4bc4ffb5ae14b25f6cafc750be9f75391b1f01\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-l2s79" 
Dec 16 12:52:03.264990 kubelet[3300]: E1216 12:52:03.264887 3300 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-l2s79_calico-system(a2c9a6a7-124f-4bab-89b2-0cbb2fd935f0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-l2s79_calico-system(a2c9a6a7-124f-4bab-89b2-0cbb2fd935f0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"62adc7ee30d3586420dceb94fc4bc4ffb5ae14b25f6cafc750be9f75391b1f01\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-l2s79" podUID="a2c9a6a7-124f-4bab-89b2-0cbb2fd935f0" Dec 16 12:52:03.684158 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Dec 16 12:52:03.684287 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Dec 16 12:52:04.451059 kubelet[3300]: I1216 12:52:04.451018 3300 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/4f37d126-32e9-4a2f-b0c1-8df2ac964398-whisker-backend-key-pair\") pod \"4f37d126-32e9-4a2f-b0c1-8df2ac964398\" (UID: \"4f37d126-32e9-4a2f-b0c1-8df2ac964398\") " Dec 16 12:52:04.451504 kubelet[3300]: I1216 12:52:04.451097 3300 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngxfr\" (UniqueName: \"kubernetes.io/projected/4f37d126-32e9-4a2f-b0c1-8df2ac964398-kube-api-access-ngxfr\") pod \"4f37d126-32e9-4a2f-b0c1-8df2ac964398\" (UID: \"4f37d126-32e9-4a2f-b0c1-8df2ac964398\") " Dec 16 12:52:04.451504 kubelet[3300]: I1216 12:52:04.451135 3300 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4f37d126-32e9-4a2f-b0c1-8df2ac964398-whisker-ca-bundle\") pod \"4f37d126-32e9-4a2f-b0c1-8df2ac964398\" (UID: \"4f37d126-32e9-4a2f-b0c1-8df2ac964398\") " Dec 16 12:52:04.479173 systemd[1]: var-lib-kubelet-pods-4f37d126\x2d32e9\x2d4a2f\x2db0c1\x2d8df2ac964398-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dngxfr.mount: Deactivated successfully. Dec 16 12:52:04.501944 kubelet[3300]: I1216 12:52:04.501120 3300 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f37d126-32e9-4a2f-b0c1-8df2ac964398-kube-api-access-ngxfr" (OuterVolumeSpecName: "kube-api-access-ngxfr") pod "4f37d126-32e9-4a2f-b0c1-8df2ac964398" (UID: "4f37d126-32e9-4a2f-b0c1-8df2ac964398"). InnerVolumeSpecName "kube-api-access-ngxfr". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 16 12:52:04.501944 kubelet[3300]: I1216 12:52:04.501208 3300 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4f37d126-32e9-4a2f-b0c1-8df2ac964398-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "4f37d126-32e9-4a2f-b0c1-8df2ac964398" (UID: "4f37d126-32e9-4a2f-b0c1-8df2ac964398"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 16 12:52:04.507703 kubelet[3300]: I1216 12:52:04.507613 3300 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f37d126-32e9-4a2f-b0c1-8df2ac964398-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "4f37d126-32e9-4a2f-b0c1-8df2ac964398" (UID: "4f37d126-32e9-4a2f-b0c1-8df2ac964398"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 16 12:52:04.510638 systemd[1]: var-lib-kubelet-pods-4f37d126\x2d32e9\x2d4a2f\x2db0c1\x2d8df2ac964398-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Dec 16 12:52:04.554931 kubelet[3300]: I1216 12:52:04.553740 3300 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/4f37d126-32e9-4a2f-b0c1-8df2ac964398-whisker-backend-key-pair\") on node \"ip-172-31-17-11\" DevicePath \"\"" Dec 16 12:52:04.554931 kubelet[3300]: I1216 12:52:04.553802 3300 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ngxfr\" (UniqueName: \"kubernetes.io/projected/4f37d126-32e9-4a2f-b0c1-8df2ac964398-kube-api-access-ngxfr\") on node \"ip-172-31-17-11\" DevicePath \"\"" Dec 16 12:52:04.554931 kubelet[3300]: I1216 12:52:04.553819 3300 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4f37d126-32e9-4a2f-b0c1-8df2ac964398-whisker-ca-bundle\") on node \"ip-172-31-17-11\" DevicePath \"\"" Dec 16 12:52:04.948651 systemd[1]: Removed slice kubepods-besteffort-pod4f37d126_32e9_4a2f_b0c1_8df2ac964398.slice - libcontainer container kubepods-besteffort-pod4f37d126_32e9_4a2f_b0c1_8df2ac964398.slice. Dec 16 12:52:05.385449 systemd[1]: Created slice kubepods-besteffort-pod0a0e312f_8a41_4806_bbcc_0e2c96e89982.slice - libcontainer container kubepods-besteffort-pod0a0e312f_8a41_4806_bbcc_0e2c96e89982.slice. 
Dec 16 12:52:05.473722 kubelet[3300]: I1216 12:52:05.473638 3300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0a0e312f-8a41-4806-bbcc-0e2c96e89982-whisker-ca-bundle\") pod \"whisker-595b9f8d69-rdqmx\" (UID: \"0a0e312f-8a41-4806-bbcc-0e2c96e89982\") " pod="calico-system/whisker-595b9f8d69-rdqmx" Dec 16 12:52:05.479332 kubelet[3300]: I1216 12:52:05.479168 3300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vw2m\" (UniqueName: \"kubernetes.io/projected/0a0e312f-8a41-4806-bbcc-0e2c96e89982-kube-api-access-4vw2m\") pod \"whisker-595b9f8d69-rdqmx\" (UID: \"0a0e312f-8a41-4806-bbcc-0e2c96e89982\") " pod="calico-system/whisker-595b9f8d69-rdqmx" Dec 16 12:52:05.479332 kubelet[3300]: I1216 12:52:05.479250 3300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/0a0e312f-8a41-4806-bbcc-0e2c96e89982-whisker-backend-key-pair\") pod \"whisker-595b9f8d69-rdqmx\" (UID: \"0a0e312f-8a41-4806-bbcc-0e2c96e89982\") " pod="calico-system/whisker-595b9f8d69-rdqmx" Dec 16 12:52:05.690893 containerd[1969]: time="2025-12-16T12:52:05.690653630Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-595b9f8d69-rdqmx,Uid:0a0e312f-8a41-4806-bbcc-0e2c96e89982,Namespace:calico-system,Attempt:0,}" Dec 16 12:52:06.253899 (udev-worker)[4917]: Network interface NamePolicy= disabled on kernel command line. 
Dec 16 12:52:06.263225 systemd-networkd[1548]: calif1606d9e0cd: Link UP Dec 16 12:52:06.264813 systemd-networkd[1548]: calif1606d9e0cd: Gained carrier Dec 16 12:52:06.301033 containerd[1969]: 2025-12-16 12:52:05.720 [INFO][4973] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 16 12:52:06.301033 containerd[1969]: 2025-12-16 12:52:05.772 [INFO][4973] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--17--11-k8s-whisker--595b9f8d69--rdqmx-eth0 whisker-595b9f8d69- calico-system 0a0e312f-8a41-4806-bbcc-0e2c96e89982 943 0 2025-12-16 12:52:05 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:595b9f8d69 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ip-172-31-17-11 whisker-595b9f8d69-rdqmx eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calif1606d9e0cd [] [] }} ContainerID="d0e09b9ff7232f41c0d82d33fb3dd3c4c1230b1d22a16efc6d19a580079f8eb8" Namespace="calico-system" Pod="whisker-595b9f8d69-rdqmx" WorkloadEndpoint="ip--172--31--17--11-k8s-whisker--595b9f8d69--rdqmx-" Dec 16 12:52:06.301033 containerd[1969]: 2025-12-16 12:52:05.772 [INFO][4973] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d0e09b9ff7232f41c0d82d33fb3dd3c4c1230b1d22a16efc6d19a580079f8eb8" Namespace="calico-system" Pod="whisker-595b9f8d69-rdqmx" WorkloadEndpoint="ip--172--31--17--11-k8s-whisker--595b9f8d69--rdqmx-eth0" Dec 16 12:52:06.301033 containerd[1969]: 2025-12-16 12:52:06.141 [INFO][4984] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d0e09b9ff7232f41c0d82d33fb3dd3c4c1230b1d22a16efc6d19a580079f8eb8" HandleID="k8s-pod-network.d0e09b9ff7232f41c0d82d33fb3dd3c4c1230b1d22a16efc6d19a580079f8eb8" Workload="ip--172--31--17--11-k8s-whisker--595b9f8d69--rdqmx-eth0" Dec 16 12:52:06.302173 containerd[1969]: 2025-12-16 12:52:06.145 
[INFO][4984] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="d0e09b9ff7232f41c0d82d33fb3dd3c4c1230b1d22a16efc6d19a580079f8eb8" HandleID="k8s-pod-network.d0e09b9ff7232f41c0d82d33fb3dd3c4c1230b1d22a16efc6d19a580079f8eb8" Workload="ip--172--31--17--11-k8s-whisker--595b9f8d69--rdqmx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004359c0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-17-11", "pod":"whisker-595b9f8d69-rdqmx", "timestamp":"2025-12-16 12:52:06.141227811 +0000 UTC"}, Hostname:"ip-172-31-17-11", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 12:52:06.302173 containerd[1969]: 2025-12-16 12:52:06.145 [INFO][4984] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 12:52:06.302173 containerd[1969]: 2025-12-16 12:52:06.146 [INFO][4984] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 12:52:06.302173 containerd[1969]: 2025-12-16 12:52:06.147 [INFO][4984] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-17-11' Dec 16 12:52:06.302173 containerd[1969]: 2025-12-16 12:52:06.167 [INFO][4984] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d0e09b9ff7232f41c0d82d33fb3dd3c4c1230b1d22a16efc6d19a580079f8eb8" host="ip-172-31-17-11" Dec 16 12:52:06.302173 containerd[1969]: 2025-12-16 12:52:06.185 [INFO][4984] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-17-11" Dec 16 12:52:06.302173 containerd[1969]: 2025-12-16 12:52:06.194 [INFO][4984] ipam/ipam.go 511: Trying affinity for 192.168.15.192/26 host="ip-172-31-17-11" Dec 16 12:52:06.302173 containerd[1969]: 2025-12-16 12:52:06.197 [INFO][4984] ipam/ipam.go 158: Attempting to load block cidr=192.168.15.192/26 host="ip-172-31-17-11" Dec 16 12:52:06.302173 containerd[1969]: 2025-12-16 12:52:06.200 [INFO][4984] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.15.192/26 host="ip-172-31-17-11" Dec 16 12:52:06.302553 containerd[1969]: 2025-12-16 12:52:06.200 [INFO][4984] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.15.192/26 handle="k8s-pod-network.d0e09b9ff7232f41c0d82d33fb3dd3c4c1230b1d22a16efc6d19a580079f8eb8" host="ip-172-31-17-11" Dec 16 12:52:06.302553 containerd[1969]: 2025-12-16 12:52:06.203 [INFO][4984] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.d0e09b9ff7232f41c0d82d33fb3dd3c4c1230b1d22a16efc6d19a580079f8eb8 Dec 16 12:52:06.302553 containerd[1969]: 2025-12-16 12:52:06.208 [INFO][4984] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.15.192/26 handle="k8s-pod-network.d0e09b9ff7232f41c0d82d33fb3dd3c4c1230b1d22a16efc6d19a580079f8eb8" host="ip-172-31-17-11" Dec 16 12:52:06.302553 containerd[1969]: 2025-12-16 12:52:06.226 [INFO][4984] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.15.193/26] block=192.168.15.192/26 
handle="k8s-pod-network.d0e09b9ff7232f41c0d82d33fb3dd3c4c1230b1d22a16efc6d19a580079f8eb8" host="ip-172-31-17-11" Dec 16 12:52:06.302553 containerd[1969]: 2025-12-16 12:52:06.226 [INFO][4984] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.15.193/26] handle="k8s-pod-network.d0e09b9ff7232f41c0d82d33fb3dd3c4c1230b1d22a16efc6d19a580079f8eb8" host="ip-172-31-17-11" Dec 16 12:52:06.302553 containerd[1969]: 2025-12-16 12:52:06.226 [INFO][4984] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 12:52:06.302553 containerd[1969]: 2025-12-16 12:52:06.226 [INFO][4984] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.15.193/26] IPv6=[] ContainerID="d0e09b9ff7232f41c0d82d33fb3dd3c4c1230b1d22a16efc6d19a580079f8eb8" HandleID="k8s-pod-network.d0e09b9ff7232f41c0d82d33fb3dd3c4c1230b1d22a16efc6d19a580079f8eb8" Workload="ip--172--31--17--11-k8s-whisker--595b9f8d69--rdqmx-eth0" Dec 16 12:52:06.302809 containerd[1969]: 2025-12-16 12:52:06.230 [INFO][4973] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d0e09b9ff7232f41c0d82d33fb3dd3c4c1230b1d22a16efc6d19a580079f8eb8" Namespace="calico-system" Pod="whisker-595b9f8d69-rdqmx" WorkloadEndpoint="ip--172--31--17--11-k8s-whisker--595b9f8d69--rdqmx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--11-k8s-whisker--595b9f8d69--rdqmx-eth0", GenerateName:"whisker-595b9f8d69-", Namespace:"calico-system", SelfLink:"", UID:"0a0e312f-8a41-4806-bbcc-0e2c96e89982", ResourceVersion:"943", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 52, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"595b9f8d69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-11", ContainerID:"", Pod:"whisker-595b9f8d69-rdqmx", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.15.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calif1606d9e0cd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:52:06.302809 containerd[1969]: 2025-12-16 12:52:06.230 [INFO][4973] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.15.193/32] ContainerID="d0e09b9ff7232f41c0d82d33fb3dd3c4c1230b1d22a16efc6d19a580079f8eb8" Namespace="calico-system" Pod="whisker-595b9f8d69-rdqmx" WorkloadEndpoint="ip--172--31--17--11-k8s-whisker--595b9f8d69--rdqmx-eth0" Dec 16 12:52:06.303596 containerd[1969]: 2025-12-16 12:52:06.230 [INFO][4973] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif1606d9e0cd ContainerID="d0e09b9ff7232f41c0d82d33fb3dd3c4c1230b1d22a16efc6d19a580079f8eb8" Namespace="calico-system" Pod="whisker-595b9f8d69-rdqmx" WorkloadEndpoint="ip--172--31--17--11-k8s-whisker--595b9f8d69--rdqmx-eth0" Dec 16 12:52:06.303596 containerd[1969]: 2025-12-16 12:52:06.265 [INFO][4973] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d0e09b9ff7232f41c0d82d33fb3dd3c4c1230b1d22a16efc6d19a580079f8eb8" Namespace="calico-system" Pod="whisker-595b9f8d69-rdqmx" WorkloadEndpoint="ip--172--31--17--11-k8s-whisker--595b9f8d69--rdqmx-eth0" Dec 16 12:52:06.306244 containerd[1969]: 2025-12-16 12:52:06.266 [INFO][4973] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d0e09b9ff7232f41c0d82d33fb3dd3c4c1230b1d22a16efc6d19a580079f8eb8" 
Namespace="calico-system" Pod="whisker-595b9f8d69-rdqmx" WorkloadEndpoint="ip--172--31--17--11-k8s-whisker--595b9f8d69--rdqmx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--11-k8s-whisker--595b9f8d69--rdqmx-eth0", GenerateName:"whisker-595b9f8d69-", Namespace:"calico-system", SelfLink:"", UID:"0a0e312f-8a41-4806-bbcc-0e2c96e89982", ResourceVersion:"943", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 52, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"595b9f8d69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-11", ContainerID:"d0e09b9ff7232f41c0d82d33fb3dd3c4c1230b1d22a16efc6d19a580079f8eb8", Pod:"whisker-595b9f8d69-rdqmx", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.15.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calif1606d9e0cd", MAC:"4a:f2:f5:12:2d:7f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:52:06.306352 containerd[1969]: 2025-12-16 12:52:06.285 [INFO][4973] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d0e09b9ff7232f41c0d82d33fb3dd3c4c1230b1d22a16efc6d19a580079f8eb8" Namespace="calico-system" Pod="whisker-595b9f8d69-rdqmx" WorkloadEndpoint="ip--172--31--17--11-k8s-whisker--595b9f8d69--rdqmx-eth0" Dec 16 12:52:06.620094 
containerd[1969]: time="2025-12-16T12:52:06.619169060Z" level=info msg="connecting to shim d0e09b9ff7232f41c0d82d33fb3dd3c4c1230b1d22a16efc6d19a580079f8eb8" address="unix:///run/containerd/s/126e7538f8b4613763d98abdbf036f50ea64b95b0d28f73ac3de8e48232f0b30" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:52:06.674602 systemd[1]: Started cri-containerd-d0e09b9ff7232f41c0d82d33fb3dd3c4c1230b1d22a16efc6d19a580079f8eb8.scope - libcontainer container d0e09b9ff7232f41c0d82d33fb3dd3c4c1230b1d22a16efc6d19a580079f8eb8. Dec 16 12:52:06.744036 kernel: kauditd_printk_skb: 11 callbacks suppressed Dec 16 12:52:06.744186 kernel: audit: type=1334 audit(1765889526.740:600): prog-id=185 op=LOAD Dec 16 12:52:06.740000 audit: BPF prog-id=185 op=LOAD Dec 16 12:52:06.740000 audit: BPF prog-id=186 op=LOAD Dec 16 12:52:06.753842 kernel: audit: type=1334 audit(1765889526.740:601): prog-id=186 op=LOAD Dec 16 12:52:06.753989 kernel: audit: type=1300 audit(1765889526.740:601): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=5104 pid=5115 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:52:06.740000 audit[5115]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=5104 pid=5115 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:52:06.740000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6430653039623966663732333266343163306438326433336662336464 Dec 16 12:52:06.762501 kernel: audit: type=1327 audit(1765889526.740:601): 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6430653039623966663732333266343163306438326433336662336464 Dec 16 12:52:06.770292 kernel: audit: type=1334 audit(1765889526.741:602): prog-id=186 op=UNLOAD Dec 16 12:52:06.776576 kernel: audit: type=1300 audit(1765889526.741:602): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5104 pid=5115 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:52:06.776649 kernel: audit: type=1327 audit(1765889526.741:602): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6430653039623966663732333266343163306438326433336662336464 Dec 16 12:52:06.741000 audit: BPF prog-id=186 op=UNLOAD Dec 16 12:52:06.741000 audit[5115]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5104 pid=5115 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:52:06.741000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6430653039623966663732333266343163306438326433336662336464 Dec 16 12:52:06.741000 audit: BPF prog-id=187 op=LOAD Dec 16 12:52:06.784335 kernel: audit: type=1334 audit(1765889526.741:603): prog-id=187 op=LOAD Dec 16 12:52:06.784478 kernel: audit: type=1300 audit(1765889526.741:603): arch=c000003e syscall=321 success=yes exit=21 a0=5 
a1=c000130488 a2=98 a3=0 items=0 ppid=5104 pid=5115 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:52:06.741000 audit[5115]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=5104 pid=5115 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:52:06.790565 kernel: audit: type=1327 audit(1765889526.741:603): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6430653039623966663732333266343163306438326433336662336464 Dec 16 12:52:06.741000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6430653039623966663732333266343163306438326433336662336464 Dec 16 12:52:06.741000 audit: BPF prog-id=188 op=LOAD Dec 16 12:52:06.741000 audit[5115]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=5104 pid=5115 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:52:06.741000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6430653039623966663732333266343163306438326433336662336464 Dec 16 12:52:06.741000 audit: BPF prog-id=188 op=UNLOAD Dec 16 12:52:06.741000 audit[5115]: SYSCALL arch=c000003e 
syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5104 pid=5115 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:52:06.741000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6430653039623966663732333266343163306438326433336662336464 Dec 16 12:52:06.741000 audit: BPF prog-id=187 op=UNLOAD Dec 16 12:52:06.741000 audit[5115]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5104 pid=5115 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:52:06.741000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6430653039623966663732333266343163306438326433336662336464 Dec 16 12:52:06.741000 audit: BPF prog-id=189 op=LOAD Dec 16 12:52:06.741000 audit[5115]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=5104 pid=5115 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:52:06.741000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6430653039623966663732333266343163306438326433336662336464 Dec 16 12:52:06.768000 audit: BPF prog-id=190 op=LOAD Dec 16 12:52:06.768000 
audit[5153]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe8050d190 a2=98 a3=1fffffffffffffff items=0 ppid=5011 pid=5153 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:52:06.768000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 12:52:06.768000 audit: BPF prog-id=190 op=UNLOAD Dec 16 12:52:06.768000 audit[5153]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffe8050d160 a3=0 items=0 ppid=5011 pid=5153 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:52:06.768000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 12:52:06.783000 audit: BPF prog-id=191 op=LOAD Dec 16 12:52:06.783000 audit[5153]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe8050d070 a2=94 a3=3 items=0 ppid=5011 pid=5153 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:52:06.783000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 
Dec 16 12:52:06.783000 audit: BPF prog-id=191 op=UNLOAD Dec 16 12:52:06.783000 audit[5153]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffe8050d070 a2=94 a3=3 items=0 ppid=5011 pid=5153 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:52:06.783000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 12:52:06.783000 audit: BPF prog-id=192 op=LOAD Dec 16 12:52:06.783000 audit[5153]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe8050d0b0 a2=94 a3=7ffe8050d290 items=0 ppid=5011 pid=5153 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:52:06.783000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 12:52:06.783000 audit: BPF prog-id=192 op=UNLOAD Dec 16 12:52:06.783000 audit[5153]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffe8050d0b0 a2=94 a3=7ffe8050d290 items=0 ppid=5011 pid=5153 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:52:06.783000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 12:52:06.802000 audit: BPF prog-id=193 op=LOAD Dec 16 12:52:06.802000 audit[5159]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffde2d84a40 a2=98 a3=3 items=0 ppid=5011 pid=5159 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:52:06.802000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:52:06.802000 audit: BPF prog-id=193 op=UNLOAD Dec 16 12:52:06.802000 audit[5159]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffde2d84a10 a3=0 items=0 ppid=5011 pid=5159 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:52:06.802000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:52:06.804000 audit: BPF prog-id=194 op=LOAD Dec 16 12:52:06.804000 audit[5159]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffde2d84830 a2=94 a3=54428f items=0 ppid=5011 pid=5159 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:52:06.804000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:52:06.804000 audit: BPF prog-id=194 op=UNLOAD Dec 16 12:52:06.804000 audit[5159]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffde2d84830 a2=94 a3=54428f items=0 ppid=5011 pid=5159 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:52:06.804000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:52:06.804000 audit: BPF prog-id=195 op=LOAD Dec 16 12:52:06.804000 audit[5159]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffde2d84860 a2=94 a3=2 items=0 ppid=5011 pid=5159 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:52:06.804000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:52:06.804000 audit: BPF prog-id=195 op=UNLOAD Dec 16 12:52:06.804000 audit[5159]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffde2d84860 a2=0 a3=2 items=0 ppid=5011 pid=5159 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:52:06.804000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:52:06.860265 containerd[1969]: time="2025-12-16T12:52:06.860213718Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-595b9f8d69-rdqmx,Uid:0a0e312f-8a41-4806-bbcc-0e2c96e89982,Namespace:calico-system,Attempt:0,} returns sandbox id \"d0e09b9ff7232f41c0d82d33fb3dd3c4c1230b1d22a16efc6d19a580079f8eb8\"" Dec 16 12:52:06.901455 containerd[1969]: time="2025-12-16T12:52:06.901411374Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 16 12:52:06.980875 kubelet[3300]: I1216 12:52:06.980825 3300 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4f37d126-32e9-4a2f-b0c1-8df2ac964398" path="/var/lib/kubelet/pods/4f37d126-32e9-4a2f-b0c1-8df2ac964398/volumes" Dec 16 12:52:07.050000 audit: BPF prog-id=196 op=LOAD Dec 16 12:52:07.050000 audit[5159]: SYSCALL arch=c000003e 
syscall=321 success=yes exit=4 a0=5 a1=7ffde2d84720 a2=94 a3=1 items=0 ppid=5011 pid=5159 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:52:07.050000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:52:07.050000 audit: BPF prog-id=196 op=UNLOAD Dec 16 12:52:07.050000 audit[5159]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffde2d84720 a2=94 a3=1 items=0 ppid=5011 pid=5159 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:52:07.050000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:52:07.063000 audit: BPF prog-id=197 op=LOAD Dec 16 12:52:07.063000 audit[5159]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffde2d84710 a2=94 a3=4 items=0 ppid=5011 pid=5159 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:52:07.063000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:52:07.063000 audit: BPF prog-id=197 op=UNLOAD Dec 16 12:52:07.063000 audit[5159]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffde2d84710 a2=0 a3=4 items=0 ppid=5011 pid=5159 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:52:07.063000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:52:07.064000 audit: BPF prog-id=198 op=LOAD Dec 16 12:52:07.064000 audit[5159]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffde2d84570 a2=94 a3=5 items=0 
ppid=5011 pid=5159 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:52:07.064000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:52:07.064000 audit: BPF prog-id=198 op=UNLOAD Dec 16 12:52:07.064000 audit[5159]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffde2d84570 a2=0 a3=5 items=0 ppid=5011 pid=5159 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:52:07.064000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:52:07.064000 audit: BPF prog-id=199 op=LOAD Dec 16 12:52:07.064000 audit[5159]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffde2d84790 a2=94 a3=6 items=0 ppid=5011 pid=5159 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:52:07.064000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:52:07.064000 audit: BPF prog-id=199 op=UNLOAD Dec 16 12:52:07.064000 audit[5159]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffde2d84790 a2=0 a3=6 items=0 ppid=5011 pid=5159 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:52:07.064000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:52:07.064000 audit: BPF prog-id=200 op=LOAD Dec 16 12:52:07.064000 audit[5159]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffde2d83f40 a2=94 a3=88 items=0 ppid=5011 pid=5159 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:52:07.064000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:52:07.065000 audit: BPF prog-id=201 op=LOAD Dec 16 12:52:07.065000 audit[5159]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7ffde2d83dc0 a2=94 a3=2 items=0 ppid=5011 pid=5159 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:52:07.065000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:52:07.065000 audit: BPF prog-id=201 op=UNLOAD Dec 16 12:52:07.065000 audit[5159]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7ffde2d83df0 a2=0 a3=7ffde2d83ef0 items=0 ppid=5011 pid=5159 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:52:07.065000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:52:07.065000 audit: BPF prog-id=200 op=UNLOAD Dec 16 12:52:07.065000 audit[5159]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=2869ad10 a2=0 a3=2d8cac5565b55d88 items=0 ppid=5011 pid=5159 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:52:07.065000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:52:07.075000 audit: BPF prog-id=202 op=LOAD Dec 16 12:52:07.075000 audit[5173]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffcfecfda90 a2=98 a3=1999999999999999 items=0 ppid=5011 pid=5173 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:52:07.075000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 12:52:07.075000 audit: BPF prog-id=202 op=UNLOAD Dec 16 12:52:07.075000 audit[5173]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffcfecfda60 a3=0 items=0 ppid=5011 pid=5173 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:52:07.075000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 12:52:07.075000 audit: BPF prog-id=203 op=LOAD Dec 16 12:52:07.075000 audit[5173]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffcfecfd970 a2=94 a3=ffff items=0 ppid=5011 pid=5173 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:52:07.075000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 12:52:07.075000 audit: BPF prog-id=203 op=UNLOAD Dec 16 12:52:07.075000 audit[5173]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffcfecfd970 a2=94 a3=ffff items=0 ppid=5011 pid=5173 auid=4294967295 
uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:52:07.075000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 12:52:07.075000 audit: BPF prog-id=204 op=LOAD Dec 16 12:52:07.075000 audit[5173]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffcfecfd9b0 a2=94 a3=7ffcfecfdb90 items=0 ppid=5011 pid=5173 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:52:07.075000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 12:52:07.075000 audit: BPF prog-id=204 op=UNLOAD Dec 16 12:52:07.075000 audit[5173]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffcfecfd9b0 a2=94 a3=7ffcfecfdb90 items=0 ppid=5011 pid=5173 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:52:07.075000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 12:52:07.151757 systemd-networkd[1548]: vxlan.calico: Link UP Dec 16 12:52:07.151765 systemd-networkd[1548]: 
vxlan.calico: Gained carrier Dec 16 12:52:07.171000 audit: BPF prog-id=205 op=LOAD Dec 16 12:52:07.171000 audit[5198]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffebda63d60 a2=98 a3=0 items=0 ppid=5011 pid=5198 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:52:07.171000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 12:52:07.171000 audit: BPF prog-id=205 op=UNLOAD Dec 16 12:52:07.171000 audit[5198]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffebda63d30 a3=0 items=0 ppid=5011 pid=5198 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:52:07.171000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 12:52:07.172000 audit: BPF prog-id=206 op=LOAD Dec 16 12:52:07.172000 audit[5198]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffebda63b70 a2=94 a3=54428f items=0 ppid=5011 pid=5198 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:52:07.172000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 12:52:07.172000 audit: BPF 
prog-id=206 op=UNLOAD Dec 16 12:52:07.172000 audit[5198]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffebda63b70 a2=94 a3=54428f items=0 ppid=5011 pid=5198 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:52:07.172000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 12:52:07.172000 audit: BPF prog-id=207 op=LOAD Dec 16 12:52:07.172000 audit[5198]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffebda63ba0 a2=94 a3=2 items=0 ppid=5011 pid=5198 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:52:07.172000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 12:52:07.172000 audit: BPF prog-id=207 op=UNLOAD Dec 16 12:52:07.172000 audit[5198]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffebda63ba0 a2=0 a3=2 items=0 ppid=5011 pid=5198 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:52:07.172000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 12:52:07.172000 audit: BPF prog-id=208 op=LOAD Dec 16 12:52:07.172000 audit[5198]: SYSCALL 
arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffebda63950 a2=94 a3=4 items=0 ppid=5011 pid=5198 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:52:07.172000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 12:52:07.172000 audit: BPF prog-id=208 op=UNLOAD Dec 16 12:52:07.172000 audit[5198]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffebda63950 a2=94 a3=4 items=0 ppid=5011 pid=5198 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:52:07.172000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 12:52:07.172000 audit: BPF prog-id=209 op=LOAD Dec 16 12:52:07.172000 audit[5198]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffebda63a50 a2=94 a3=7ffebda63bd0 items=0 ppid=5011 pid=5198 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:52:07.172000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 12:52:07.172000 audit: BPF prog-id=209 op=UNLOAD Dec 16 12:52:07.172000 audit[5198]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 
a1=7ffebda63a50 a2=0 a3=7ffebda63bd0 items=0 ppid=5011 pid=5198 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:52:07.172000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 12:52:07.173000 audit: BPF prog-id=210 op=LOAD Dec 16 12:52:07.173000 audit[5198]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffebda63180 a2=94 a3=2 items=0 ppid=5011 pid=5198 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:52:07.173000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 12:52:07.173000 audit: BPF prog-id=210 op=UNLOAD Dec 16 12:52:07.173000 audit[5198]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffebda63180 a2=0 a3=2 items=0 ppid=5011 pid=5198 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:52:07.173000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 12:52:07.173000 audit: BPF prog-id=211 op=LOAD Dec 16 12:52:07.173000 audit[5198]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffebda63280 a2=94 a3=30 items=0 ppid=5011 pid=5198 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:52:07.173000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 12:52:07.176832 (udev-worker)[5083]: Network interface NamePolicy= disabled on kernel command line. Dec 16 12:52:07.187000 audit: BPF prog-id=212 op=LOAD Dec 16 12:52:07.187000 audit[5202]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe9c5ed4d0 a2=98 a3=0 items=0 ppid=5011 pid=5202 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:52:07.187000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:52:07.188000 audit: BPF prog-id=212 op=UNLOAD Dec 16 12:52:07.188000 audit[5202]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffe9c5ed4a0 a3=0 items=0 ppid=5011 pid=5202 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:52:07.188000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:52:07.188000 audit: BPF prog-id=213 op=LOAD Dec 16 12:52:07.188000 audit[5202]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffe9c5ed2c0 a2=94 a3=54428f items=0 ppid=5011 pid=5202 auid=4294967295 
uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:52:07.188000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:52:07.188000 audit: BPF prog-id=213 op=UNLOAD Dec 16 12:52:07.188000 audit[5202]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffe9c5ed2c0 a2=94 a3=54428f items=0 ppid=5011 pid=5202 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:52:07.188000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:52:07.188000 audit: BPF prog-id=214 op=LOAD Dec 16 12:52:07.188000 audit[5202]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffe9c5ed2f0 a2=94 a3=2 items=0 ppid=5011 pid=5202 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:52:07.188000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:52:07.188000 audit: BPF prog-id=214 op=UNLOAD Dec 16 12:52:07.188000 audit[5202]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffe9c5ed2f0 a2=0 a3=2 items=0 ppid=5011 pid=5202 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 
key=(null) Dec 16 12:52:07.188000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:52:07.220482 containerd[1969]: time="2025-12-16T12:52:07.220440770Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:52:07.222866 containerd[1969]: time="2025-12-16T12:52:07.222827000Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 16 12:52:07.223133 containerd[1969]: time="2025-12-16T12:52:07.223030880Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 16 12:52:07.223426 kubelet[3300]: E1216 12:52:07.223399 3300 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 12:52:07.227802 kubelet[3300]: E1216 12:52:07.227739 3300 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 12:52:07.245882 kubelet[3300]: E1216 12:52:07.245823 3300 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:47cc487b7b6d43fa90aba00c924ab97e,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4vw2m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-595b9f8d69-rdqmx_calico-system(0a0e312f-8a41-4806-bbcc-0e2c96e89982): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 16 12:52:07.248244 containerd[1969]: time="2025-12-16T12:52:07.248027322Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 16 12:52:07.362000 audit: BPF prog-id=215 op=LOAD Dec 16 
12:52:07.362000 audit[5202]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffe9c5ed1b0 a2=94 a3=1 items=0 ppid=5011 pid=5202 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:52:07.362000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:52:07.362000 audit: BPF prog-id=215 op=UNLOAD Dec 16 12:52:07.362000 audit[5202]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffe9c5ed1b0 a2=94 a3=1 items=0 ppid=5011 pid=5202 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:52:07.362000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:52:07.375000 audit: BPF prog-id=216 op=LOAD Dec 16 12:52:07.375000 audit[5202]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffe9c5ed1a0 a2=94 a3=4 items=0 ppid=5011 pid=5202 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:52:07.375000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:52:07.375000 audit: BPF prog-id=216 op=UNLOAD Dec 16 12:52:07.375000 audit[5202]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffe9c5ed1a0 a2=0 a3=4 items=0 ppid=5011 pid=5202 auid=4294967295 
uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:52:07.375000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:52:07.375000 audit: BPF prog-id=217 op=LOAD Dec 16 12:52:07.375000 audit[5202]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffe9c5ed000 a2=94 a3=5 items=0 ppid=5011 pid=5202 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:52:07.375000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:52:07.376000 audit: BPF prog-id=217 op=UNLOAD Dec 16 12:52:07.376000 audit[5202]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffe9c5ed000 a2=0 a3=5 items=0 ppid=5011 pid=5202 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:52:07.376000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:52:07.376000 audit: BPF prog-id=218 op=LOAD Dec 16 12:52:07.376000 audit[5202]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffe9c5ed220 a2=94 a3=6 items=0 ppid=5011 pid=5202 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 
key=(null) Dec 16 12:52:07.376000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:52:07.376000 audit: BPF prog-id=218 op=UNLOAD Dec 16 12:52:07.376000 audit[5202]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffe9c5ed220 a2=0 a3=6 items=0 ppid=5011 pid=5202 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:52:07.376000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:52:07.376000 audit: BPF prog-id=219 op=LOAD Dec 16 12:52:07.376000 audit[5202]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffe9c5ec9d0 a2=94 a3=88 items=0 ppid=5011 pid=5202 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:52:07.376000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:52:07.377000 audit: BPF prog-id=220 op=LOAD Dec 16 12:52:07.377000 audit[5202]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7ffe9c5ec850 a2=94 a3=2 items=0 ppid=5011 pid=5202 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:52:07.377000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:52:07.377000 audit: BPF prog-id=220 op=UNLOAD Dec 16 12:52:07.377000 audit[5202]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7ffe9c5ec880 a2=0 a3=7ffe9c5ec980 items=0 ppid=5011 pid=5202 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:52:07.377000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:52:07.377000 audit: BPF prog-id=219 op=UNLOAD Dec 16 12:52:07.377000 audit[5202]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=d960d10 a2=0 a3=f4e10d863c47d68f items=0 ppid=5011 pid=5202 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:52:07.377000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:52:07.382000 audit: BPF prog-id=211 op=UNLOAD Dec 16 12:52:07.382000 audit[5011]: SYSCALL arch=c000003e syscall=263 success=yes exit=0 a0=ffffffffffffff9c a1=c000788200 a2=0 a3=0 items=0 ppid=4992 pid=5011 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="calico-node" exe="/usr/bin/calico-node" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:52:07.382000 audit: PROCTITLE proctitle=63616C69636F2D6E6F6465002D66656C6978 Dec 16 12:52:07.457000 audit[5225]: NETFILTER_CFG table=raw:121 family=2 
entries=21 op=nft_register_chain pid=5225 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 12:52:07.457000 audit[5225]: SYSCALL arch=c000003e syscall=46 success=yes exit=8452 a0=3 a1=7ffda5a3d410 a2=0 a3=7ffda5a3d3fc items=0 ppid=5011 pid=5225 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:52:07.457000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 12:52:07.458000 audit[5230]: NETFILTER_CFG table=nat:122 family=2 entries=15 op=nft_register_chain pid=5230 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 12:52:07.458000 audit[5230]: SYSCALL arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7fff01a0efe0 a2=0 a3=7fff01a0efcc items=0 ppid=5011 pid=5230 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:52:07.458000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 12:52:07.464000 audit[5232]: NETFILTER_CFG table=mangle:123 family=2 entries=16 op=nft_register_chain pid=5232 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 12:52:07.464000 audit[5232]: SYSCALL arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7ffc13808370 a2=0 a3=7ffc1380835c items=0 ppid=5011 pid=5232 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:52:07.464000 audit: PROCTITLE 
proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 12:52:07.473000 audit[5229]: NETFILTER_CFG table=filter:124 family=2 entries=94 op=nft_register_chain pid=5229 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 12:52:07.473000 audit[5229]: SYSCALL arch=c000003e syscall=46 success=yes exit=53116 a0=3 a1=7ffdac454100 a2=0 a3=7ffdac4540ec items=0 ppid=5011 pid=5229 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:52:07.473000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 12:52:07.497922 containerd[1969]: time="2025-12-16T12:52:07.497781342Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:52:07.500941 containerd[1969]: time="2025-12-16T12:52:07.500309908Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 16 12:52:07.501207 containerd[1969]: time="2025-12-16T12:52:07.500486982Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 16 12:52:07.501426 kubelet[3300]: E1216 12:52:07.501368 3300 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 
12:52:07.501426 kubelet[3300]: E1216 12:52:07.501431 3300 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 12:52:07.501897 kubelet[3300]: E1216 12:52:07.501834 3300 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4vw2m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,
SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-595b9f8d69-rdqmx_calico-system(0a0e312f-8a41-4806-bbcc-0e2c96e89982): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 16 12:52:07.503220 kubelet[3300]: E1216 12:52:07.503164 3300 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-595b9f8d69-rdqmx" podUID="0a0e312f-8a41-4806-bbcc-0e2c96e89982" Dec 16 12:52:07.638453 systemd-networkd[1548]: calif1606d9e0cd: Gained IPv6LL Dec 16 12:52:08.255456 kubelet[3300]: E1216 12:52:08.255348 3300 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with 
ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-595b9f8d69-rdqmx" podUID="0a0e312f-8a41-4806-bbcc-0e2c96e89982" Dec 16 12:52:08.304000 audit[5242]: NETFILTER_CFG table=filter:125 family=2 entries=20 op=nft_register_rule pid=5242 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:52:08.304000 audit[5242]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffda30ad910 a2=0 a3=7ffda30ad8fc items=0 ppid=3722 pid=5242 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:52:08.304000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:52:08.307000 audit[5242]: NETFILTER_CFG table=nat:126 family=2 entries=14 op=nft_register_rule pid=5242 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:52:08.307000 audit[5242]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffda30ad910 a2=0 a3=0 items=0 ppid=3722 pid=5242 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:52:08.307000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:52:08.726548 systemd-networkd[1548]: vxlan.calico: Gained IPv6LL Dec 16 12:52:10.322000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 
msg='unit=sshd@7-172.31.17.11:22-147.75.109.163:48298 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:52:10.323122 systemd[1]: Started sshd@7-172.31.17.11:22-147.75.109.163:48298.service - OpenSSH per-connection server daemon (147.75.109.163:48298). Dec 16 12:52:10.531000 audit[5249]: USER_ACCT pid=5249 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:52:10.532923 sshd[5249]: Accepted publickey for core from 147.75.109.163 port 48298 ssh2: RSA SHA256:U/DoHe1xJfGISnzLa/L/V6WMQEeU/gRWF7ew5p5tFyo Dec 16 12:52:10.533000 audit[5249]: CRED_ACQ pid=5249 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:52:10.533000 audit[5249]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcc14d0c90 a2=3 a3=0 items=0 ppid=1 pid=5249 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:52:10.533000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:52:10.535389 sshd-session[5249]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:52:10.545184 systemd-logind[1926]: New session 9 of user core. Dec 16 12:52:10.553160 systemd[1]: Started session-9.scope - Session 9 of User core. 
Dec 16 12:52:10.555000 audit[5249]: USER_START pid=5249 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:52:10.557000 audit[5253]: CRED_ACQ pid=5253 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:52:10.855390 ntpd[1914]: Listen normally on 6 vxlan.calico 192.168.15.192:123 Dec 16 12:52:10.855769 ntpd[1914]: 16 Dec 12:52:10 ntpd[1914]: Listen normally on 6 vxlan.calico 192.168.15.192:123 Dec 16 12:52:10.855769 ntpd[1914]: 16 Dec 12:52:10 ntpd[1914]: Listen normally on 7 calif1606d9e0cd [fe80::ecee:eeff:feee:eeee%4]:123 Dec 16 12:52:10.855769 ntpd[1914]: 16 Dec 12:52:10 ntpd[1914]: Listen normally on 8 vxlan.calico [fe80::648d:65ff:fe96:cb6f%5]:123 Dec 16 12:52:10.855461 ntpd[1914]: Listen normally on 7 calif1606d9e0cd [fe80::ecee:eeff:feee:eeee%4]:123 Dec 16 12:52:10.855487 ntpd[1914]: Listen normally on 8 vxlan.calico [fe80::648d:65ff:fe96:cb6f%5]:123 Dec 16 12:52:11.390930 sshd[5253]: Connection closed by 147.75.109.163 port 48298 Dec 16 12:52:11.391603 sshd-session[5249]: pam_unix(sshd:session): session closed for user core Dec 16 12:52:11.394000 audit[5249]: USER_END pid=5249 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:52:11.394000 audit[5249]: CRED_DISP pid=5249 uid=0 auid=500 ses=9 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:52:11.397452 systemd[1]: sshd@7-172.31.17.11:22-147.75.109.163:48298.service: Deactivated successfully. Dec 16 12:52:11.396000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-172.31.17.11:22-147.75.109.163:48298 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:52:11.399557 systemd[1]: session-9.scope: Deactivated successfully. Dec 16 12:52:11.403527 systemd-logind[1926]: Session 9 logged out. Waiting for processes to exit. Dec 16 12:52:11.405144 systemd-logind[1926]: Removed session 9. Dec 16 12:52:11.921486 containerd[1969]: time="2025-12-16T12:52:11.921447704Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b5d97d74b-j9kr5,Uid:07dabb60-b494-485b-be91-7522183aff41,Namespace:calico-apiserver,Attempt:0,}" Dec 16 12:52:12.092726 systemd-networkd[1548]: calic462f062ea4: Link UP Dec 16 12:52:12.093956 systemd-networkd[1548]: calic462f062ea4: Gained carrier Dec 16 12:52:12.098386 (udev-worker)[5286]: Network interface NamePolicy= disabled on kernel command line. 
Dec 16 12:52:12.131013 containerd[1969]: 2025-12-16 12:52:11.981 [INFO][5266] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--17--11-k8s-calico--apiserver--7b5d97d74b--j9kr5-eth0 calico-apiserver-7b5d97d74b- calico-apiserver 07dabb60-b494-485b-be91-7522183aff41 847 0 2025-12-16 12:51:28 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7b5d97d74b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-17-11 calico-apiserver-7b5d97d74b-j9kr5 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calic462f062ea4 [] [] }} ContainerID="f657a2cd22fd1374ac5e98f69aec5927d1e4f1bba8853b83b11304ba858aa4bb" Namespace="calico-apiserver" Pod="calico-apiserver-7b5d97d74b-j9kr5" WorkloadEndpoint="ip--172--31--17--11-k8s-calico--apiserver--7b5d97d74b--j9kr5-" Dec 16 12:52:12.131013 containerd[1969]: 2025-12-16 12:52:11.981 [INFO][5266] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f657a2cd22fd1374ac5e98f69aec5927d1e4f1bba8853b83b11304ba858aa4bb" Namespace="calico-apiserver" Pod="calico-apiserver-7b5d97d74b-j9kr5" WorkloadEndpoint="ip--172--31--17--11-k8s-calico--apiserver--7b5d97d74b--j9kr5-eth0" Dec 16 12:52:12.131013 containerd[1969]: 2025-12-16 12:52:12.016 [INFO][5277] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f657a2cd22fd1374ac5e98f69aec5927d1e4f1bba8853b83b11304ba858aa4bb" HandleID="k8s-pod-network.f657a2cd22fd1374ac5e98f69aec5927d1e4f1bba8853b83b11304ba858aa4bb" Workload="ip--172--31--17--11-k8s-calico--apiserver--7b5d97d74b--j9kr5-eth0" Dec 16 12:52:12.131740 containerd[1969]: 2025-12-16 12:52:12.016 [INFO][5277] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="f657a2cd22fd1374ac5e98f69aec5927d1e4f1bba8853b83b11304ba858aa4bb" 
HandleID="k8s-pod-network.f657a2cd22fd1374ac5e98f69aec5927d1e4f1bba8853b83b11304ba858aa4bb" Workload="ip--172--31--17--11-k8s-calico--apiserver--7b5d97d74b--j9kr5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d55e0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-17-11", "pod":"calico-apiserver-7b5d97d74b-j9kr5", "timestamp":"2025-12-16 12:52:12.016037911 +0000 UTC"}, Hostname:"ip-172-31-17-11", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 12:52:12.131740 containerd[1969]: 2025-12-16 12:52:12.016 [INFO][5277] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 12:52:12.131740 containerd[1969]: 2025-12-16 12:52:12.016 [INFO][5277] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 12:52:12.131740 containerd[1969]: 2025-12-16 12:52:12.016 [INFO][5277] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-17-11' Dec 16 12:52:12.131740 containerd[1969]: 2025-12-16 12:52:12.030 [INFO][5277] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f657a2cd22fd1374ac5e98f69aec5927d1e4f1bba8853b83b11304ba858aa4bb" host="ip-172-31-17-11" Dec 16 12:52:12.131740 containerd[1969]: 2025-12-16 12:52:12.037 [INFO][5277] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-17-11" Dec 16 12:52:12.131740 containerd[1969]: 2025-12-16 12:52:12.055 [INFO][5277] ipam/ipam.go 511: Trying affinity for 192.168.15.192/26 host="ip-172-31-17-11" Dec 16 12:52:12.131740 containerd[1969]: 2025-12-16 12:52:12.060 [INFO][5277] ipam/ipam.go 158: Attempting to load block cidr=192.168.15.192/26 host="ip-172-31-17-11" Dec 16 12:52:12.131740 containerd[1969]: 2025-12-16 12:52:12.063 [INFO][5277] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.15.192/26 
host="ip-172-31-17-11" Dec 16 12:52:12.133014 containerd[1969]: 2025-12-16 12:52:12.063 [INFO][5277] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.15.192/26 handle="k8s-pod-network.f657a2cd22fd1374ac5e98f69aec5927d1e4f1bba8853b83b11304ba858aa4bb" host="ip-172-31-17-11" Dec 16 12:52:12.133014 containerd[1969]: 2025-12-16 12:52:12.066 [INFO][5277] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.f657a2cd22fd1374ac5e98f69aec5927d1e4f1bba8853b83b11304ba858aa4bb Dec 16 12:52:12.133014 containerd[1969]: 2025-12-16 12:52:12.072 [INFO][5277] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.15.192/26 handle="k8s-pod-network.f657a2cd22fd1374ac5e98f69aec5927d1e4f1bba8853b83b11304ba858aa4bb" host="ip-172-31-17-11" Dec 16 12:52:12.133014 containerd[1969]: 2025-12-16 12:52:12.083 [INFO][5277] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.15.194/26] block=192.168.15.192/26 handle="k8s-pod-network.f657a2cd22fd1374ac5e98f69aec5927d1e4f1bba8853b83b11304ba858aa4bb" host="ip-172-31-17-11" Dec 16 12:52:12.133014 containerd[1969]: 2025-12-16 12:52:12.083 [INFO][5277] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.15.194/26] handle="k8s-pod-network.f657a2cd22fd1374ac5e98f69aec5927d1e4f1bba8853b83b11304ba858aa4bb" host="ip-172-31-17-11" Dec 16 12:52:12.133014 containerd[1969]: 2025-12-16 12:52:12.083 [INFO][5277] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 16 12:52:12.133014 containerd[1969]: 2025-12-16 12:52:12.083 [INFO][5277] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.15.194/26] IPv6=[] ContainerID="f657a2cd22fd1374ac5e98f69aec5927d1e4f1bba8853b83b11304ba858aa4bb" HandleID="k8s-pod-network.f657a2cd22fd1374ac5e98f69aec5927d1e4f1bba8853b83b11304ba858aa4bb" Workload="ip--172--31--17--11-k8s-calico--apiserver--7b5d97d74b--j9kr5-eth0" Dec 16 12:52:12.133882 containerd[1969]: 2025-12-16 12:52:12.087 [INFO][5266] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f657a2cd22fd1374ac5e98f69aec5927d1e4f1bba8853b83b11304ba858aa4bb" Namespace="calico-apiserver" Pod="calico-apiserver-7b5d97d74b-j9kr5" WorkloadEndpoint="ip--172--31--17--11-k8s-calico--apiserver--7b5d97d74b--j9kr5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--11-k8s-calico--apiserver--7b5d97d74b--j9kr5-eth0", GenerateName:"calico-apiserver-7b5d97d74b-", Namespace:"calico-apiserver", SelfLink:"", UID:"07dabb60-b494-485b-be91-7522183aff41", ResourceVersion:"847", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 51, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7b5d97d74b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-11", ContainerID:"", Pod:"calico-apiserver-7b5d97d74b-j9kr5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.15.194/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic462f062ea4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:52:12.134000 containerd[1969]: 2025-12-16 12:52:12.087 [INFO][5266] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.15.194/32] ContainerID="f657a2cd22fd1374ac5e98f69aec5927d1e4f1bba8853b83b11304ba858aa4bb" Namespace="calico-apiserver" Pod="calico-apiserver-7b5d97d74b-j9kr5" WorkloadEndpoint="ip--172--31--17--11-k8s-calico--apiserver--7b5d97d74b--j9kr5-eth0" Dec 16 12:52:12.134000 containerd[1969]: 2025-12-16 12:52:12.087 [INFO][5266] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic462f062ea4 ContainerID="f657a2cd22fd1374ac5e98f69aec5927d1e4f1bba8853b83b11304ba858aa4bb" Namespace="calico-apiserver" Pod="calico-apiserver-7b5d97d74b-j9kr5" WorkloadEndpoint="ip--172--31--17--11-k8s-calico--apiserver--7b5d97d74b--j9kr5-eth0" Dec 16 12:52:12.134000 containerd[1969]: 2025-12-16 12:52:12.096 [INFO][5266] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f657a2cd22fd1374ac5e98f69aec5927d1e4f1bba8853b83b11304ba858aa4bb" Namespace="calico-apiserver" Pod="calico-apiserver-7b5d97d74b-j9kr5" WorkloadEndpoint="ip--172--31--17--11-k8s-calico--apiserver--7b5d97d74b--j9kr5-eth0" Dec 16 12:52:12.134078 containerd[1969]: 2025-12-16 12:52:12.100 [INFO][5266] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f657a2cd22fd1374ac5e98f69aec5927d1e4f1bba8853b83b11304ba858aa4bb" Namespace="calico-apiserver" Pod="calico-apiserver-7b5d97d74b-j9kr5" WorkloadEndpoint="ip--172--31--17--11-k8s-calico--apiserver--7b5d97d74b--j9kr5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--11-k8s-calico--apiserver--7b5d97d74b--j9kr5-eth0", GenerateName:"calico-apiserver-7b5d97d74b-", Namespace:"calico-apiserver", SelfLink:"", UID:"07dabb60-b494-485b-be91-7522183aff41", ResourceVersion:"847", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 51, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7b5d97d74b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-11", ContainerID:"f657a2cd22fd1374ac5e98f69aec5927d1e4f1bba8853b83b11304ba858aa4bb", Pod:"calico-apiserver-7b5d97d74b-j9kr5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.15.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic462f062ea4", MAC:"c6:ab:55:fc:c4:d5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:52:12.134139 containerd[1969]: 2025-12-16 12:52:12.125 [INFO][5266] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f657a2cd22fd1374ac5e98f69aec5927d1e4f1bba8853b83b11304ba858aa4bb" Namespace="calico-apiserver" Pod="calico-apiserver-7b5d97d74b-j9kr5" WorkloadEndpoint="ip--172--31--17--11-k8s-calico--apiserver--7b5d97d74b--j9kr5-eth0" Dec 16 12:52:12.181194 containerd[1969]: time="2025-12-16T12:52:12.181054867Z" level=info msg="connecting to shim 
f657a2cd22fd1374ac5e98f69aec5927d1e4f1bba8853b83b11304ba858aa4bb" address="unix:///run/containerd/s/7567464835c95a62708ebe68a0987e2954d23aa0a9b715845becc694d8b5bcaa" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:52:12.219000 audit[5315]: NETFILTER_CFG table=filter:127 family=2 entries=50 op=nft_register_chain pid=5315 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 12:52:12.222022 kernel: kauditd_printk_skb: 227 callbacks suppressed Dec 16 12:52:12.222083 kernel: audit: type=1325 audit(1765889532.219:685): table=filter:127 family=2 entries=50 op=nft_register_chain pid=5315 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 12:52:12.230003 kernel: audit: type=1300 audit(1765889532.219:685): arch=c000003e syscall=46 success=yes exit=28208 a0=3 a1=7ffce67c4280 a2=0 a3=7ffce67c426c items=0 ppid=5011 pid=5315 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:52:12.219000 audit[5315]: SYSCALL arch=c000003e syscall=46 success=yes exit=28208 a0=3 a1=7ffce67c4280 a2=0 a3=7ffce67c426c items=0 ppid=5011 pid=5315 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:52:12.219000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 12:52:12.233948 kernel: audit: type=1327 audit(1765889532.219:685): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 12:52:12.249176 systemd[1]: Started cri-containerd-f657a2cd22fd1374ac5e98f69aec5927d1e4f1bba8853b83b11304ba858aa4bb.scope - libcontainer container 
f657a2cd22fd1374ac5e98f69aec5927d1e4f1bba8853b83b11304ba858aa4bb. Dec 16 12:52:12.262000 audit: BPF prog-id=221 op=LOAD Dec 16 12:52:12.265348 kernel: audit: type=1334 audit(1765889532.262:686): prog-id=221 op=LOAD Dec 16 12:52:12.265438 kernel: audit: type=1334 audit(1765889532.264:687): prog-id=222 op=LOAD Dec 16 12:52:12.264000 audit: BPF prog-id=222 op=LOAD Dec 16 12:52:12.264000 audit[5313]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=5302 pid=5313 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:52:12.270966 kernel: audit: type=1300 audit(1765889532.264:687): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=5302 pid=5313 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:52:12.264000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6636353761326364323266643133373461633565393866363961656335 Dec 16 12:52:12.275966 kernel: audit: type=1327 audit(1765889532.264:687): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6636353761326364323266643133373461633565393866363961656335 Dec 16 12:52:12.264000 audit: BPF prog-id=222 op=UNLOAD Dec 16 12:52:12.282247 kernel: audit: type=1334 audit(1765889532.264:688): prog-id=222 op=UNLOAD Dec 16 12:52:12.282323 kernel: audit: type=1300 audit(1765889532.264:688): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5302 
pid=5313 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:52:12.264000 audit[5313]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5302 pid=5313 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:52:12.287461 kernel: audit: type=1327 audit(1765889532.264:688): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6636353761326364323266643133373461633565393866363961656335 Dec 16 12:52:12.264000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6636353761326364323266643133373461633565393866363961656335 Dec 16 12:52:12.264000 audit: BPF prog-id=223 op=LOAD Dec 16 12:52:12.264000 audit[5313]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=5302 pid=5313 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:52:12.264000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6636353761326364323266643133373461633565393866363961656335 Dec 16 12:52:12.264000 audit: BPF prog-id=224 op=LOAD Dec 16 12:52:12.264000 audit[5313]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 
a3=0 items=0 ppid=5302 pid=5313 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:52:12.264000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6636353761326364323266643133373461633565393866363961656335 Dec 16 12:52:12.264000 audit: BPF prog-id=224 op=UNLOAD Dec 16 12:52:12.264000 audit[5313]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5302 pid=5313 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:52:12.264000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6636353761326364323266643133373461633565393866363961656335 Dec 16 12:52:12.264000 audit: BPF prog-id=223 op=UNLOAD Dec 16 12:52:12.264000 audit[5313]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5302 pid=5313 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:52:12.264000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6636353761326364323266643133373461633565393866363961656335 Dec 16 12:52:12.264000 audit: BPF prog-id=225 op=LOAD Dec 16 12:52:12.264000 audit[5313]: SYSCALL arch=c000003e syscall=321 success=yes 
exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=5302 pid=5313 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:52:12.264000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6636353761326364323266643133373461633565393866363961656335 Dec 16 12:52:12.309531 containerd[1969]: time="2025-12-16T12:52:12.309489505Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b5d97d74b-j9kr5,Uid:07dabb60-b494-485b-be91-7522183aff41,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"f657a2cd22fd1374ac5e98f69aec5927d1e4f1bba8853b83b11304ba858aa4bb\"" Dec 16 12:52:12.313049 containerd[1969]: time="2025-12-16T12:52:12.313012290Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 12:52:12.591489 containerd[1969]: time="2025-12-16T12:52:12.591362433Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:52:12.593770 containerd[1969]: time="2025-12-16T12:52:12.593711482Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 12:52:12.593929 containerd[1969]: time="2025-12-16T12:52:12.593812152Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 12:52:12.594084 kubelet[3300]: E1216 12:52:12.594022 3300 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:52:12.594084 kubelet[3300]: E1216 12:52:12.594077 3300 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:52:12.594549 kubelet[3300]: E1216 12:52:12.594262 3300 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-spx94,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7b5d97d74b-j9kr5_calico-apiserver(07dabb60-b494-485b-be91-7522183aff41): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 12:52:12.595959 kubelet[3300]: E1216 12:52:12.595917 3300 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b5d97d74b-j9kr5" podUID="07dabb60-b494-485b-be91-7522183aff41" Dec 16 12:52:12.921729 containerd[1969]: time="2025-12-16T12:52:12.921544167Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-8gcbc,Uid:39a10cdc-f6c5-430e-ac11-7f9183d0c949,Namespace:calico-system,Attempt:0,}" Dec 16 12:52:13.040234 (udev-worker)[5288]: Network interface 
NamePolicy= disabled on kernel command line. Dec 16 12:52:13.042390 systemd-networkd[1548]: calie0fbb88eb5d: Link UP Dec 16 12:52:13.044248 systemd-networkd[1548]: calie0fbb88eb5d: Gained carrier Dec 16 12:52:13.064219 containerd[1969]: 2025-12-16 12:52:12.965 [INFO][5346] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--17--11-k8s-goldmane--666569f655--8gcbc-eth0 goldmane-666569f655- calico-system 39a10cdc-f6c5-430e-ac11-7f9183d0c949 843 0 2025-12-16 12:51:32 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ip-172-31-17-11 goldmane-666569f655-8gcbc eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calie0fbb88eb5d [] [] }} ContainerID="175902fadf7ee644cb795df6de76d90926bb0b31c31f8215f35daf6574dddffd" Namespace="calico-system" Pod="goldmane-666569f655-8gcbc" WorkloadEndpoint="ip--172--31--17--11-k8s-goldmane--666569f655--8gcbc-" Dec 16 12:52:13.064219 containerd[1969]: 2025-12-16 12:52:12.965 [INFO][5346] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="175902fadf7ee644cb795df6de76d90926bb0b31c31f8215f35daf6574dddffd" Namespace="calico-system" Pod="goldmane-666569f655-8gcbc" WorkloadEndpoint="ip--172--31--17--11-k8s-goldmane--666569f655--8gcbc-eth0" Dec 16 12:52:13.064219 containerd[1969]: 2025-12-16 12:52:12.992 [INFO][5358] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="175902fadf7ee644cb795df6de76d90926bb0b31c31f8215f35daf6574dddffd" HandleID="k8s-pod-network.175902fadf7ee644cb795df6de76d90926bb0b31c31f8215f35daf6574dddffd" Workload="ip--172--31--17--11-k8s-goldmane--666569f655--8gcbc-eth0" Dec 16 12:52:13.064596 containerd[1969]: 2025-12-16 12:52:12.993 [INFO][5358] ipam/ipam_plugin.go 275: Auto assigning IP 
ContainerID="175902fadf7ee644cb795df6de76d90926bb0b31c31f8215f35daf6574dddffd" HandleID="k8s-pod-network.175902fadf7ee644cb795df6de76d90926bb0b31c31f8215f35daf6574dddffd" Workload="ip--172--31--17--11-k8s-goldmane--666569f655--8gcbc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f090), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-17-11", "pod":"goldmane-666569f655-8gcbc", "timestamp":"2025-12-16 12:52:12.99280136 +0000 UTC"}, Hostname:"ip-172-31-17-11", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 12:52:13.064596 containerd[1969]: 2025-12-16 12:52:12.993 [INFO][5358] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 12:52:13.064596 containerd[1969]: 2025-12-16 12:52:12.993 [INFO][5358] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 12:52:13.064596 containerd[1969]: 2025-12-16 12:52:12.993 [INFO][5358] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-17-11' Dec 16 12:52:13.064596 containerd[1969]: 2025-12-16 12:52:13.000 [INFO][5358] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.175902fadf7ee644cb795df6de76d90926bb0b31c31f8215f35daf6574dddffd" host="ip-172-31-17-11" Dec 16 12:52:13.064596 containerd[1969]: 2025-12-16 12:52:13.005 [INFO][5358] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-17-11" Dec 16 12:52:13.064596 containerd[1969]: 2025-12-16 12:52:13.010 [INFO][5358] ipam/ipam.go 511: Trying affinity for 192.168.15.192/26 host="ip-172-31-17-11" Dec 16 12:52:13.064596 containerd[1969]: 2025-12-16 12:52:13.012 [INFO][5358] ipam/ipam.go 158: Attempting to load block cidr=192.168.15.192/26 host="ip-172-31-17-11" Dec 16 12:52:13.064596 containerd[1969]: 2025-12-16 12:52:13.014 [INFO][5358] ipam/ipam.go 235: Affinity is confirmed 
and block has been loaded cidr=192.168.15.192/26 host="ip-172-31-17-11" Dec 16 12:52:13.064843 containerd[1969]: 2025-12-16 12:52:13.014 [INFO][5358] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.15.192/26 handle="k8s-pod-network.175902fadf7ee644cb795df6de76d90926bb0b31c31f8215f35daf6574dddffd" host="ip-172-31-17-11" Dec 16 12:52:13.064843 containerd[1969]: 2025-12-16 12:52:13.016 [INFO][5358] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.175902fadf7ee644cb795df6de76d90926bb0b31c31f8215f35daf6574dddffd Dec 16 12:52:13.064843 containerd[1969]: 2025-12-16 12:52:13.024 [INFO][5358] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.15.192/26 handle="k8s-pod-network.175902fadf7ee644cb795df6de76d90926bb0b31c31f8215f35daf6574dddffd" host="ip-172-31-17-11" Dec 16 12:52:13.064843 containerd[1969]: 2025-12-16 12:52:13.035 [INFO][5358] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.15.195/26] block=192.168.15.192/26 handle="k8s-pod-network.175902fadf7ee644cb795df6de76d90926bb0b31c31f8215f35daf6574dddffd" host="ip-172-31-17-11" Dec 16 12:52:13.064843 containerd[1969]: 2025-12-16 12:52:13.035 [INFO][5358] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.15.195/26] handle="k8s-pod-network.175902fadf7ee644cb795df6de76d90926bb0b31c31f8215f35daf6574dddffd" host="ip-172-31-17-11" Dec 16 12:52:13.064843 containerd[1969]: 2025-12-16 12:52:13.035 [INFO][5358] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 16 12:52:13.064843 containerd[1969]: 2025-12-16 12:52:13.035 [INFO][5358] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.15.195/26] IPv6=[] ContainerID="175902fadf7ee644cb795df6de76d90926bb0b31c31f8215f35daf6574dddffd" HandleID="k8s-pod-network.175902fadf7ee644cb795df6de76d90926bb0b31c31f8215f35daf6574dddffd" Workload="ip--172--31--17--11-k8s-goldmane--666569f655--8gcbc-eth0" Dec 16 12:52:13.065129 containerd[1969]: 2025-12-16 12:52:13.037 [INFO][5346] cni-plugin/k8s.go 418: Populated endpoint ContainerID="175902fadf7ee644cb795df6de76d90926bb0b31c31f8215f35daf6574dddffd" Namespace="calico-system" Pod="goldmane-666569f655-8gcbc" WorkloadEndpoint="ip--172--31--17--11-k8s-goldmane--666569f655--8gcbc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--11-k8s-goldmane--666569f655--8gcbc-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"39a10cdc-f6c5-430e-ac11-7f9183d0c949", ResourceVersion:"843", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 51, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-11", ContainerID:"", Pod:"goldmane-666569f655-8gcbc", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.15.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, 
InterfaceName:"calie0fbb88eb5d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:52:13.065129 containerd[1969]: 2025-12-16 12:52:13.037 [INFO][5346] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.15.195/32] ContainerID="175902fadf7ee644cb795df6de76d90926bb0b31c31f8215f35daf6574dddffd" Namespace="calico-system" Pod="goldmane-666569f655-8gcbc" WorkloadEndpoint="ip--172--31--17--11-k8s-goldmane--666569f655--8gcbc-eth0" Dec 16 12:52:13.065242 containerd[1969]: 2025-12-16 12:52:13.037 [INFO][5346] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie0fbb88eb5d ContainerID="175902fadf7ee644cb795df6de76d90926bb0b31c31f8215f35daf6574dddffd" Namespace="calico-system" Pod="goldmane-666569f655-8gcbc" WorkloadEndpoint="ip--172--31--17--11-k8s-goldmane--666569f655--8gcbc-eth0" Dec 16 12:52:13.065242 containerd[1969]: 2025-12-16 12:52:13.044 [INFO][5346] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="175902fadf7ee644cb795df6de76d90926bb0b31c31f8215f35daf6574dddffd" Namespace="calico-system" Pod="goldmane-666569f655-8gcbc" WorkloadEndpoint="ip--172--31--17--11-k8s-goldmane--666569f655--8gcbc-eth0" Dec 16 12:52:13.065308 containerd[1969]: 2025-12-16 12:52:13.047 [INFO][5346] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="175902fadf7ee644cb795df6de76d90926bb0b31c31f8215f35daf6574dddffd" Namespace="calico-system" Pod="goldmane-666569f655-8gcbc" WorkloadEndpoint="ip--172--31--17--11-k8s-goldmane--666569f655--8gcbc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--11-k8s-goldmane--666569f655--8gcbc-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"39a10cdc-f6c5-430e-ac11-7f9183d0c949", ResourceVersion:"843", Generation:0, 
CreationTimestamp:time.Date(2025, time.December, 16, 12, 51, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-11", ContainerID:"175902fadf7ee644cb795df6de76d90926bb0b31c31f8215f35daf6574dddffd", Pod:"goldmane-666569f655-8gcbc", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.15.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calie0fbb88eb5d", MAC:"92:0a:0e:61:3d:23", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:52:13.065395 containerd[1969]: 2025-12-16 12:52:13.059 [INFO][5346] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="175902fadf7ee644cb795df6de76d90926bb0b31c31f8215f35daf6574dddffd" Namespace="calico-system" Pod="goldmane-666569f655-8gcbc" WorkloadEndpoint="ip--172--31--17--11-k8s-goldmane--666569f655--8gcbc-eth0" Dec 16 12:52:13.076000 audit[5375]: NETFILTER_CFG table=filter:128 family=2 entries=48 op=nft_register_chain pid=5375 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 12:52:13.076000 audit[5375]: SYSCALL arch=c000003e syscall=46 success=yes exit=26368 a0=3 a1=7ffd0fb86ff0 a2=0 a3=7ffd0fb86fdc items=0 ppid=5011 pid=5375 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:52:13.076000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 12:52:13.098930 containerd[1969]: time="2025-12-16T12:52:13.098828815Z" level=info msg="connecting to shim 175902fadf7ee644cb795df6de76d90926bb0b31c31f8215f35daf6574dddffd" address="unix:///run/containerd/s/1ea40a81566d60717efc8c99c7317ed54dcf6d0bb9939539c31932fb5c7b6f65" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:52:13.139191 systemd[1]: Started cri-containerd-175902fadf7ee644cb795df6de76d90926bb0b31c31f8215f35daf6574dddffd.scope - libcontainer container 175902fadf7ee644cb795df6de76d90926bb0b31c31f8215f35daf6574dddffd. Dec 16 12:52:13.153000 audit: BPF prog-id=226 op=LOAD Dec 16 12:52:13.154000 audit: BPF prog-id=227 op=LOAD Dec 16 12:52:13.154000 audit[5396]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=5383 pid=5396 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:52:13.154000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3137353930326661646637656536343463623739356466366465373664 Dec 16 12:52:13.154000 audit: BPF prog-id=227 op=UNLOAD Dec 16 12:52:13.154000 audit[5396]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=5383 pid=5396 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:52:13.154000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3137353930326661646637656536343463623739356466366465373664 Dec 16 12:52:13.154000 audit: BPF prog-id=228 op=LOAD Dec 16 12:52:13.154000 audit[5396]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=5383 pid=5396 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:52:13.154000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3137353930326661646637656536343463623739356466366465373664 Dec 16 12:52:13.154000 audit: BPF prog-id=229 op=LOAD Dec 16 12:52:13.154000 audit[5396]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=5383 pid=5396 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:52:13.154000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3137353930326661646637656536343463623739356466366465373664 Dec 16 12:52:13.154000 audit: BPF prog-id=229 op=UNLOAD Dec 16 12:52:13.154000 audit[5396]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=5383 pid=5396 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Dec 16 12:52:13.154000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3137353930326661646637656536343463623739356466366465373664 Dec 16 12:52:13.154000 audit: BPF prog-id=228 op=UNLOAD Dec 16 12:52:13.154000 audit[5396]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=5383 pid=5396 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:52:13.154000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3137353930326661646637656536343463623739356466366465373664 Dec 16 12:52:13.154000 audit: BPF prog-id=230 op=LOAD Dec 16 12:52:13.154000 audit[5396]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=5383 pid=5396 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:52:13.154000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3137353930326661646637656536343463623739356466366465373664 Dec 16 12:52:13.196449 containerd[1969]: time="2025-12-16T12:52:13.196261648Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-8gcbc,Uid:39a10cdc-f6c5-430e-ac11-7f9183d0c949,Namespace:calico-system,Attempt:0,} returns sandbox id 
\"175902fadf7ee644cb795df6de76d90926bb0b31c31f8215f35daf6574dddffd\"" Dec 16 12:52:13.198109 containerd[1969]: time="2025-12-16T12:52:13.198083313Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 16 12:52:13.280751 kubelet[3300]: E1216 12:52:13.278901 3300 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b5d97d74b-j9kr5" podUID="07dabb60-b494-485b-be91-7522183aff41" Dec 16 12:52:13.306000 audit[5422]: NETFILTER_CFG table=filter:129 family=2 entries=20 op=nft_register_rule pid=5422 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:52:13.306000 audit[5422]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffe4d1721b0 a2=0 a3=7ffe4d17219c items=0 ppid=3722 pid=5422 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:52:13.306000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:52:13.313000 audit[5422]: NETFILTER_CFG table=nat:130 family=2 entries=14 op=nft_register_rule pid=5422 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:52:13.313000 audit[5422]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffe4d1721b0 a2=0 a3=0 items=0 ppid=3722 pid=5422 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:52:13.313000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:52:13.462686 systemd-networkd[1548]: calic462f062ea4: Gained IPv6LL Dec 16 12:52:13.560677 containerd[1969]: time="2025-12-16T12:52:13.560455327Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:52:13.562803 containerd[1969]: time="2025-12-16T12:52:13.562700478Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 16 12:52:13.562803 containerd[1969]: time="2025-12-16T12:52:13.562765680Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 16 12:52:13.563143 kubelet[3300]: E1216 12:52:13.563066 3300 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 12:52:13.563143 kubelet[3300]: E1216 12:52:13.563109 3300 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 12:52:13.563359 kubelet[3300]: E1216 12:52:13.563268 3300 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hrjjg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-8gcbc_calico-system(39a10cdc-f6c5-430e-ac11-7f9183d0c949): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 16 12:52:13.564632 kubelet[3300]: E1216 12:52:13.564426 3300 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-8gcbc" podUID="39a10cdc-f6c5-430e-ac11-7f9183d0c949" Dec 16 12:52:13.922086 containerd[1969]: time="2025-12-16T12:52:13.922049321Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-rscvc,Uid:677a62d7-b648-43b1-ac2f-6337ecae67c0,Namespace:kube-system,Attempt:0,}" Dec 16 12:52:13.922416 containerd[1969]: time="2025-12-16T12:52:13.922238162Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-5b97648649-sj8kb,Uid:97ebcca4-f43d-4a70-b294-e93b18442671,Namespace:calico-system,Attempt:0,}" Dec 16 12:52:14.073570 systemd-networkd[1548]: cali2c7024f463c: Link UP Dec 16 12:52:14.075801 systemd-networkd[1548]: cali2c7024f463c: Gained carrier Dec 16 12:52:14.097819 containerd[1969]: 2025-12-16 12:52:13.982 [INFO][5425] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--17--11-k8s-calico--kube--controllers--5b97648649--sj8kb-eth0 calico-kube-controllers-5b97648649- calico-system 97ebcca4-f43d-4a70-b294-e93b18442671 844 0 2025-12-16 12:51:35 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5b97648649 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-17-11 calico-kube-controllers-5b97648649-sj8kb eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali2c7024f463c [] [] }} ContainerID="5320b19ccf48992d200cd511566e1aad0f963b9c40c10690e162872c429ec712" Namespace="calico-system" Pod="calico-kube-controllers-5b97648649-sj8kb" WorkloadEndpoint="ip--172--31--17--11-k8s-calico--kube--controllers--5b97648649--sj8kb-" Dec 16 12:52:14.097819 containerd[1969]: 2025-12-16 12:52:13.983 [INFO][5425] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5320b19ccf48992d200cd511566e1aad0f963b9c40c10690e162872c429ec712" Namespace="calico-system" Pod="calico-kube-controllers-5b97648649-sj8kb" WorkloadEndpoint="ip--172--31--17--11-k8s-calico--kube--controllers--5b97648649--sj8kb-eth0" Dec 16 12:52:14.097819 containerd[1969]: 2025-12-16 12:52:14.025 [INFO][5450] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5320b19ccf48992d200cd511566e1aad0f963b9c40c10690e162872c429ec712" 
HandleID="k8s-pod-network.5320b19ccf48992d200cd511566e1aad0f963b9c40c10690e162872c429ec712" Workload="ip--172--31--17--11-k8s-calico--kube--controllers--5b97648649--sj8kb-eth0" Dec 16 12:52:14.098437 containerd[1969]: 2025-12-16 12:52:14.025 [INFO][5450] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="5320b19ccf48992d200cd511566e1aad0f963b9c40c10690e162872c429ec712" HandleID="k8s-pod-network.5320b19ccf48992d200cd511566e1aad0f963b9c40c10690e162872c429ec712" Workload="ip--172--31--17--11-k8s-calico--kube--controllers--5b97648649--sj8kb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d57e0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-17-11", "pod":"calico-kube-controllers-5b97648649-sj8kb", "timestamp":"2025-12-16 12:52:14.025364691 +0000 UTC"}, Hostname:"ip-172-31-17-11", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 12:52:14.098437 containerd[1969]: 2025-12-16 12:52:14.025 [INFO][5450] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 12:52:14.098437 containerd[1969]: 2025-12-16 12:52:14.025 [INFO][5450] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 12:52:14.098437 containerd[1969]: 2025-12-16 12:52:14.025 [INFO][5450] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-17-11' Dec 16 12:52:14.098437 containerd[1969]: 2025-12-16 12:52:14.033 [INFO][5450] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5320b19ccf48992d200cd511566e1aad0f963b9c40c10690e162872c429ec712" host="ip-172-31-17-11" Dec 16 12:52:14.098437 containerd[1969]: 2025-12-16 12:52:14.039 [INFO][5450] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-17-11" Dec 16 12:52:14.098437 containerd[1969]: 2025-12-16 12:52:14.043 [INFO][5450] ipam/ipam.go 511: Trying affinity for 192.168.15.192/26 host="ip-172-31-17-11" Dec 16 12:52:14.098437 containerd[1969]: 2025-12-16 12:52:14.046 [INFO][5450] ipam/ipam.go 158: Attempting to load block cidr=192.168.15.192/26 host="ip-172-31-17-11" Dec 16 12:52:14.098437 containerd[1969]: 2025-12-16 12:52:14.048 [INFO][5450] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.15.192/26 host="ip-172-31-17-11" Dec 16 12:52:14.098836 containerd[1969]: 2025-12-16 12:52:14.049 [INFO][5450] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.15.192/26 handle="k8s-pod-network.5320b19ccf48992d200cd511566e1aad0f963b9c40c10690e162872c429ec712" host="ip-172-31-17-11" Dec 16 12:52:14.098836 containerd[1969]: 2025-12-16 12:52:14.050 [INFO][5450] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.5320b19ccf48992d200cd511566e1aad0f963b9c40c10690e162872c429ec712 Dec 16 12:52:14.098836 containerd[1969]: 2025-12-16 12:52:14.058 [INFO][5450] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.15.192/26 handle="k8s-pod-network.5320b19ccf48992d200cd511566e1aad0f963b9c40c10690e162872c429ec712" host="ip-172-31-17-11" Dec 16 12:52:14.098836 containerd[1969]: 2025-12-16 12:52:14.067 [INFO][5450] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.15.196/26] block=192.168.15.192/26 
handle="k8s-pod-network.5320b19ccf48992d200cd511566e1aad0f963b9c40c10690e162872c429ec712" host="ip-172-31-17-11" Dec 16 12:52:14.098836 containerd[1969]: 2025-12-16 12:52:14.067 [INFO][5450] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.15.196/26] handle="k8s-pod-network.5320b19ccf48992d200cd511566e1aad0f963b9c40c10690e162872c429ec712" host="ip-172-31-17-11" Dec 16 12:52:14.098836 containerd[1969]: 2025-12-16 12:52:14.067 [INFO][5450] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 12:52:14.098836 containerd[1969]: 2025-12-16 12:52:14.067 [INFO][5450] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.15.196/26] IPv6=[] ContainerID="5320b19ccf48992d200cd511566e1aad0f963b9c40c10690e162872c429ec712" HandleID="k8s-pod-network.5320b19ccf48992d200cd511566e1aad0f963b9c40c10690e162872c429ec712" Workload="ip--172--31--17--11-k8s-calico--kube--controllers--5b97648649--sj8kb-eth0" Dec 16 12:52:14.099505 containerd[1969]: 2025-12-16 12:52:14.070 [INFO][5425] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5320b19ccf48992d200cd511566e1aad0f963b9c40c10690e162872c429ec712" Namespace="calico-system" Pod="calico-kube-controllers-5b97648649-sj8kb" WorkloadEndpoint="ip--172--31--17--11-k8s-calico--kube--controllers--5b97648649--sj8kb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--11-k8s-calico--kube--controllers--5b97648649--sj8kb-eth0", GenerateName:"calico-kube-controllers-5b97648649-", Namespace:"calico-system", SelfLink:"", UID:"97ebcca4-f43d-4a70-b294-e93b18442671", ResourceVersion:"844", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 51, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5b97648649", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-11", ContainerID:"", Pod:"calico-kube-controllers-5b97648649-sj8kb", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.15.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali2c7024f463c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:52:14.099604 containerd[1969]: 2025-12-16 12:52:14.070 [INFO][5425] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.15.196/32] ContainerID="5320b19ccf48992d200cd511566e1aad0f963b9c40c10690e162872c429ec712" Namespace="calico-system" Pod="calico-kube-controllers-5b97648649-sj8kb" WorkloadEndpoint="ip--172--31--17--11-k8s-calico--kube--controllers--5b97648649--sj8kb-eth0" Dec 16 12:52:14.099604 containerd[1969]: 2025-12-16 12:52:14.070 [INFO][5425] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2c7024f463c ContainerID="5320b19ccf48992d200cd511566e1aad0f963b9c40c10690e162872c429ec712" Namespace="calico-system" Pod="calico-kube-controllers-5b97648649-sj8kb" WorkloadEndpoint="ip--172--31--17--11-k8s-calico--kube--controllers--5b97648649--sj8kb-eth0" Dec 16 12:52:14.099604 containerd[1969]: 2025-12-16 12:52:14.077 [INFO][5425] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5320b19ccf48992d200cd511566e1aad0f963b9c40c10690e162872c429ec712" Namespace="calico-system" Pod="calico-kube-controllers-5b97648649-sj8kb" 
WorkloadEndpoint="ip--172--31--17--11-k8s-calico--kube--controllers--5b97648649--sj8kb-eth0" Dec 16 12:52:14.099723 containerd[1969]: 2025-12-16 12:52:14.080 [INFO][5425] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5320b19ccf48992d200cd511566e1aad0f963b9c40c10690e162872c429ec712" Namespace="calico-system" Pod="calico-kube-controllers-5b97648649-sj8kb" WorkloadEndpoint="ip--172--31--17--11-k8s-calico--kube--controllers--5b97648649--sj8kb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--11-k8s-calico--kube--controllers--5b97648649--sj8kb-eth0", GenerateName:"calico-kube-controllers-5b97648649-", Namespace:"calico-system", SelfLink:"", UID:"97ebcca4-f43d-4a70-b294-e93b18442671", ResourceVersion:"844", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 51, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5b97648649", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-11", ContainerID:"5320b19ccf48992d200cd511566e1aad0f963b9c40c10690e162872c429ec712", Pod:"calico-kube-controllers-5b97648649-sj8kb", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.15.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali2c7024f463c", 
MAC:"f2:9a:29:c1:21:78", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:52:14.099817 containerd[1969]: 2025-12-16 12:52:14.093 [INFO][5425] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5320b19ccf48992d200cd511566e1aad0f963b9c40c10690e162872c429ec712" Namespace="calico-system" Pod="calico-kube-controllers-5b97648649-sj8kb" WorkloadEndpoint="ip--172--31--17--11-k8s-calico--kube--controllers--5b97648649--sj8kb-eth0" Dec 16 12:52:14.117000 audit[5472]: NETFILTER_CFG table=filter:131 family=2 entries=44 op=nft_register_chain pid=5472 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 12:52:14.117000 audit[5472]: SYSCALL arch=c000003e syscall=46 success=yes exit=21952 a0=3 a1=7fff68757640 a2=0 a3=7fff6875762c items=0 ppid=5011 pid=5472 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:52:14.117000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 12:52:14.149649 containerd[1969]: time="2025-12-16T12:52:14.149597424Z" level=info msg="connecting to shim 5320b19ccf48992d200cd511566e1aad0f963b9c40c10690e162872c429ec712" address="unix:///run/containerd/s/45e1f9f8e601b9322b8204862d9132e3ca3fd86730ffd8cd5e60dd69edf13e6b" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:52:14.216140 systemd[1]: Started cri-containerd-5320b19ccf48992d200cd511566e1aad0f963b9c40c10690e162872c429ec712.scope - libcontainer container 5320b19ccf48992d200cd511566e1aad0f963b9c40c10690e162872c429ec712. 
Dec 16 12:52:14.263983 systemd-networkd[1548]: calic17bf330188: Link UP Dec 16 12:52:14.268045 systemd-networkd[1548]: calic17bf330188: Gained carrier Dec 16 12:52:14.293071 kubelet[3300]: E1216 12:52:14.292478 3300 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-8gcbc" podUID="39a10cdc-f6c5-430e-ac11-7f9183d0c949" Dec 16 12:52:14.293071 kubelet[3300]: E1216 12:52:14.293003 3300 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b5d97d74b-j9kr5" podUID="07dabb60-b494-485b-be91-7522183aff41" Dec 16 12:52:14.293641 containerd[1969]: 2025-12-16 12:52:13.986 [INFO][5424] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--17--11-k8s-coredns--674b8bbfcf--rscvc-eth0 coredns-674b8bbfcf- kube-system 677a62d7-b648-43b1-ac2f-6337ecae67c0 840 0 2025-12-16 12:51:16 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-17-11 coredns-674b8bbfcf-rscvc eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calic17bf330188 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics 
TCP 9153 0 }] [] }} ContainerID="1d2da8a155c335d3a47bbc68f2029420beedc4842d8f32b72325309d6e24e306" Namespace="kube-system" Pod="coredns-674b8bbfcf-rscvc" WorkloadEndpoint="ip--172--31--17--11-k8s-coredns--674b8bbfcf--rscvc-" Dec 16 12:52:14.293641 containerd[1969]: 2025-12-16 12:52:13.986 [INFO][5424] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1d2da8a155c335d3a47bbc68f2029420beedc4842d8f32b72325309d6e24e306" Namespace="kube-system" Pod="coredns-674b8bbfcf-rscvc" WorkloadEndpoint="ip--172--31--17--11-k8s-coredns--674b8bbfcf--rscvc-eth0" Dec 16 12:52:14.293641 containerd[1969]: 2025-12-16 12:52:14.037 [INFO][5455] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1d2da8a155c335d3a47bbc68f2029420beedc4842d8f32b72325309d6e24e306" HandleID="k8s-pod-network.1d2da8a155c335d3a47bbc68f2029420beedc4842d8f32b72325309d6e24e306" Workload="ip--172--31--17--11-k8s-coredns--674b8bbfcf--rscvc-eth0" Dec 16 12:52:14.294283 containerd[1969]: 2025-12-16 12:52:14.037 [INFO][5455] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="1d2da8a155c335d3a47bbc68f2029420beedc4842d8f32b72325309d6e24e306" HandleID="k8s-pod-network.1d2da8a155c335d3a47bbc68f2029420beedc4842d8f32b72325309d6e24e306" Workload="ip--172--31--17--11-k8s-coredns--674b8bbfcf--rscvc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f6f0), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-17-11", "pod":"coredns-674b8bbfcf-rscvc", "timestamp":"2025-12-16 12:52:14.037134389 +0000 UTC"}, Hostname:"ip-172-31-17-11", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 12:52:14.294283 containerd[1969]: 2025-12-16 12:52:14.037 [INFO][5455] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Dec 16 12:52:14.294283 containerd[1969]: 2025-12-16 12:52:14.067 [INFO][5455] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 12:52:14.294283 containerd[1969]: 2025-12-16 12:52:14.068 [INFO][5455] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-17-11' Dec 16 12:52:14.294283 containerd[1969]: 2025-12-16 12:52:14.135 [INFO][5455] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1d2da8a155c335d3a47bbc68f2029420beedc4842d8f32b72325309d6e24e306" host="ip-172-31-17-11" Dec 16 12:52:14.294283 containerd[1969]: 2025-12-16 12:52:14.144 [INFO][5455] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-17-11" Dec 16 12:52:14.294283 containerd[1969]: 2025-12-16 12:52:14.158 [INFO][5455] ipam/ipam.go 511: Trying affinity for 192.168.15.192/26 host="ip-172-31-17-11" Dec 16 12:52:14.294283 containerd[1969]: 2025-12-16 12:52:14.174 [INFO][5455] ipam/ipam.go 158: Attempting to load block cidr=192.168.15.192/26 host="ip-172-31-17-11" Dec 16 12:52:14.294283 containerd[1969]: 2025-12-16 12:52:14.182 [INFO][5455] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.15.192/26 host="ip-172-31-17-11" Dec 16 12:52:14.294283 containerd[1969]: 2025-12-16 12:52:14.183 [INFO][5455] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.15.192/26 handle="k8s-pod-network.1d2da8a155c335d3a47bbc68f2029420beedc4842d8f32b72325309d6e24e306" host="ip-172-31-17-11" Dec 16 12:52:14.295410 containerd[1969]: 2025-12-16 12:52:14.192 [INFO][5455] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.1d2da8a155c335d3a47bbc68f2029420beedc4842d8f32b72325309d6e24e306 Dec 16 12:52:14.295410 containerd[1969]: 2025-12-16 12:52:14.207 [INFO][5455] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.15.192/26 handle="k8s-pod-network.1d2da8a155c335d3a47bbc68f2029420beedc4842d8f32b72325309d6e24e306" host="ip-172-31-17-11" Dec 16 12:52:14.295410 containerd[1969]: 
2025-12-16 12:52:14.229 [INFO][5455] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.15.197/26] block=192.168.15.192/26 handle="k8s-pod-network.1d2da8a155c335d3a47bbc68f2029420beedc4842d8f32b72325309d6e24e306" host="ip-172-31-17-11" Dec 16 12:52:14.295410 containerd[1969]: 2025-12-16 12:52:14.229 [INFO][5455] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.15.197/26] handle="k8s-pod-network.1d2da8a155c335d3a47bbc68f2029420beedc4842d8f32b72325309d6e24e306" host="ip-172-31-17-11" Dec 16 12:52:14.295410 containerd[1969]: 2025-12-16 12:52:14.229 [INFO][5455] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 12:52:14.295410 containerd[1969]: 2025-12-16 12:52:14.229 [INFO][5455] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.15.197/26] IPv6=[] ContainerID="1d2da8a155c335d3a47bbc68f2029420beedc4842d8f32b72325309d6e24e306" HandleID="k8s-pod-network.1d2da8a155c335d3a47bbc68f2029420beedc4842d8f32b72325309d6e24e306" Workload="ip--172--31--17--11-k8s-coredns--674b8bbfcf--rscvc-eth0" Dec 16 12:52:14.295555 containerd[1969]: 2025-12-16 12:52:14.237 [INFO][5424] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1d2da8a155c335d3a47bbc68f2029420beedc4842d8f32b72325309d6e24e306" Namespace="kube-system" Pod="coredns-674b8bbfcf-rscvc" WorkloadEndpoint="ip--172--31--17--11-k8s-coredns--674b8bbfcf--rscvc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--11-k8s-coredns--674b8bbfcf--rscvc-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"677a62d7-b648-43b1-ac2f-6337ecae67c0", ResourceVersion:"840", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 51, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-11", ContainerID:"", Pod:"coredns-674b8bbfcf-rscvc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.15.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic17bf330188", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:52:14.295555 containerd[1969]: 2025-12-16 12:52:14.237 [INFO][5424] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.15.197/32] ContainerID="1d2da8a155c335d3a47bbc68f2029420beedc4842d8f32b72325309d6e24e306" Namespace="kube-system" Pod="coredns-674b8bbfcf-rscvc" WorkloadEndpoint="ip--172--31--17--11-k8s-coredns--674b8bbfcf--rscvc-eth0" Dec 16 12:52:14.295555 containerd[1969]: 2025-12-16 12:52:14.237 [INFO][5424] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic17bf330188 ContainerID="1d2da8a155c335d3a47bbc68f2029420beedc4842d8f32b72325309d6e24e306" Namespace="kube-system" Pod="coredns-674b8bbfcf-rscvc" WorkloadEndpoint="ip--172--31--17--11-k8s-coredns--674b8bbfcf--rscvc-eth0" Dec 16 12:52:14.295555 containerd[1969]: 2025-12-16 12:52:14.271 [INFO][5424] cni-plugin/dataplane_linux.go 508: 
Disabling IPv4 forwarding ContainerID="1d2da8a155c335d3a47bbc68f2029420beedc4842d8f32b72325309d6e24e306" Namespace="kube-system" Pod="coredns-674b8bbfcf-rscvc" WorkloadEndpoint="ip--172--31--17--11-k8s-coredns--674b8bbfcf--rscvc-eth0" Dec 16 12:52:14.295555 containerd[1969]: 2025-12-16 12:52:14.271 [INFO][5424] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1d2da8a155c335d3a47bbc68f2029420beedc4842d8f32b72325309d6e24e306" Namespace="kube-system" Pod="coredns-674b8bbfcf-rscvc" WorkloadEndpoint="ip--172--31--17--11-k8s-coredns--674b8bbfcf--rscvc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--11-k8s-coredns--674b8bbfcf--rscvc-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"677a62d7-b648-43b1-ac2f-6337ecae67c0", ResourceVersion:"840", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 51, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-11", ContainerID:"1d2da8a155c335d3a47bbc68f2029420beedc4842d8f32b72325309d6e24e306", Pod:"coredns-674b8bbfcf-rscvc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.15.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic17bf330188", MAC:"96:47:df:c5:cd:60", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:52:14.295555 containerd[1969]: 2025-12-16 12:52:14.282 [INFO][5424] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1d2da8a155c335d3a47bbc68f2029420beedc4842d8f32b72325309d6e24e306" Namespace="kube-system" Pod="coredns-674b8bbfcf-rscvc" WorkloadEndpoint="ip--172--31--17--11-k8s-coredns--674b8bbfcf--rscvc-eth0" Dec 16 12:52:14.369494 containerd[1969]: time="2025-12-16T12:52:14.369278309Z" level=info msg="connecting to shim 1d2da8a155c335d3a47bbc68f2029420beedc4842d8f32b72325309d6e24e306" address="unix:///run/containerd/s/3458eea771e1bb310fc87b5d4e65ebc118a8f590be34f6b8ef2f17f22a1b482d" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:52:14.371000 audit: BPF prog-id=231 op=LOAD Dec 16 12:52:14.376000 audit: BPF prog-id=232 op=LOAD Dec 16 12:52:14.376000 audit[5492]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000228238 a2=98 a3=0 items=0 ppid=5481 pid=5492 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:52:14.376000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3533323062313963636634383939326432303063643531313536366531 Dec 16 12:52:14.377000 audit: BPF prog-id=232 op=UNLOAD 
Dec 16 12:52:14.377000 audit[5492]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5481 pid=5492 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:52:14.377000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3533323062313963636634383939326432303063643531313536366531 Dec 16 12:52:14.378000 audit: BPF prog-id=233 op=LOAD Dec 16 12:52:14.378000 audit[5492]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000228488 a2=98 a3=0 items=0 ppid=5481 pid=5492 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:52:14.378000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3533323062313963636634383939326432303063643531313536366531 Dec 16 12:52:14.380000 audit: BPF prog-id=234 op=LOAD Dec 16 12:52:14.380000 audit[5492]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000228218 a2=98 a3=0 items=0 ppid=5481 pid=5492 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:52:14.380000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3533323062313963636634383939326432303063643531313536366531 Dec 16 
12:52:14.381000 audit: BPF prog-id=234 op=UNLOAD Dec 16 12:52:14.381000 audit[5492]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5481 pid=5492 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:52:14.381000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3533323062313963636634383939326432303063643531313536366531 Dec 16 12:52:14.381000 audit: BPF prog-id=233 op=UNLOAD Dec 16 12:52:14.381000 audit[5492]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5481 pid=5492 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:52:14.381000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3533323062313963636634383939326432303063643531313536366531 Dec 16 12:52:14.381000 audit: BPF prog-id=235 op=LOAD Dec 16 12:52:14.381000 audit[5492]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0002286e8 a2=98 a3=0 items=0 ppid=5481 pid=5492 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:52:14.381000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3533323062313963636634383939326432303063643531313536366531 Dec 16 12:52:14.403000 audit[5537]: NETFILTER_CFG table=filter:132 family=2 entries=20 op=nft_register_rule pid=5537 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:52:14.403000 audit[5537]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffcd6cfe510 a2=0 a3=7ffcd6cfe4fc items=0 ppid=3722 pid=5537 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:52:14.403000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:52:14.412000 audit[5537]: NETFILTER_CFG table=nat:133 family=2 entries=14 op=nft_register_rule pid=5537 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:52:14.412000 audit[5537]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffcd6cfe510 a2=0 a3=0 items=0 ppid=3722 pid=5537 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:52:14.412000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:52:14.449497 systemd[1]: Started cri-containerd-1d2da8a155c335d3a47bbc68f2029420beedc4842d8f32b72325309d6e24e306.scope - libcontainer container 1d2da8a155c335d3a47bbc68f2029420beedc4842d8f32b72325309d6e24e306. 
Dec 16 12:52:14.467000 audit[5558]: NETFILTER_CFG table=filter:134 family=2 entries=60 op=nft_register_chain pid=5558 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 12:52:14.467000 audit[5558]: SYSCALL arch=c000003e syscall=46 success=yes exit=28968 a0=3 a1=7ffe3ce87430 a2=0 a3=7ffe3ce8741c items=0 ppid=5011 pid=5558 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:52:14.467000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 12:52:14.492000 audit: BPF prog-id=236 op=LOAD Dec 16 12:52:14.493000 audit: BPF prog-id=237 op=LOAD Dec 16 12:52:14.493000 audit[5544]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=5531 pid=5544 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:52:14.493000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3164326461386131353563333335643361343762626336386632303239 Dec 16 12:52:14.493000 audit: BPF prog-id=237 op=UNLOAD Dec 16 12:52:14.493000 audit[5544]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5531 pid=5544 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:52:14.493000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3164326461386131353563333335643361343762626336386632303239 Dec 16 12:52:14.493000 audit: BPF prog-id=238 op=LOAD Dec 16 12:52:14.493000 audit[5544]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=5531 pid=5544 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:52:14.493000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3164326461386131353563333335643361343762626336386632303239 Dec 16 12:52:14.493000 audit: BPF prog-id=239 op=LOAD Dec 16 12:52:14.493000 audit[5544]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=5531 pid=5544 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:52:14.493000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3164326461386131353563333335643361343762626336386632303239 Dec 16 12:52:14.493000 audit: BPF prog-id=239 op=UNLOAD Dec 16 12:52:14.493000 audit[5544]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5531 pid=5544 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Dec 16 12:52:14.493000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3164326461386131353563333335643361343762626336386632303239 Dec 16 12:52:14.493000 audit: BPF prog-id=238 op=UNLOAD Dec 16 12:52:14.493000 audit[5544]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5531 pid=5544 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:52:14.493000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3164326461386131353563333335643361343762626336386632303239 Dec 16 12:52:14.493000 audit: BPF prog-id=240 op=LOAD Dec 16 12:52:14.493000 audit[5544]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=5531 pid=5544 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:52:14.493000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3164326461386131353563333335643361343762626336386632303239 Dec 16 12:52:14.519982 containerd[1969]: time="2025-12-16T12:52:14.519941882Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5b97648649-sj8kb,Uid:97ebcca4-f43d-4a70-b294-e93b18442671,Namespace:calico-system,Attempt:0,} returns sandbox id 
\"5320b19ccf48992d200cd511566e1aad0f963b9c40c10690e162872c429ec712\"" Dec 16 12:52:14.523531 containerd[1969]: time="2025-12-16T12:52:14.523493505Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 16 12:52:14.553928 containerd[1969]: time="2025-12-16T12:52:14.553881944Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-rscvc,Uid:677a62d7-b648-43b1-ac2f-6337ecae67c0,Namespace:kube-system,Attempt:0,} returns sandbox id \"1d2da8a155c335d3a47bbc68f2029420beedc4842d8f32b72325309d6e24e306\"" Dec 16 12:52:14.561404 containerd[1969]: time="2025-12-16T12:52:14.561364430Z" level=info msg="CreateContainer within sandbox \"1d2da8a155c335d3a47bbc68f2029420beedc4842d8f32b72325309d6e24e306\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 16 12:52:14.584166 containerd[1969]: time="2025-12-16T12:52:14.584113391Z" level=info msg="Container 68248ba7d91e68c10be95044d8ad963ed0f3583ab7604badc8797862941e4dd3: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:52:14.596825 containerd[1969]: time="2025-12-16T12:52:14.596783681Z" level=info msg="CreateContainer within sandbox \"1d2da8a155c335d3a47bbc68f2029420beedc4842d8f32b72325309d6e24e306\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"68248ba7d91e68c10be95044d8ad963ed0f3583ab7604badc8797862941e4dd3\"" Dec 16 12:52:14.597788 containerd[1969]: time="2025-12-16T12:52:14.597746182Z" level=info msg="StartContainer for \"68248ba7d91e68c10be95044d8ad963ed0f3583ab7604badc8797862941e4dd3\"" Dec 16 12:52:14.602752 containerd[1969]: time="2025-12-16T12:52:14.602712430Z" level=info msg="connecting to shim 68248ba7d91e68c10be95044d8ad963ed0f3583ab7604badc8797862941e4dd3" address="unix:///run/containerd/s/3458eea771e1bb310fc87b5d4e65ebc118a8f590be34f6b8ef2f17f22a1b482d" protocol=ttrpc version=3 Dec 16 12:52:14.626185 systemd[1]: Started cri-containerd-68248ba7d91e68c10be95044d8ad963ed0f3583ab7604badc8797862941e4dd3.scope - libcontainer 
container 68248ba7d91e68c10be95044d8ad963ed0f3583ab7604badc8797862941e4dd3. Dec 16 12:52:14.643000 audit: BPF prog-id=241 op=LOAD Dec 16 12:52:14.644000 audit: BPF prog-id=242 op=LOAD Dec 16 12:52:14.644000 audit[5579]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=5531 pid=5579 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:52:14.644000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3638323438626137643931653638633130626539353034346438616439 Dec 16 12:52:14.644000 audit: BPF prog-id=242 op=UNLOAD Dec 16 12:52:14.644000 audit[5579]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5531 pid=5579 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:52:14.644000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3638323438626137643931653638633130626539353034346438616439 Dec 16 12:52:14.645000 audit: BPF prog-id=243 op=LOAD Dec 16 12:52:14.645000 audit[5579]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=5531 pid=5579 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:52:14.645000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3638323438626137643931653638633130626539353034346438616439 Dec 16 12:52:14.645000 audit: BPF prog-id=244 op=LOAD Dec 16 12:52:14.645000 audit[5579]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=5531 pid=5579 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:52:14.645000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3638323438626137643931653638633130626539353034346438616439 Dec 16 12:52:14.645000 audit: BPF prog-id=244 op=UNLOAD Dec 16 12:52:14.645000 audit[5579]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5531 pid=5579 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:52:14.645000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3638323438626137643931653638633130626539353034346438616439 Dec 16 12:52:14.645000 audit: BPF prog-id=243 op=UNLOAD Dec 16 12:52:14.645000 audit[5579]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5531 pid=5579 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 
12:52:14.645000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3638323438626137643931653638633130626539353034346438616439 Dec 16 12:52:14.645000 audit: BPF prog-id=245 op=LOAD Dec 16 12:52:14.645000 audit[5579]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=5531 pid=5579 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:52:14.645000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3638323438626137643931653638633130626539353034346438616439 Dec 16 12:52:14.677215 containerd[1969]: time="2025-12-16T12:52:14.677177005Z" level=info msg="StartContainer for \"68248ba7d91e68c10be95044d8ad963ed0f3583ab7604badc8797862941e4dd3\" returns successfully" Dec 16 12:52:14.778427 containerd[1969]: time="2025-12-16T12:52:14.778308146Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:52:14.782828 containerd[1969]: time="2025-12-16T12:52:14.780435583Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 16 12:52:14.782828 containerd[1969]: time="2025-12-16T12:52:14.780659947Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 16 12:52:14.783027 kubelet[3300]: E1216 12:52:14.780853 3300 log.go:32] "PullImage 
from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 12:52:14.783027 kubelet[3300]: E1216 12:52:14.780938 3300 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 12:52:14.783027 kubelet[3300]: E1216 12:52:14.781070 3300 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z28sw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,
SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-5b97648649-sj8kb_calico-system(97ebcca4-f43d-4a70-b294-e93b18442671): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 16 12:52:14.783027 kubelet[3300]: E1216 12:52:14.782439 3300 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5b97648649-sj8kb" podUID="97ebcca4-f43d-4a70-b294-e93b18442671" Dec 16 12:52:14.806240 systemd-networkd[1548]: calie0fbb88eb5d: Gained IPv6LL Dec 16 12:52:14.921763 containerd[1969]: time="2025-12-16T12:52:14.921712759Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6fb4c65f97-cxgtk,Uid:5fa2e950-afe9-46cc-8df6-b014aa90c32b,Namespace:calico-apiserver,Attempt:0,}" Dec 16 12:52:15.115807 systemd-networkd[1548]: cali923246644ca: Link UP Dec 16 12:52:15.119218 systemd-networkd[1548]: cali923246644ca: Gained carrier Dec 16 12:52:15.134769 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4120596743.mount: Deactivated successfully. Dec 16 12:52:15.166022 containerd[1969]: 2025-12-16 12:52:15.004 [INFO][5610] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--17--11-k8s-calico--apiserver--6fb4c65f97--cxgtk-eth0 calico-apiserver-6fb4c65f97- calico-apiserver 5fa2e950-afe9-46cc-8df6-b014aa90c32b 836 0 2025-12-16 12:51:26 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6fb4c65f97 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-17-11 calico-apiserver-6fb4c65f97-cxgtk eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali923246644ca [] [] }} ContainerID="dcc06a2af72b89da52472a8afdbecd9757922d4313d72d236078d9ea3e37f451" Namespace="calico-apiserver" Pod="calico-apiserver-6fb4c65f97-cxgtk" WorkloadEndpoint="ip--172--31--17--11-k8s-calico--apiserver--6fb4c65f97--cxgtk-" Dec 16 12:52:15.166022 containerd[1969]: 2025-12-16 12:52:15.004 [INFO][5610] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="dcc06a2af72b89da52472a8afdbecd9757922d4313d72d236078d9ea3e37f451" Namespace="calico-apiserver" Pod="calico-apiserver-6fb4c65f97-cxgtk" WorkloadEndpoint="ip--172--31--17--11-k8s-calico--apiserver--6fb4c65f97--cxgtk-eth0" Dec 16 12:52:15.166022 containerd[1969]: 2025-12-16 12:52:15.046 [INFO][5624] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="dcc06a2af72b89da52472a8afdbecd9757922d4313d72d236078d9ea3e37f451" HandleID="k8s-pod-network.dcc06a2af72b89da52472a8afdbecd9757922d4313d72d236078d9ea3e37f451" Workload="ip--172--31--17--11-k8s-calico--apiserver--6fb4c65f97--cxgtk-eth0" Dec 16 12:52:15.166022 containerd[1969]: 2025-12-16 12:52:15.046 [INFO][5624] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="dcc06a2af72b89da52472a8afdbecd9757922d4313d72d236078d9ea3e37f451" HandleID="k8s-pod-network.dcc06a2af72b89da52472a8afdbecd9757922d4313d72d236078d9ea3e37f451" Workload="ip--172--31--17--11-k8s-calico--apiserver--6fb4c65f97--cxgtk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d56e0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-17-11", "pod":"calico-apiserver-6fb4c65f97-cxgtk", "timestamp":"2025-12-16 12:52:15.046464207 +0000 UTC"}, Hostname:"ip-172-31-17-11", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 12:52:15.166022 containerd[1969]: 2025-12-16 12:52:15.046 [INFO][5624] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 12:52:15.166022 containerd[1969]: 2025-12-16 12:52:15.046 [INFO][5624] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 12:52:15.166022 containerd[1969]: 2025-12-16 12:52:15.046 [INFO][5624] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-17-11' Dec 16 12:52:15.166022 containerd[1969]: 2025-12-16 12:52:15.057 [INFO][5624] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.dcc06a2af72b89da52472a8afdbecd9757922d4313d72d236078d9ea3e37f451" host="ip-172-31-17-11" Dec 16 12:52:15.166022 containerd[1969]: 2025-12-16 12:52:15.067 [INFO][5624] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-17-11" Dec 16 12:52:15.166022 containerd[1969]: 2025-12-16 12:52:15.077 [INFO][5624] ipam/ipam.go 511: Trying affinity for 192.168.15.192/26 host="ip-172-31-17-11" Dec 16 12:52:15.166022 containerd[1969]: 2025-12-16 12:52:15.085 [INFO][5624] ipam/ipam.go 158: Attempting to load block cidr=192.168.15.192/26 host="ip-172-31-17-11" Dec 16 12:52:15.166022 containerd[1969]: 2025-12-16 12:52:15.088 [INFO][5624] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.15.192/26 host="ip-172-31-17-11" Dec 16 12:52:15.166022 containerd[1969]: 2025-12-16 12:52:15.088 [INFO][5624] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.15.192/26 handle="k8s-pod-network.dcc06a2af72b89da52472a8afdbecd9757922d4313d72d236078d9ea3e37f451" host="ip-172-31-17-11" Dec 16 12:52:15.166022 containerd[1969]: 2025-12-16 12:52:15.090 [INFO][5624] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.dcc06a2af72b89da52472a8afdbecd9757922d4313d72d236078d9ea3e37f451 Dec 16 12:52:15.166022 containerd[1969]: 2025-12-16 12:52:15.095 [INFO][5624] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.15.192/26 handle="k8s-pod-network.dcc06a2af72b89da52472a8afdbecd9757922d4313d72d236078d9ea3e37f451" host="ip-172-31-17-11" Dec 16 12:52:15.166022 containerd[1969]: 2025-12-16 12:52:15.104 [INFO][5624] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.15.198/26] block=192.168.15.192/26 
handle="k8s-pod-network.dcc06a2af72b89da52472a8afdbecd9757922d4313d72d236078d9ea3e37f451" host="ip-172-31-17-11" Dec 16 12:52:15.166022 containerd[1969]: 2025-12-16 12:52:15.104 [INFO][5624] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.15.198/26] handle="k8s-pod-network.dcc06a2af72b89da52472a8afdbecd9757922d4313d72d236078d9ea3e37f451" host="ip-172-31-17-11" Dec 16 12:52:15.166022 containerd[1969]: 2025-12-16 12:52:15.104 [INFO][5624] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 12:52:15.166022 containerd[1969]: 2025-12-16 12:52:15.104 [INFO][5624] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.15.198/26] IPv6=[] ContainerID="dcc06a2af72b89da52472a8afdbecd9757922d4313d72d236078d9ea3e37f451" HandleID="k8s-pod-network.dcc06a2af72b89da52472a8afdbecd9757922d4313d72d236078d9ea3e37f451" Workload="ip--172--31--17--11-k8s-calico--apiserver--6fb4c65f97--cxgtk-eth0" Dec 16 12:52:15.167437 containerd[1969]: 2025-12-16 12:52:15.107 [INFO][5610] cni-plugin/k8s.go 418: Populated endpoint ContainerID="dcc06a2af72b89da52472a8afdbecd9757922d4313d72d236078d9ea3e37f451" Namespace="calico-apiserver" Pod="calico-apiserver-6fb4c65f97-cxgtk" WorkloadEndpoint="ip--172--31--17--11-k8s-calico--apiserver--6fb4c65f97--cxgtk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--11-k8s-calico--apiserver--6fb4c65f97--cxgtk-eth0", GenerateName:"calico-apiserver-6fb4c65f97-", Namespace:"calico-apiserver", SelfLink:"", UID:"5fa2e950-afe9-46cc-8df6-b014aa90c32b", ResourceVersion:"836", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 51, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6fb4c65f97", 
"projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-11", ContainerID:"", Pod:"calico-apiserver-6fb4c65f97-cxgtk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.15.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali923246644ca", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:52:15.167437 containerd[1969]: 2025-12-16 12:52:15.108 [INFO][5610] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.15.198/32] ContainerID="dcc06a2af72b89da52472a8afdbecd9757922d4313d72d236078d9ea3e37f451" Namespace="calico-apiserver" Pod="calico-apiserver-6fb4c65f97-cxgtk" WorkloadEndpoint="ip--172--31--17--11-k8s-calico--apiserver--6fb4c65f97--cxgtk-eth0" Dec 16 12:52:15.167437 containerd[1969]: 2025-12-16 12:52:15.108 [INFO][5610] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali923246644ca ContainerID="dcc06a2af72b89da52472a8afdbecd9757922d4313d72d236078d9ea3e37f451" Namespace="calico-apiserver" Pod="calico-apiserver-6fb4c65f97-cxgtk" WorkloadEndpoint="ip--172--31--17--11-k8s-calico--apiserver--6fb4c65f97--cxgtk-eth0" Dec 16 12:52:15.167437 containerd[1969]: 2025-12-16 12:52:15.116 [INFO][5610] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="dcc06a2af72b89da52472a8afdbecd9757922d4313d72d236078d9ea3e37f451" Namespace="calico-apiserver" Pod="calico-apiserver-6fb4c65f97-cxgtk" WorkloadEndpoint="ip--172--31--17--11-k8s-calico--apiserver--6fb4c65f97--cxgtk-eth0" Dec 16 
12:52:15.167437 containerd[1969]: 2025-12-16 12:52:15.117 [INFO][5610] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="dcc06a2af72b89da52472a8afdbecd9757922d4313d72d236078d9ea3e37f451" Namespace="calico-apiserver" Pod="calico-apiserver-6fb4c65f97-cxgtk" WorkloadEndpoint="ip--172--31--17--11-k8s-calico--apiserver--6fb4c65f97--cxgtk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--11-k8s-calico--apiserver--6fb4c65f97--cxgtk-eth0", GenerateName:"calico-apiserver-6fb4c65f97-", Namespace:"calico-apiserver", SelfLink:"", UID:"5fa2e950-afe9-46cc-8df6-b014aa90c32b", ResourceVersion:"836", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 51, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6fb4c65f97", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-11", ContainerID:"dcc06a2af72b89da52472a8afdbecd9757922d4313d72d236078d9ea3e37f451", Pod:"calico-apiserver-6fb4c65f97-cxgtk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.15.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali923246644ca", MAC:"8a:c0:b3:ac:ff:1d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 
12:52:15.167437 containerd[1969]: 2025-12-16 12:52:15.155 [INFO][5610] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="dcc06a2af72b89da52472a8afdbecd9757922d4313d72d236078d9ea3e37f451" Namespace="calico-apiserver" Pod="calico-apiserver-6fb4c65f97-cxgtk" WorkloadEndpoint="ip--172--31--17--11-k8s-calico--apiserver--6fb4c65f97--cxgtk-eth0" Dec 16 12:52:15.202000 audit[5639]: NETFILTER_CFG table=filter:135 family=2 entries=49 op=nft_register_chain pid=5639 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 12:52:15.202000 audit[5639]: SYSCALL arch=c000003e syscall=46 success=yes exit=25436 a0=3 a1=7ffe14c02bf0 a2=0 a3=7ffe14c02bdc items=0 ppid=5011 pid=5639 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:52:15.202000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 12:52:15.231494 containerd[1969]: time="2025-12-16T12:52:15.231406636Z" level=info msg="connecting to shim dcc06a2af72b89da52472a8afdbecd9757922d4313d72d236078d9ea3e37f451" address="unix:///run/containerd/s/4419219863d3848214da86b16bc3adb25d190c3da54f04ace17a62ee122b163a" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:52:15.266227 systemd[1]: Started cri-containerd-dcc06a2af72b89da52472a8afdbecd9757922d4313d72d236078d9ea3e37f451.scope - libcontainer container dcc06a2af72b89da52472a8afdbecd9757922d4313d72d236078d9ea3e37f451. 
Dec 16 12:52:15.280000 audit: BPF prog-id=246 op=LOAD Dec 16 12:52:15.280000 audit: BPF prog-id=247 op=LOAD Dec 16 12:52:15.280000 audit[5659]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000138238 a2=98 a3=0 items=0 ppid=5648 pid=5659 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:52:15.280000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6463633036613261663732623839646135323437326138616664626563 Dec 16 12:52:15.281000 audit: BPF prog-id=247 op=UNLOAD Dec 16 12:52:15.281000 audit[5659]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=5648 pid=5659 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:52:15.281000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6463633036613261663732623839646135323437326138616664626563 Dec 16 12:52:15.281000 audit: BPF prog-id=248 op=LOAD Dec 16 12:52:15.281000 audit[5659]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000138488 a2=98 a3=0 items=0 ppid=5648 pid=5659 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:52:15.281000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6463633036613261663732623839646135323437326138616664626563 Dec 16 12:52:15.281000 audit: BPF prog-id=249 op=LOAD Dec 16 12:52:15.281000 audit[5659]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000138218 a2=98 a3=0 items=0 ppid=5648 pid=5659 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:52:15.281000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6463633036613261663732623839646135323437326138616664626563 Dec 16 12:52:15.281000 audit: BPF prog-id=249 op=UNLOAD Dec 16 12:52:15.281000 audit[5659]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=5648 pid=5659 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:52:15.281000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6463633036613261663732623839646135323437326138616664626563 Dec 16 12:52:15.281000 audit: BPF prog-id=248 op=UNLOAD Dec 16 12:52:15.281000 audit[5659]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=5648 pid=5659 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 
12:52:15.281000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6463633036613261663732623839646135323437326138616664626563 Dec 16 12:52:15.281000 audit: BPF prog-id=250 op=LOAD Dec 16 12:52:15.281000 audit[5659]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001386e8 a2=98 a3=0 items=0 ppid=5648 pid=5659 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:52:15.281000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6463633036613261663732623839646135323437326138616664626563 Dec 16 12:52:15.312381 kubelet[3300]: E1216 12:52:15.311855 3300 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5b97648649-sj8kb" podUID="97ebcca4-f43d-4a70-b294-e93b18442671" Dec 16 12:52:15.367299 kubelet[3300]: I1216 12:52:15.337523 3300 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-rscvc" podStartSLOduration=59.3375006 podStartE2EDuration="59.3375006s" podCreationTimestamp="2025-12-16 12:51:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-12-16 12:52:15.332784767 +0000 UTC m=+66.569886712" watchObservedRunningTime="2025-12-16 12:52:15.3375006 +0000 UTC m=+66.574602546" Dec 16 12:52:15.379726 containerd[1969]: time="2025-12-16T12:52:15.379668054Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6fb4c65f97-cxgtk,Uid:5fa2e950-afe9-46cc-8df6-b014aa90c32b,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"dcc06a2af72b89da52472a8afdbecd9757922d4313d72d236078d9ea3e37f451\"" Dec 16 12:52:15.385009 systemd-networkd[1548]: calic17bf330188: Gained IPv6LL Dec 16 12:52:15.396556 containerd[1969]: time="2025-12-16T12:52:15.396283369Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 12:52:15.402000 audit[5690]: NETFILTER_CFG table=filter:136 family=2 entries=20 op=nft_register_rule pid=5690 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:52:15.402000 audit[5690]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffc1a962dc0 a2=0 a3=7ffc1a962dac items=0 ppid=3722 pid=5690 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:52:15.402000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:52:15.411000 audit[5690]: NETFILTER_CFG table=nat:137 family=2 entries=14 op=nft_register_rule pid=5690 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:52:15.411000 audit[5690]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffc1a962dc0 a2=0 a3=0 items=0 ppid=3722 pid=5690 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:52:15.411000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:52:15.446038 systemd-networkd[1548]: cali2c7024f463c: Gained IPv6LL Dec 16 12:52:15.460000 audit[5692]: NETFILTER_CFG table=filter:138 family=2 entries=17 op=nft_register_rule pid=5692 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:52:15.460000 audit[5692]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffc6f7d5fd0 a2=0 a3=7ffc6f7d5fbc items=0 ppid=3722 pid=5692 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:52:15.460000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:52:15.465000 audit[5692]: NETFILTER_CFG table=nat:139 family=2 entries=35 op=nft_register_chain pid=5692 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:52:15.465000 audit[5692]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffc6f7d5fd0 a2=0 a3=7ffc6f7d5fbc items=0 ppid=3722 pid=5692 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:52:15.465000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:52:15.655871 containerd[1969]: time="2025-12-16T12:52:15.655825578Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:52:15.658033 containerd[1969]: time="2025-12-16T12:52:15.657985877Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 12:52:15.658828 containerd[1969]: time="2025-12-16T12:52:15.658005706Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 12:52:15.658895 kubelet[3300]: E1216 12:52:15.658253 3300 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:52:15.658895 kubelet[3300]: E1216 12:52:15.658310 3300 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:52:15.658895 kubelet[3300]: E1216 12:52:15.658449 3300 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hzbm2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6fb4c65f97-cxgtk_calico-apiserver(5fa2e950-afe9-46cc-8df6-b014aa90c32b): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 12:52:15.659635 kubelet[3300]: E1216 12:52:15.659602 3300 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6fb4c65f97-cxgtk" podUID="5fa2e950-afe9-46cc-8df6-b014aa90c32b" Dec 16 12:52:15.922332 containerd[1969]: time="2025-12-16T12:52:15.922143005Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6fb4c65f97-rg8xf,Uid:631dcf21-0085-4dd5-b7a8-35fb5c10f8ab,Namespace:calico-apiserver,Attempt:0,}" Dec 16 12:52:16.075669 systemd-networkd[1548]: cali1c4aa2d28d5: Link UP Dec 16 12:52:16.078956 systemd-networkd[1548]: cali1c4aa2d28d5: Gained carrier Dec 16 12:52:16.103345 containerd[1969]: 2025-12-16 12:52:15.968 [INFO][5694] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--17--11-k8s-calico--apiserver--6fb4c65f97--rg8xf-eth0 calico-apiserver-6fb4c65f97- calico-apiserver 631dcf21-0085-4dd5-b7a8-35fb5c10f8ab 846 0 2025-12-16 12:51:26 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6fb4c65f97 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-17-11 calico-apiserver-6fb4c65f97-rg8xf eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali1c4aa2d28d5 [] [] }} ContainerID="fe161848def4bbb379c42be238c7e9cded7778e28d7068a8e9aa82e29805650e" Namespace="calico-apiserver" 
Pod="calico-apiserver-6fb4c65f97-rg8xf" WorkloadEndpoint="ip--172--31--17--11-k8s-calico--apiserver--6fb4c65f97--rg8xf-" Dec 16 12:52:16.103345 containerd[1969]: 2025-12-16 12:52:15.969 [INFO][5694] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="fe161848def4bbb379c42be238c7e9cded7778e28d7068a8e9aa82e29805650e" Namespace="calico-apiserver" Pod="calico-apiserver-6fb4c65f97-rg8xf" WorkloadEndpoint="ip--172--31--17--11-k8s-calico--apiserver--6fb4c65f97--rg8xf-eth0" Dec 16 12:52:16.103345 containerd[1969]: 2025-12-16 12:52:16.015 [INFO][5705] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fe161848def4bbb379c42be238c7e9cded7778e28d7068a8e9aa82e29805650e" HandleID="k8s-pod-network.fe161848def4bbb379c42be238c7e9cded7778e28d7068a8e9aa82e29805650e" Workload="ip--172--31--17--11-k8s-calico--apiserver--6fb4c65f97--rg8xf-eth0" Dec 16 12:52:16.103345 containerd[1969]: 2025-12-16 12:52:16.015 [INFO][5705] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="fe161848def4bbb379c42be238c7e9cded7778e28d7068a8e9aa82e29805650e" HandleID="k8s-pod-network.fe161848def4bbb379c42be238c7e9cded7778e28d7068a8e9aa82e29805650e" Workload="ip--172--31--17--11-k8s-calico--apiserver--6fb4c65f97--rg8xf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5680), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-17-11", "pod":"calico-apiserver-6fb4c65f97-rg8xf", "timestamp":"2025-12-16 12:52:16.015248707 +0000 UTC"}, Hostname:"ip-172-31-17-11", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 12:52:16.103345 containerd[1969]: 2025-12-16 12:52:16.015 [INFO][5705] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Dec 16 12:52:16.103345 containerd[1969]: 2025-12-16 12:52:16.015 [INFO][5705] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 12:52:16.103345 containerd[1969]: 2025-12-16 12:52:16.015 [INFO][5705] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-17-11' Dec 16 12:52:16.103345 containerd[1969]: 2025-12-16 12:52:16.025 [INFO][5705] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.fe161848def4bbb379c42be238c7e9cded7778e28d7068a8e9aa82e29805650e" host="ip-172-31-17-11" Dec 16 12:52:16.103345 containerd[1969]: 2025-12-16 12:52:16.031 [INFO][5705] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-17-11" Dec 16 12:52:16.103345 containerd[1969]: 2025-12-16 12:52:16.039 [INFO][5705] ipam/ipam.go 511: Trying affinity for 192.168.15.192/26 host="ip-172-31-17-11" Dec 16 12:52:16.103345 containerd[1969]: 2025-12-16 12:52:16.046 [INFO][5705] ipam/ipam.go 158: Attempting to load block cidr=192.168.15.192/26 host="ip-172-31-17-11" Dec 16 12:52:16.103345 containerd[1969]: 2025-12-16 12:52:16.051 [INFO][5705] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.15.192/26 host="ip-172-31-17-11" Dec 16 12:52:16.103345 containerd[1969]: 2025-12-16 12:52:16.051 [INFO][5705] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.15.192/26 handle="k8s-pod-network.fe161848def4bbb379c42be238c7e9cded7778e28d7068a8e9aa82e29805650e" host="ip-172-31-17-11" Dec 16 12:52:16.103345 containerd[1969]: 2025-12-16 12:52:16.053 [INFO][5705] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.fe161848def4bbb379c42be238c7e9cded7778e28d7068a8e9aa82e29805650e Dec 16 12:52:16.103345 containerd[1969]: 2025-12-16 12:52:16.058 [INFO][5705] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.15.192/26 handle="k8s-pod-network.fe161848def4bbb379c42be238c7e9cded7778e28d7068a8e9aa82e29805650e" host="ip-172-31-17-11" Dec 16 12:52:16.103345 containerd[1969]: 
2025-12-16 12:52:16.067 [INFO][5705] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.15.199/26] block=192.168.15.192/26 handle="k8s-pod-network.fe161848def4bbb379c42be238c7e9cded7778e28d7068a8e9aa82e29805650e" host="ip-172-31-17-11" Dec 16 12:52:16.103345 containerd[1969]: 2025-12-16 12:52:16.067 [INFO][5705] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.15.199/26] handle="k8s-pod-network.fe161848def4bbb379c42be238c7e9cded7778e28d7068a8e9aa82e29805650e" host="ip-172-31-17-11" Dec 16 12:52:16.103345 containerd[1969]: 2025-12-16 12:52:16.067 [INFO][5705] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 12:52:16.103345 containerd[1969]: 2025-12-16 12:52:16.067 [INFO][5705] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.15.199/26] IPv6=[] ContainerID="fe161848def4bbb379c42be238c7e9cded7778e28d7068a8e9aa82e29805650e" HandleID="k8s-pod-network.fe161848def4bbb379c42be238c7e9cded7778e28d7068a8e9aa82e29805650e" Workload="ip--172--31--17--11-k8s-calico--apiserver--6fb4c65f97--rg8xf-eth0" Dec 16 12:52:16.104264 containerd[1969]: 2025-12-16 12:52:16.071 [INFO][5694] cni-plugin/k8s.go 418: Populated endpoint ContainerID="fe161848def4bbb379c42be238c7e9cded7778e28d7068a8e9aa82e29805650e" Namespace="calico-apiserver" Pod="calico-apiserver-6fb4c65f97-rg8xf" WorkloadEndpoint="ip--172--31--17--11-k8s-calico--apiserver--6fb4c65f97--rg8xf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--11-k8s-calico--apiserver--6fb4c65f97--rg8xf-eth0", GenerateName:"calico-apiserver-6fb4c65f97-", Namespace:"calico-apiserver", SelfLink:"", UID:"631dcf21-0085-4dd5-b7a8-35fb5c10f8ab", ResourceVersion:"846", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 51, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", 
"app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6fb4c65f97", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-11", ContainerID:"", Pod:"calico-apiserver-6fb4c65f97-rg8xf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.15.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1c4aa2d28d5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:52:16.104264 containerd[1969]: 2025-12-16 12:52:16.071 [INFO][5694] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.15.199/32] ContainerID="fe161848def4bbb379c42be238c7e9cded7778e28d7068a8e9aa82e29805650e" Namespace="calico-apiserver" Pod="calico-apiserver-6fb4c65f97-rg8xf" WorkloadEndpoint="ip--172--31--17--11-k8s-calico--apiserver--6fb4c65f97--rg8xf-eth0" Dec 16 12:52:16.104264 containerd[1969]: 2025-12-16 12:52:16.071 [INFO][5694] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1c4aa2d28d5 ContainerID="fe161848def4bbb379c42be238c7e9cded7778e28d7068a8e9aa82e29805650e" Namespace="calico-apiserver" Pod="calico-apiserver-6fb4c65f97-rg8xf" WorkloadEndpoint="ip--172--31--17--11-k8s-calico--apiserver--6fb4c65f97--rg8xf-eth0" Dec 16 12:52:16.104264 containerd[1969]: 2025-12-16 12:52:16.080 [INFO][5694] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fe161848def4bbb379c42be238c7e9cded7778e28d7068a8e9aa82e29805650e" Namespace="calico-apiserver" 
Pod="calico-apiserver-6fb4c65f97-rg8xf" WorkloadEndpoint="ip--172--31--17--11-k8s-calico--apiserver--6fb4c65f97--rg8xf-eth0" Dec 16 12:52:16.104264 containerd[1969]: 2025-12-16 12:52:16.081 [INFO][5694] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="fe161848def4bbb379c42be238c7e9cded7778e28d7068a8e9aa82e29805650e" Namespace="calico-apiserver" Pod="calico-apiserver-6fb4c65f97-rg8xf" WorkloadEndpoint="ip--172--31--17--11-k8s-calico--apiserver--6fb4c65f97--rg8xf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--11-k8s-calico--apiserver--6fb4c65f97--rg8xf-eth0", GenerateName:"calico-apiserver-6fb4c65f97-", Namespace:"calico-apiserver", SelfLink:"", UID:"631dcf21-0085-4dd5-b7a8-35fb5c10f8ab", ResourceVersion:"846", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 51, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6fb4c65f97", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-11", ContainerID:"fe161848def4bbb379c42be238c7e9cded7778e28d7068a8e9aa82e29805650e", Pod:"calico-apiserver-6fb4c65f97-rg8xf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.15.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1c4aa2d28d5", MAC:"76:70:40:05:39:de", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:52:16.104264 containerd[1969]: 2025-12-16 12:52:16.098 [INFO][5694] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="fe161848def4bbb379c42be238c7e9cded7778e28d7068a8e9aa82e29805650e" Namespace="calico-apiserver" Pod="calico-apiserver-6fb4c65f97-rg8xf" WorkloadEndpoint="ip--172--31--17--11-k8s-calico--apiserver--6fb4c65f97--rg8xf-eth0" Dec 16 12:52:16.121000 audit[5720]: NETFILTER_CFG table=filter:140 family=2 entries=53 op=nft_register_chain pid=5720 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 12:52:16.121000 audit[5720]: SYSCALL arch=c000003e syscall=46 success=yes exit=26624 a0=3 a1=7fffc7aeee30 a2=0 a3=7fffc7aeee1c items=0 ppid=5011 pid=5720 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:52:16.121000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 12:52:16.149964 containerd[1969]: time="2025-12-16T12:52:16.148618503Z" level=info msg="connecting to shim fe161848def4bbb379c42be238c7e9cded7778e28d7068a8e9aa82e29805650e" address="unix:///run/containerd/s/e5540d3f19cc4fbd26a0452d8da579d64df58418fa86f09f16d2abe2d71f04c5" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:52:16.150139 systemd-networkd[1548]: cali923246644ca: Gained IPv6LL Dec 16 12:52:16.194334 systemd[1]: Started cri-containerd-fe161848def4bbb379c42be238c7e9cded7778e28d7068a8e9aa82e29805650e.scope - libcontainer container fe161848def4bbb379c42be238c7e9cded7778e28d7068a8e9aa82e29805650e. 
Dec 16 12:52:16.214000 audit: BPF prog-id=251 op=LOAD Dec 16 12:52:16.215000 audit: BPF prog-id=252 op=LOAD Dec 16 12:52:16.215000 audit[5740]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=5729 pid=5740 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:52:16.215000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6665313631383438646566346262623337396334326265323338633765 Dec 16 12:52:16.215000 audit: BPF prog-id=252 op=UNLOAD Dec 16 12:52:16.215000 audit[5740]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=5729 pid=5740 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:52:16.215000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6665313631383438646566346262623337396334326265323338633765 Dec 16 12:52:16.216000 audit: BPF prog-id=253 op=LOAD Dec 16 12:52:16.216000 audit[5740]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=5729 pid=5740 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:52:16.216000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6665313631383438646566346262623337396334326265323338633765 Dec 16 12:52:16.216000 audit: BPF prog-id=254 op=LOAD Dec 16 12:52:16.216000 audit[5740]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=5729 pid=5740 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:52:16.216000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6665313631383438646566346262623337396334326265323338633765 Dec 16 12:52:16.216000 audit: BPF prog-id=254 op=UNLOAD Dec 16 12:52:16.216000 audit[5740]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=5729 pid=5740 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:52:16.216000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6665313631383438646566346262623337396334326265323338633765 Dec 16 12:52:16.216000 audit: BPF prog-id=253 op=UNLOAD Dec 16 12:52:16.216000 audit[5740]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=5729 pid=5740 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 
12:52:16.216000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6665313631383438646566346262623337396334326265323338633765 Dec 16 12:52:16.216000 audit: BPF prog-id=255 op=LOAD Dec 16 12:52:16.216000 audit[5740]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=5729 pid=5740 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:52:16.216000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6665313631383438646566346262623337396334326265323338633765 Dec 16 12:52:16.269395 containerd[1969]: time="2025-12-16T12:52:16.269339970Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6fb4c65f97-rg8xf,Uid:631dcf21-0085-4dd5-b7a8-35fb5c10f8ab,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"fe161848def4bbb379c42be238c7e9cded7778e28d7068a8e9aa82e29805650e\"" Dec 16 12:52:16.274207 containerd[1969]: time="2025-12-16T12:52:16.274169855Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 12:52:16.310323 kubelet[3300]: E1216 12:52:16.310281 3300 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" 
pod="calico-apiserver/calico-apiserver-6fb4c65f97-cxgtk" podUID="5fa2e950-afe9-46cc-8df6-b014aa90c32b" Dec 16 12:52:16.315564 kubelet[3300]: E1216 12:52:16.315520 3300 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5b97648649-sj8kb" podUID="97ebcca4-f43d-4a70-b294-e93b18442671" Dec 16 12:52:16.426545 systemd[1]: Started sshd@8-172.31.17.11:22-147.75.109.163:47032.service - OpenSSH per-connection server daemon (147.75.109.163:47032). Dec 16 12:52:16.425000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-172.31.17.11:22-147.75.109.163:47032 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:52:16.495000 audit[5769]: NETFILTER_CFG table=filter:141 family=2 entries=14 op=nft_register_rule pid=5769 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:52:16.495000 audit[5769]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffc7f8833e0 a2=0 a3=7ffc7f8833cc items=0 ppid=3722 pid=5769 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:52:16.495000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:52:16.498000 audit[5769]: NETFILTER_CFG table=nat:142 family=2 entries=20 op=nft_register_rule pid=5769 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:52:16.498000 audit[5769]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffc7f8833e0 a2=0 a3=7ffc7f8833cc items=0 ppid=3722 pid=5769 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:52:16.498000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:52:16.533434 containerd[1969]: time="2025-12-16T12:52:16.533229399Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:52:16.535407 containerd[1969]: time="2025-12-16T12:52:16.535276838Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 12:52:16.535407 containerd[1969]: time="2025-12-16T12:52:16.535379946Z" level=info msg="stop 
pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 12:52:16.535805 kubelet[3300]: E1216 12:52:16.535739 3300 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:52:16.535805 kubelet[3300]: E1216 12:52:16.535781 3300 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:52:16.536376 kubelet[3300]: E1216 12:52:16.536261 3300 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gm8jc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6fb4c65f97-rg8xf_calico-apiserver(631dcf21-0085-4dd5-b7a8-35fb5c10f8ab): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 12:52:16.538010 kubelet[3300]: E1216 12:52:16.537973 3300 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6fb4c65f97-rg8xf" podUID="631dcf21-0085-4dd5-b7a8-35fb5c10f8ab" Dec 16 12:52:16.650000 audit[5766]: USER_ACCT pid=5766 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:52:16.651318 sshd[5766]: Accepted publickey for core from 147.75.109.163 port 47032 ssh2: RSA SHA256:U/DoHe1xJfGISnzLa/L/V6WMQEeU/gRWF7ew5p5tFyo Dec 16 12:52:16.651000 audit[5766]: CRED_ACQ pid=5766 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:52:16.651000 audit[5766]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdeb4309e0 a2=3 a3=0 items=0 ppid=1 pid=5766 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:52:16.651000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:52:16.654298 sshd-session[5766]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 
12:52:16.659977 systemd-logind[1926]: New session 10 of user core. Dec 16 12:52:16.667154 systemd[1]: Started session-10.scope - Session 10 of User core. Dec 16 12:52:16.669000 audit[5766]: USER_START pid=5766 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:52:16.671000 audit[5772]: CRED_ACQ pid=5772 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:52:16.922923 containerd[1969]: time="2025-12-16T12:52:16.922862550Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-j62tw,Uid:6847d7c3-2098-48b8-8794-1b664366eb8c,Namespace:kube-system,Attempt:0,}" Dec 16 12:52:17.074701 systemd-networkd[1548]: caliea811648aaf: Link UP Dec 16 12:52:17.077052 systemd-networkd[1548]: caliea811648aaf: Gained carrier Dec 16 12:52:17.102398 containerd[1969]: 2025-12-16 12:52:16.977 [INFO][5780] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--17--11-k8s-coredns--674b8bbfcf--j62tw-eth0 coredns-674b8bbfcf- kube-system 6847d7c3-2098-48b8-8794-1b664366eb8c 833 0 2025-12-16 12:51:16 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-17-11 coredns-674b8bbfcf-j62tw eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] caliea811648aaf [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="df05eb89fa2d9eaa596a7174924796ad409b4a1fe0bcf7c8ce6e11420e9e83fc" 
Namespace="kube-system" Pod="coredns-674b8bbfcf-j62tw" WorkloadEndpoint="ip--172--31--17--11-k8s-coredns--674b8bbfcf--j62tw-" Dec 16 12:52:17.102398 containerd[1969]: 2025-12-16 12:52:16.977 [INFO][5780] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="df05eb89fa2d9eaa596a7174924796ad409b4a1fe0bcf7c8ce6e11420e9e83fc" Namespace="kube-system" Pod="coredns-674b8bbfcf-j62tw" WorkloadEndpoint="ip--172--31--17--11-k8s-coredns--674b8bbfcf--j62tw-eth0" Dec 16 12:52:17.102398 containerd[1969]: 2025-12-16 12:52:17.019 [INFO][5794] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="df05eb89fa2d9eaa596a7174924796ad409b4a1fe0bcf7c8ce6e11420e9e83fc" HandleID="k8s-pod-network.df05eb89fa2d9eaa596a7174924796ad409b4a1fe0bcf7c8ce6e11420e9e83fc" Workload="ip--172--31--17--11-k8s-coredns--674b8bbfcf--j62tw-eth0" Dec 16 12:52:17.102398 containerd[1969]: 2025-12-16 12:52:17.020 [INFO][5794] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="df05eb89fa2d9eaa596a7174924796ad409b4a1fe0bcf7c8ce6e11420e9e83fc" HandleID="k8s-pod-network.df05eb89fa2d9eaa596a7174924796ad409b4a1fe0bcf7c8ce6e11420e9e83fc" Workload="ip--172--31--17--11-k8s-coredns--674b8bbfcf--j62tw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5980), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-17-11", "pod":"coredns-674b8bbfcf-j62tw", "timestamp":"2025-12-16 12:52:17.019952534 +0000 UTC"}, Hostname:"ip-172-31-17-11", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 12:52:17.102398 containerd[1969]: 2025-12-16 12:52:17.020 [INFO][5794] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 12:52:17.102398 containerd[1969]: 2025-12-16 12:52:17.020 [INFO][5794] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 12:52:17.102398 containerd[1969]: 2025-12-16 12:52:17.020 [INFO][5794] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-17-11' Dec 16 12:52:17.102398 containerd[1969]: 2025-12-16 12:52:17.027 [INFO][5794] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.df05eb89fa2d9eaa596a7174924796ad409b4a1fe0bcf7c8ce6e11420e9e83fc" host="ip-172-31-17-11" Dec 16 12:52:17.102398 containerd[1969]: 2025-12-16 12:52:17.035 [INFO][5794] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-17-11" Dec 16 12:52:17.102398 containerd[1969]: 2025-12-16 12:52:17.042 [INFO][5794] ipam/ipam.go 511: Trying affinity for 192.168.15.192/26 host="ip-172-31-17-11" Dec 16 12:52:17.102398 containerd[1969]: 2025-12-16 12:52:17.046 [INFO][5794] ipam/ipam.go 158: Attempting to load block cidr=192.168.15.192/26 host="ip-172-31-17-11" Dec 16 12:52:17.102398 containerd[1969]: 2025-12-16 12:52:17.049 [INFO][5794] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.15.192/26 host="ip-172-31-17-11" Dec 16 12:52:17.102398 containerd[1969]: 2025-12-16 12:52:17.049 [INFO][5794] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.15.192/26 handle="k8s-pod-network.df05eb89fa2d9eaa596a7174924796ad409b4a1fe0bcf7c8ce6e11420e9e83fc" host="ip-172-31-17-11" Dec 16 12:52:17.102398 containerd[1969]: 2025-12-16 12:52:17.052 [INFO][5794] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.df05eb89fa2d9eaa596a7174924796ad409b4a1fe0bcf7c8ce6e11420e9e83fc Dec 16 12:52:17.102398 containerd[1969]: 2025-12-16 12:52:17.057 [INFO][5794] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.15.192/26 handle="k8s-pod-network.df05eb89fa2d9eaa596a7174924796ad409b4a1fe0bcf7c8ce6e11420e9e83fc" host="ip-172-31-17-11" Dec 16 12:52:17.102398 containerd[1969]: 2025-12-16 12:52:17.067 [INFO][5794] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.15.200/26] block=192.168.15.192/26 
handle="k8s-pod-network.df05eb89fa2d9eaa596a7174924796ad409b4a1fe0bcf7c8ce6e11420e9e83fc" host="ip-172-31-17-11" Dec 16 12:52:17.102398 containerd[1969]: 2025-12-16 12:52:17.067 [INFO][5794] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.15.200/26] handle="k8s-pod-network.df05eb89fa2d9eaa596a7174924796ad409b4a1fe0bcf7c8ce6e11420e9e83fc" host="ip-172-31-17-11" Dec 16 12:52:17.102398 containerd[1969]: 2025-12-16 12:52:17.067 [INFO][5794] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 12:52:17.102398 containerd[1969]: 2025-12-16 12:52:17.067 [INFO][5794] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.15.200/26] IPv6=[] ContainerID="df05eb89fa2d9eaa596a7174924796ad409b4a1fe0bcf7c8ce6e11420e9e83fc" HandleID="k8s-pod-network.df05eb89fa2d9eaa596a7174924796ad409b4a1fe0bcf7c8ce6e11420e9e83fc" Workload="ip--172--31--17--11-k8s-coredns--674b8bbfcf--j62tw-eth0" Dec 16 12:52:17.104605 containerd[1969]: 2025-12-16 12:52:17.071 [INFO][5780] cni-plugin/k8s.go 418: Populated endpoint ContainerID="df05eb89fa2d9eaa596a7174924796ad409b4a1fe0bcf7c8ce6e11420e9e83fc" Namespace="kube-system" Pod="coredns-674b8bbfcf-j62tw" WorkloadEndpoint="ip--172--31--17--11-k8s-coredns--674b8bbfcf--j62tw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--11-k8s-coredns--674b8bbfcf--j62tw-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"6847d7c3-2098-48b8-8794-1b664366eb8c", ResourceVersion:"833", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 51, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-11", ContainerID:"", Pod:"coredns-674b8bbfcf-j62tw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.15.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliea811648aaf", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:52:17.104605 containerd[1969]: 2025-12-16 12:52:17.071 [INFO][5780] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.15.200/32] ContainerID="df05eb89fa2d9eaa596a7174924796ad409b4a1fe0bcf7c8ce6e11420e9e83fc" Namespace="kube-system" Pod="coredns-674b8bbfcf-j62tw" WorkloadEndpoint="ip--172--31--17--11-k8s-coredns--674b8bbfcf--j62tw-eth0" Dec 16 12:52:17.104605 containerd[1969]: 2025-12-16 12:52:17.071 [INFO][5780] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliea811648aaf ContainerID="df05eb89fa2d9eaa596a7174924796ad409b4a1fe0bcf7c8ce6e11420e9e83fc" Namespace="kube-system" Pod="coredns-674b8bbfcf-j62tw" WorkloadEndpoint="ip--172--31--17--11-k8s-coredns--674b8bbfcf--j62tw-eth0" Dec 16 12:52:17.104605 containerd[1969]: 2025-12-16 12:52:17.074 [INFO][5780] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="df05eb89fa2d9eaa596a7174924796ad409b4a1fe0bcf7c8ce6e11420e9e83fc" Namespace="kube-system" 
Pod="coredns-674b8bbfcf-j62tw" WorkloadEndpoint="ip--172--31--17--11-k8s-coredns--674b8bbfcf--j62tw-eth0" Dec 16 12:52:17.104605 containerd[1969]: 2025-12-16 12:52:17.076 [INFO][5780] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="df05eb89fa2d9eaa596a7174924796ad409b4a1fe0bcf7c8ce6e11420e9e83fc" Namespace="kube-system" Pod="coredns-674b8bbfcf-j62tw" WorkloadEndpoint="ip--172--31--17--11-k8s-coredns--674b8bbfcf--j62tw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--11-k8s-coredns--674b8bbfcf--j62tw-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"6847d7c3-2098-48b8-8794-1b664366eb8c", ResourceVersion:"833", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 51, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-11", ContainerID:"df05eb89fa2d9eaa596a7174924796ad409b4a1fe0bcf7c8ce6e11420e9e83fc", Pod:"coredns-674b8bbfcf-j62tw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.15.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliea811648aaf", MAC:"2a:d1:99:4b:b8:9b", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:52:17.104605 containerd[1969]: 2025-12-16 12:52:17.095 [INFO][5780] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="df05eb89fa2d9eaa596a7174924796ad409b4a1fe0bcf7c8ce6e11420e9e83fc" Namespace="kube-system" Pod="coredns-674b8bbfcf-j62tw" WorkloadEndpoint="ip--172--31--17--11-k8s-coredns--674b8bbfcf--j62tw-eth0" Dec 16 12:52:17.132000 audit[5810]: NETFILTER_CFG table=filter:143 family=2 entries=58 op=nft_register_chain pid=5810 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 12:52:17.132000 audit[5810]: SYSCALL arch=c000003e syscall=46 success=yes exit=26744 a0=3 a1=7ffec6415250 a2=0 a3=7ffec641523c items=0 ppid=5011 pid=5810 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:52:17.132000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 12:52:17.148171 containerd[1969]: time="2025-12-16T12:52:17.148098030Z" level=info msg="connecting to shim df05eb89fa2d9eaa596a7174924796ad409b4a1fe0bcf7c8ce6e11420e9e83fc" address="unix:///run/containerd/s/b9e67ed8f12f19c4c6f62a9070f1b4b33495f2f2b09a6a6dcf55169be3a53257" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:52:17.185124 systemd[1]: Started cri-containerd-df05eb89fa2d9eaa596a7174924796ad409b4a1fe0bcf7c8ce6e11420e9e83fc.scope - libcontainer container df05eb89fa2d9eaa596a7174924796ad409b4a1fe0bcf7c8ce6e11420e9e83fc. 
Dec 16 12:52:17.202000 audit: BPF prog-id=256 op=LOAD Dec 16 12:52:17.202000 audit: BPF prog-id=257 op=LOAD Dec 16 12:52:17.202000 audit[5833]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000220238 a2=98 a3=0 items=0 ppid=5820 pid=5833 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:52:17.202000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6466303565623839666132643965616135393661373137343932343739 Dec 16 12:52:17.202000 audit: BPF prog-id=257 op=UNLOAD Dec 16 12:52:17.202000 audit[5833]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5820 pid=5833 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:52:17.202000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6466303565623839666132643965616135393661373137343932343739 Dec 16 12:52:17.202000 audit: BPF prog-id=258 op=LOAD Dec 16 12:52:17.202000 audit[5833]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000220488 a2=98 a3=0 items=0 ppid=5820 pid=5833 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:52:17.202000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6466303565623839666132643965616135393661373137343932343739 Dec 16 12:52:17.202000 audit: BPF prog-id=259 op=LOAD Dec 16 12:52:17.202000 audit[5833]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000220218 a2=98 a3=0 items=0 ppid=5820 pid=5833 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:52:17.202000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6466303565623839666132643965616135393661373137343932343739 Dec 16 12:52:17.202000 audit: BPF prog-id=259 op=UNLOAD Dec 16 12:52:17.202000 audit[5833]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5820 pid=5833 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:52:17.202000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6466303565623839666132643965616135393661373137343932343739 Dec 16 12:52:17.202000 audit: BPF prog-id=258 op=UNLOAD Dec 16 12:52:17.202000 audit[5833]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5820 pid=5833 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 
12:52:17.202000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6466303565623839666132643965616135393661373137343932343739 Dec 16 12:52:17.202000 audit: BPF prog-id=260 op=LOAD Dec 16 12:52:17.202000 audit[5833]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0002206e8 a2=98 a3=0 items=0 ppid=5820 pid=5833 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:52:17.202000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6466303565623839666132643965616135393661373137343932343739 Dec 16 12:52:17.266355 containerd[1969]: time="2025-12-16T12:52:17.266252652Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-j62tw,Uid:6847d7c3-2098-48b8-8794-1b664366eb8c,Namespace:kube-system,Attempt:0,} returns sandbox id \"df05eb89fa2d9eaa596a7174924796ad409b4a1fe0bcf7c8ce6e11420e9e83fc\"" Dec 16 12:52:17.276151 containerd[1969]: time="2025-12-16T12:52:17.276102884Z" level=info msg="CreateContainer within sandbox \"df05eb89fa2d9eaa596a7174924796ad409b4a1fe0bcf7c8ce6e11420e9e83fc\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 16 12:52:17.292412 sshd[5772]: Connection closed by 147.75.109.163 port 47032 Dec 16 12:52:17.293362 sshd-session[5766]: pam_unix(sshd:session): session closed for user core Dec 16 12:52:17.294000 audit[5766]: USER_END pid=5766 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close 
grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:52:17.297000 kernel: kauditd_printk_skb: 225 callbacks suppressed Dec 16 12:52:17.297058 kernel: audit: type=1106 audit(1765889537.294:772): pid=5766 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:52:17.298682 systemd[1]: sshd@8-172.31.17.11:22-147.75.109.163:47032.service: Deactivated successfully. Dec 16 12:52:17.301511 systemd[1]: session-10.scope: Deactivated successfully. Dec 16 12:52:17.294000 audit[5766]: CRED_DISP pid=5766 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:52:17.307271 kernel: audit: type=1104 audit(1765889537.294:773): pid=5766 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:52:17.296000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-172.31.17.11:22-147.75.109.163:47032 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:52:17.311983 kernel: audit: type=1131 audit(1765889537.296:774): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-172.31.17.11:22-147.75.109.163:47032 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:52:17.315707 containerd[1969]: time="2025-12-16T12:52:17.315673259Z" level=info msg="Container d59b0fdcbbc78ef9a0bbe2509bb9119edb4496107bc4bfd37c91973cc670870e: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:52:17.319481 kubelet[3300]: E1216 12:52:17.319449 3300 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6fb4c65f97-rg8xf" podUID="631dcf21-0085-4dd5-b7a8-35fb5c10f8ab" Dec 16 12:52:17.320569 kubelet[3300]: E1216 12:52:17.319655 3300 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6fb4c65f97-cxgtk" podUID="5fa2e950-afe9-46cc-8df6-b014aa90c32b" Dec 16 12:52:17.324011 systemd-logind[1926]: Session 10 logged out. Waiting for processes to exit. Dec 16 12:52:17.325360 systemd-logind[1926]: Removed session 10. 
Dec 16 12:52:17.331780 containerd[1969]: time="2025-12-16T12:52:17.331696588Z" level=info msg="CreateContainer within sandbox \"df05eb89fa2d9eaa596a7174924796ad409b4a1fe0bcf7c8ce6e11420e9e83fc\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d59b0fdcbbc78ef9a0bbe2509bb9119edb4496107bc4bfd37c91973cc670870e\"" Dec 16 12:52:17.333207 containerd[1969]: time="2025-12-16T12:52:17.333155280Z" level=info msg="StartContainer for \"d59b0fdcbbc78ef9a0bbe2509bb9119edb4496107bc4bfd37c91973cc670870e\"" Dec 16 12:52:17.338502 containerd[1969]: time="2025-12-16T12:52:17.338268714Z" level=info msg="connecting to shim d59b0fdcbbc78ef9a0bbe2509bb9119edb4496107bc4bfd37c91973cc670870e" address="unix:///run/containerd/s/b9e67ed8f12f19c4c6f62a9070f1b4b33495f2f2b09a6a6dcf55169be3a53257" protocol=ttrpc version=3 Dec 16 12:52:17.382331 systemd[1]: Started cri-containerd-d59b0fdcbbc78ef9a0bbe2509bb9119edb4496107bc4bfd37c91973cc670870e.scope - libcontainer container d59b0fdcbbc78ef9a0bbe2509bb9119edb4496107bc4bfd37c91973cc670870e. 
Dec 16 12:52:17.397000 audit: BPF prog-id=261 op=LOAD Dec 16 12:52:17.400095 kernel: audit: type=1334 audit(1765889537.397:775): prog-id=261 op=LOAD Dec 16 12:52:17.401938 kernel: audit: type=1334 audit(1765889537.399:776): prog-id=262 op=LOAD Dec 16 12:52:17.407381 kernel: audit: type=1300 audit(1765889537.399:776): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000128238 a2=98 a3=0 items=0 ppid=5820 pid=5862 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:52:17.399000 audit: BPF prog-id=262 op=LOAD Dec 16 12:52:17.399000 audit[5862]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000128238 a2=98 a3=0 items=0 ppid=5820 pid=5862 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:52:17.409986 kernel: audit: type=1327 audit(1765889537.399:776): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6435396230666463626263373865663961306262653235303962623931 Dec 16 12:52:17.399000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6435396230666463626263373865663961306262653235303962623931 Dec 16 12:52:17.399000 audit: BPF prog-id=262 op=UNLOAD Dec 16 12:52:17.414852 kernel: audit: type=1334 audit(1765889537.399:777): prog-id=262 op=UNLOAD Dec 16 12:52:17.399000 audit[5862]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5820 pid=5862 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:52:17.426191 kernel: audit: type=1300 audit(1765889537.399:777): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5820 pid=5862 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:52:17.426312 kernel: audit: type=1327 audit(1765889537.399:777): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6435396230666463626263373865663961306262653235303962623931 Dec 16 12:52:17.399000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6435396230666463626263373865663961306262653235303962623931 Dec 16 12:52:17.399000 audit: BPF prog-id=263 op=LOAD Dec 16 12:52:17.399000 audit[5862]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000128488 a2=98 a3=0 items=0 ppid=5820 pid=5862 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:52:17.399000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6435396230666463626263373865663961306262653235303962623931 Dec 16 12:52:17.399000 audit: BPF prog-id=264 op=LOAD Dec 16 12:52:17.399000 audit[5862]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000128218 a2=98 a3=0 items=0 ppid=5820 pid=5862 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:52:17.399000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6435396230666463626263373865663961306262653235303962623931 Dec 16 12:52:17.399000 audit: BPF prog-id=264 op=UNLOAD Dec 16 12:52:17.399000 audit[5862]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5820 pid=5862 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:52:17.399000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6435396230666463626263373865663961306262653235303962623931 Dec 16 12:52:17.399000 audit: BPF prog-id=263 op=UNLOAD Dec 16 12:52:17.399000 audit[5862]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5820 pid=5862 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:52:17.399000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6435396230666463626263373865663961306262653235303962623931 Dec 16 12:52:17.399000 audit: BPF prog-id=265 op=LOAD Dec 16 12:52:17.399000 audit[5862]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001286e8 a2=98 a3=0 items=0 ppid=5820 
pid=5862 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:52:17.399000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6435396230666463626263373865663961306262653235303962623931 Dec 16 12:52:17.443700 containerd[1969]: time="2025-12-16T12:52:17.443584134Z" level=info msg="StartContainer for \"d59b0fdcbbc78ef9a0bbe2509bb9119edb4496107bc4bfd37c91973cc670870e\" returns successfully" Dec 16 12:52:17.519000 audit[5901]: NETFILTER_CFG table=filter:144 family=2 entries=14 op=nft_register_rule pid=5901 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:52:17.519000 audit[5901]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffede581130 a2=0 a3=7ffede58111c items=0 ppid=3722 pid=5901 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:52:17.519000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:52:17.528000 audit[5901]: NETFILTER_CFG table=nat:145 family=2 entries=20 op=nft_register_rule pid=5901 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:52:17.528000 audit[5901]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffede581130 a2=0 a3=7ffede58111c items=0 ppid=3722 pid=5901 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:52:17.528000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:52:17.623089 systemd-networkd[1548]: cali1c4aa2d28d5: Gained IPv6LL Dec 16 12:52:17.921872 containerd[1969]: time="2025-12-16T12:52:17.921828901Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-l2s79,Uid:a2c9a6a7-124f-4bab-89b2-0cbb2fd935f0,Namespace:calico-system,Attempt:0,}" Dec 16 12:52:18.061027 systemd-networkd[1548]: cali6de4f4146ee: Link UP Dec 16 12:52:18.064652 systemd-networkd[1548]: cali6de4f4146ee: Gained carrier Dec 16 12:52:18.089078 containerd[1969]: 2025-12-16 12:52:17.969 [INFO][5905] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--17--11-k8s-csi--node--driver--l2s79-eth0 csi-node-driver- calico-system a2c9a6a7-124f-4bab-89b2-0cbb2fd935f0 733 0 2025-12-16 12:51:35 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-17-11 csi-node-driver-l2s79 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali6de4f4146ee [] [] }} ContainerID="d7c4f1c6079487d433272254f07dc40a57f1564b343801e991efd2212f1d0455" Namespace="calico-system" Pod="csi-node-driver-l2s79" WorkloadEndpoint="ip--172--31--17--11-k8s-csi--node--driver--l2s79-" Dec 16 12:52:18.089078 containerd[1969]: 2025-12-16 12:52:17.970 [INFO][5905] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d7c4f1c6079487d433272254f07dc40a57f1564b343801e991efd2212f1d0455" Namespace="calico-system" Pod="csi-node-driver-l2s79" WorkloadEndpoint="ip--172--31--17--11-k8s-csi--node--driver--l2s79-eth0" Dec 16 12:52:18.089078 containerd[1969]: 2025-12-16 12:52:18.002 [INFO][5916] 
ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d7c4f1c6079487d433272254f07dc40a57f1564b343801e991efd2212f1d0455" HandleID="k8s-pod-network.d7c4f1c6079487d433272254f07dc40a57f1564b343801e991efd2212f1d0455" Workload="ip--172--31--17--11-k8s-csi--node--driver--l2s79-eth0" Dec 16 12:52:18.089078 containerd[1969]: 2025-12-16 12:52:18.002 [INFO][5916] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="d7c4f1c6079487d433272254f07dc40a57f1564b343801e991efd2212f1d0455" HandleID="k8s-pod-network.d7c4f1c6079487d433272254f07dc40a57f1564b343801e991efd2212f1d0455" Workload="ip--172--31--17--11-k8s-csi--node--driver--l2s79-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f030), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-17-11", "pod":"csi-node-driver-l2s79", "timestamp":"2025-12-16 12:52:18.002515433 +0000 UTC"}, Hostname:"ip-172-31-17-11", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 12:52:18.089078 containerd[1969]: 2025-12-16 12:52:18.002 [INFO][5916] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 12:52:18.089078 containerd[1969]: 2025-12-16 12:52:18.002 [INFO][5916] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 12:52:18.089078 containerd[1969]: 2025-12-16 12:52:18.002 [INFO][5916] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-17-11' Dec 16 12:52:18.089078 containerd[1969]: 2025-12-16 12:52:18.012 [INFO][5916] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d7c4f1c6079487d433272254f07dc40a57f1564b343801e991efd2212f1d0455" host="ip-172-31-17-11" Dec 16 12:52:18.089078 containerd[1969]: 2025-12-16 12:52:18.019 [INFO][5916] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-17-11" Dec 16 12:52:18.089078 containerd[1969]: 2025-12-16 12:52:18.025 [INFO][5916] ipam/ipam.go 511: Trying affinity for 192.168.15.192/26 host="ip-172-31-17-11" Dec 16 12:52:18.089078 containerd[1969]: 2025-12-16 12:52:18.027 [INFO][5916] ipam/ipam.go 158: Attempting to load block cidr=192.168.15.192/26 host="ip-172-31-17-11" Dec 16 12:52:18.089078 containerd[1969]: 2025-12-16 12:52:18.030 [INFO][5916] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.15.192/26 host="ip-172-31-17-11" Dec 16 12:52:18.089078 containerd[1969]: 2025-12-16 12:52:18.030 [INFO][5916] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.15.192/26 handle="k8s-pod-network.d7c4f1c6079487d433272254f07dc40a57f1564b343801e991efd2212f1d0455" host="ip-172-31-17-11" Dec 16 12:52:18.089078 containerd[1969]: 2025-12-16 12:52:18.032 [INFO][5916] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.d7c4f1c6079487d433272254f07dc40a57f1564b343801e991efd2212f1d0455 Dec 16 12:52:18.089078 containerd[1969]: 2025-12-16 12:52:18.039 [INFO][5916] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.15.192/26 handle="k8s-pod-network.d7c4f1c6079487d433272254f07dc40a57f1564b343801e991efd2212f1d0455" host="ip-172-31-17-11" Dec 16 12:52:18.089078 containerd[1969]: 2025-12-16 12:52:18.051 [INFO][5916] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.15.201/26] block=192.168.15.192/26 
handle="k8s-pod-network.d7c4f1c6079487d433272254f07dc40a57f1564b343801e991efd2212f1d0455" host="ip-172-31-17-11" Dec 16 12:52:18.089078 containerd[1969]: 2025-12-16 12:52:18.052 [INFO][5916] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.15.201/26] handle="k8s-pod-network.d7c4f1c6079487d433272254f07dc40a57f1564b343801e991efd2212f1d0455" host="ip-172-31-17-11" Dec 16 12:52:18.089078 containerd[1969]: 2025-12-16 12:52:18.052 [INFO][5916] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 12:52:18.089078 containerd[1969]: 2025-12-16 12:52:18.052 [INFO][5916] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.15.201/26] IPv6=[] ContainerID="d7c4f1c6079487d433272254f07dc40a57f1564b343801e991efd2212f1d0455" HandleID="k8s-pod-network.d7c4f1c6079487d433272254f07dc40a57f1564b343801e991efd2212f1d0455" Workload="ip--172--31--17--11-k8s-csi--node--driver--l2s79-eth0" Dec 16 12:52:18.090590 containerd[1969]: 2025-12-16 12:52:18.055 [INFO][5905] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d7c4f1c6079487d433272254f07dc40a57f1564b343801e991efd2212f1d0455" Namespace="calico-system" Pod="csi-node-driver-l2s79" WorkloadEndpoint="ip--172--31--17--11-k8s-csi--node--driver--l2s79-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--11-k8s-csi--node--driver--l2s79-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a2c9a6a7-124f-4bab-89b2-0cbb2fd935f0", ResourceVersion:"733", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 51, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-11", ContainerID:"", Pod:"csi-node-driver-l2s79", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.15.201/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali6de4f4146ee", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:52:18.090590 containerd[1969]: 2025-12-16 12:52:18.055 [INFO][5905] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.15.201/32] ContainerID="d7c4f1c6079487d433272254f07dc40a57f1564b343801e991efd2212f1d0455" Namespace="calico-system" Pod="csi-node-driver-l2s79" WorkloadEndpoint="ip--172--31--17--11-k8s-csi--node--driver--l2s79-eth0" Dec 16 12:52:18.090590 containerd[1969]: 2025-12-16 12:52:18.055 [INFO][5905] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6de4f4146ee ContainerID="d7c4f1c6079487d433272254f07dc40a57f1564b343801e991efd2212f1d0455" Namespace="calico-system" Pod="csi-node-driver-l2s79" WorkloadEndpoint="ip--172--31--17--11-k8s-csi--node--driver--l2s79-eth0" Dec 16 12:52:18.090590 containerd[1969]: 2025-12-16 12:52:18.066 [INFO][5905] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d7c4f1c6079487d433272254f07dc40a57f1564b343801e991efd2212f1d0455" Namespace="calico-system" Pod="csi-node-driver-l2s79" WorkloadEndpoint="ip--172--31--17--11-k8s-csi--node--driver--l2s79-eth0" Dec 16 12:52:18.090590 containerd[1969]: 2025-12-16 12:52:18.069 [INFO][5905] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="d7c4f1c6079487d433272254f07dc40a57f1564b343801e991efd2212f1d0455" Namespace="calico-system" Pod="csi-node-driver-l2s79" WorkloadEndpoint="ip--172--31--17--11-k8s-csi--node--driver--l2s79-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--11-k8s-csi--node--driver--l2s79-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a2c9a6a7-124f-4bab-89b2-0cbb2fd935f0", ResourceVersion:"733", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 51, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-11", ContainerID:"d7c4f1c6079487d433272254f07dc40a57f1564b343801e991efd2212f1d0455", Pod:"csi-node-driver-l2s79", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.15.201/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali6de4f4146ee", MAC:"6a:0f:61:7f:db:42", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:52:18.090590 containerd[1969]: 2025-12-16 12:52:18.085 [INFO][5905] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="d7c4f1c6079487d433272254f07dc40a57f1564b343801e991efd2212f1d0455" Namespace="calico-system" Pod="csi-node-driver-l2s79" WorkloadEndpoint="ip--172--31--17--11-k8s-csi--node--driver--l2s79-eth0" Dec 16 12:52:18.111000 audit[5930]: NETFILTER_CFG table=filter:146 family=2 entries=56 op=nft_register_chain pid=5930 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 12:52:18.111000 audit[5930]: SYSCALL arch=c000003e syscall=46 success=yes exit=25484 a0=3 a1=7fff43ab70a0 a2=0 a3=7fff43ab708c items=0 ppid=5011 pid=5930 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:52:18.111000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 12:52:18.126645 containerd[1969]: time="2025-12-16T12:52:18.126411792Z" level=info msg="connecting to shim d7c4f1c6079487d433272254f07dc40a57f1564b343801e991efd2212f1d0455" address="unix:///run/containerd/s/06f4e0ef74b08c573ebc00db95d7b4a1fd9244ecebbcd91045da927c4cfcbc50" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:52:18.162175 systemd[1]: Started cri-containerd-d7c4f1c6079487d433272254f07dc40a57f1564b343801e991efd2212f1d0455.scope - libcontainer container d7c4f1c6079487d433272254f07dc40a57f1564b343801e991efd2212f1d0455. 
Dec 16 12:52:18.177000 audit: BPF prog-id=266 op=LOAD Dec 16 12:52:18.178000 audit: BPF prog-id=267 op=LOAD Dec 16 12:52:18.178000 audit[5951]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000128238 a2=98 a3=0 items=0 ppid=5939 pid=5951 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:52:18.178000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6437633466316336303739343837643433333237323235346630376463 Dec 16 12:52:18.178000 audit: BPF prog-id=267 op=UNLOAD Dec 16 12:52:18.178000 audit[5951]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5939 pid=5951 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:52:18.178000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6437633466316336303739343837643433333237323235346630376463 Dec 16 12:52:18.178000 audit: BPF prog-id=268 op=LOAD Dec 16 12:52:18.178000 audit[5951]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000128488 a2=98 a3=0 items=0 ppid=5939 pid=5951 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:52:18.178000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6437633466316336303739343837643433333237323235346630376463 Dec 16 12:52:18.178000 audit: BPF prog-id=269 op=LOAD Dec 16 12:52:18.178000 audit[5951]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000128218 a2=98 a3=0 items=0 ppid=5939 pid=5951 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:52:18.178000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6437633466316336303739343837643433333237323235346630376463 Dec 16 12:52:18.179000 audit: BPF prog-id=269 op=UNLOAD Dec 16 12:52:18.179000 audit[5951]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5939 pid=5951 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:52:18.179000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6437633466316336303739343837643433333237323235346630376463 Dec 16 12:52:18.179000 audit: BPF prog-id=268 op=UNLOAD Dec 16 12:52:18.179000 audit[5951]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5939 pid=5951 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 
12:52:18.179000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6437633466316336303739343837643433333237323235346630376463 Dec 16 12:52:18.179000 audit: BPF prog-id=270 op=LOAD Dec 16 12:52:18.179000 audit[5951]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001286e8 a2=98 a3=0 items=0 ppid=5939 pid=5951 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:52:18.179000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6437633466316336303739343837643433333237323235346630376463 Dec 16 12:52:18.202546 containerd[1969]: time="2025-12-16T12:52:18.202489570Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-l2s79,Uid:a2c9a6a7-124f-4bab-89b2-0cbb2fd935f0,Namespace:calico-system,Attempt:0,} returns sandbox id \"d7c4f1c6079487d433272254f07dc40a57f1564b343801e991efd2212f1d0455\"" Dec 16 12:52:18.207050 containerd[1969]: time="2025-12-16T12:52:18.207007286Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 16 12:52:18.326211 systemd-networkd[1548]: caliea811648aaf: Gained IPv6LL Dec 16 12:52:18.338410 kubelet[3300]: I1216 12:52:18.338345 3300 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-j62tw" podStartSLOduration=62.338329553 podStartE2EDuration="1m2.338329553s" podCreationTimestamp="2025-12-16 12:51:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:52:18.337838029 +0000 UTC 
m=+69.574940275" watchObservedRunningTime="2025-12-16 12:52:18.338329553 +0000 UTC m=+69.575431497" Dec 16 12:52:18.452286 containerd[1969]: time="2025-12-16T12:52:18.452046045Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:52:18.454606 containerd[1969]: time="2025-12-16T12:52:18.454495286Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 16 12:52:18.454996 containerd[1969]: time="2025-12-16T12:52:18.454537687Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 16 12:52:18.455189 kubelet[3300]: E1216 12:52:18.455158 3300 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 12:52:18.455346 kubelet[3300]: E1216 12:52:18.455235 3300 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 12:52:18.466392 kubelet[3300]: E1216 12:52:18.466327 3300 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zn6r2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-l2s79_calico-system(a2c9a6a7-124f-4bab-89b2-0cbb2fd935f0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
logger="UnhandledError" Dec 16 12:52:18.468937 containerd[1969]: time="2025-12-16T12:52:18.468850303Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 16 12:52:18.543000 audit[5977]: NETFILTER_CFG table=filter:147 family=2 entries=14 op=nft_register_rule pid=5977 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:52:18.543000 audit[5977]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffc08c537d0 a2=0 a3=7ffc08c537bc items=0 ppid=3722 pid=5977 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:52:18.543000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:52:18.559000 audit[5977]: NETFILTER_CFG table=nat:148 family=2 entries=56 op=nft_register_chain pid=5977 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:52:18.559000 audit[5977]: SYSCALL arch=c000003e syscall=46 success=yes exit=19860 a0=3 a1=7ffc08c537d0 a2=0 a3=7ffc08c537bc items=0 ppid=3722 pid=5977 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:52:18.559000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:52:18.737458 containerd[1969]: time="2025-12-16T12:52:18.737343906Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:52:18.739575 containerd[1969]: time="2025-12-16T12:52:18.739537035Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 16 12:52:18.739732 containerd[1969]: time="2025-12-16T12:52:18.739618844Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 16 12:52:18.740135 kubelet[3300]: E1216 12:52:18.739886 3300 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 12:52:18.740135 kubelet[3300]: E1216 12:52:18.739966 3300 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 12:52:18.740135 kubelet[3300]: E1216 12:52:18.740091 3300 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zn6r2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-l2s79_calico-system(a2c9a6a7-124f-4bab-89b2-0cbb2fd935f0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 16 12:52:18.742250 kubelet[3300]: E1216 12:52:18.742211 3300 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-l2s79" podUID="a2c9a6a7-124f-4bab-89b2-0cbb2fd935f0" Dec 16 12:52:18.928240 containerd[1969]: time="2025-12-16T12:52:18.927869188Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 16 12:52:19.206379 containerd[1969]: time="2025-12-16T12:52:19.206324265Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:52:19.208598 containerd[1969]: time="2025-12-16T12:52:19.208524191Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 16 12:52:19.208932 containerd[1969]: time="2025-12-16T12:52:19.208585036Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 16 12:52:19.209159 kubelet[3300]: E1216 12:52:19.209051 3300 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 12:52:19.209159 kubelet[3300]: E1216 12:52:19.209109 3300 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 12:52:19.210087 kubelet[3300]: E1216 12:52:19.210038 3300 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:47cc487b7b6d43fa90aba00c924ab97e,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4vw2m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
whisker-595b9f8d69-rdqmx_calico-system(0a0e312f-8a41-4806-bbcc-0e2c96e89982): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 16 12:52:19.213113 containerd[1969]: time="2025-12-16T12:52:19.213078689Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 16 12:52:19.326718 kubelet[3300]: E1216 12:52:19.326555 3300 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-l2s79" podUID="a2c9a6a7-124f-4bab-89b2-0cbb2fd935f0" Dec 16 12:52:19.510226 containerd[1969]: time="2025-12-16T12:52:19.510107585Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:52:19.512190 containerd[1969]: time="2025-12-16T12:52:19.512101297Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 16 12:52:19.512190 containerd[1969]: 
time="2025-12-16T12:52:19.512143693Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 16 12:52:19.512453 kubelet[3300]: E1216 12:52:19.512350 3300 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 12:52:19.512724 kubelet[3300]: E1216 12:52:19.512502 3300 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 12:52:19.512724 kubelet[3300]: E1216 12:52:19.512616 3300 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4vw2m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-595b9f8d69-rdqmx_calico-system(0a0e312f-8a41-4806-bbcc-0e2c96e89982): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 16 12:52:19.514065 kubelet[3300]: E1216 12:52:19.514012 3300 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-595b9f8d69-rdqmx" podUID="0a0e312f-8a41-4806-bbcc-0e2c96e89982" Dec 16 12:52:19.798083 systemd-networkd[1548]: cali6de4f4146ee: Gained IPv6LL Dec 16 12:52:21.855411 ntpd[1914]: Listen normally on 9 calic462f062ea4 [fe80::ecee:eeff:feee:eeee%8]:123 Dec 16 12:52:21.855460 ntpd[1914]: Listen normally on 10 calie0fbb88eb5d [fe80::ecee:eeff:feee:eeee%9]:123 Dec 16 12:52:21.855871 ntpd[1914]: 16 Dec 12:52:21 ntpd[1914]: Listen normally on 9 calic462f062ea4 [fe80::ecee:eeff:feee:eeee%8]:123 Dec 16 12:52:21.855871 ntpd[1914]: 16 Dec 12:52:21 ntpd[1914]: Listen normally on 10 calie0fbb88eb5d [fe80::ecee:eeff:feee:eeee%9]:123 Dec 16 12:52:21.855871 ntpd[1914]: 16 Dec 12:52:21 ntpd[1914]: Listen normally on 11 cali2c7024f463c [fe80::ecee:eeff:feee:eeee%10]:123 Dec 16 12:52:21.855871 ntpd[1914]: 16 Dec 12:52:21 ntpd[1914]: Listen normally on 12 calic17bf330188 [fe80::ecee:eeff:feee:eeee%11]:123 Dec 16 12:52:21.855871 ntpd[1914]: 16 Dec 12:52:21 ntpd[1914]: Listen normally on 13 cali923246644ca [fe80::ecee:eeff:feee:eeee%12]:123 Dec 16 12:52:21.855871 ntpd[1914]: 16 Dec 12:52:21 ntpd[1914]: Listen normally on 14 cali1c4aa2d28d5 
[fe80::ecee:eeff:feee:eeee%13]:123 Dec 16 12:52:21.855871 ntpd[1914]: 16 Dec 12:52:21 ntpd[1914]: Listen normally on 15 caliea811648aaf [fe80::ecee:eeff:feee:eeee%14]:123 Dec 16 12:52:21.855871 ntpd[1914]: 16 Dec 12:52:21 ntpd[1914]: Listen normally on 16 cali6de4f4146ee [fe80::ecee:eeff:feee:eeee%15]:123 Dec 16 12:52:21.855482 ntpd[1914]: Listen normally on 11 cali2c7024f463c [fe80::ecee:eeff:feee:eeee%10]:123 Dec 16 12:52:21.855502 ntpd[1914]: Listen normally on 12 calic17bf330188 [fe80::ecee:eeff:feee:eeee%11]:123 Dec 16 12:52:21.855523 ntpd[1914]: Listen normally on 13 cali923246644ca [fe80::ecee:eeff:feee:eeee%12]:123 Dec 16 12:52:21.855544 ntpd[1914]: Listen normally on 14 cali1c4aa2d28d5 [fe80::ecee:eeff:feee:eeee%13]:123 Dec 16 12:52:21.855563 ntpd[1914]: Listen normally on 15 caliea811648aaf [fe80::ecee:eeff:feee:eeee%14]:123 Dec 16 12:52:21.855582 ntpd[1914]: Listen normally on 16 cali6de4f4146ee [fe80::ecee:eeff:feee:eeee%15]:123 Dec 16 12:52:22.323575 systemd[1]: Started sshd@9-172.31.17.11:22-147.75.109.163:38596.service - OpenSSH per-connection server daemon (147.75.109.163:38596). Dec 16 12:52:22.323000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-172.31.17.11:22-147.75.109.163:38596 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:52:22.325194 kernel: kauditd_printk_skb: 52 callbacks suppressed Dec 16 12:52:22.325272 kernel: audit: type=1130 audit(1765889542.323:796): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-172.31.17.11:22-147.75.109.163:38596 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:52:22.499000 audit[5981]: USER_ACCT pid=5981 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:52:22.502139 sshd-session[5981]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:52:22.503398 sshd[5981]: Accepted publickey for core from 147.75.109.163 port 38596 ssh2: RSA SHA256:U/DoHe1xJfGISnzLa/L/V6WMQEeU/gRWF7ew5p5tFyo Dec 16 12:52:22.504954 kernel: audit: type=1101 audit(1765889542.499:797): pid=5981 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:52:22.500000 audit[5981]: CRED_ACQ pid=5981 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:52:22.512275 kernel: audit: type=1103 audit(1765889542.500:798): pid=5981 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:52:22.512391 kernel: audit: type=1006 audit(1765889542.500:799): pid=5981 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1 Dec 16 12:52:22.510550 systemd-logind[1926]: New session 11 of user core. 
Dec 16 12:52:22.514084 kernel: audit: type=1300 audit(1765889542.500:799): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffec5389bb0 a2=3 a3=0 items=0 ppid=1 pid=5981 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:52:22.500000 audit[5981]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffec5389bb0 a2=3 a3=0 items=0 ppid=1 pid=5981 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:52:22.519561 kernel: audit: type=1327 audit(1765889542.500:799): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:52:22.500000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:52:22.519120 systemd[1]: Started session-11.scope - Session 11 of User core. 
Dec 16 12:52:22.522000 audit[5981]: USER_START pid=5981 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:52:22.528000 audit[5985]: CRED_ACQ pid=5985 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:52:22.530555 kernel: audit: type=1105 audit(1765889542.522:800): pid=5981 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:52:22.530603 kernel: audit: type=1103 audit(1765889542.528:801): pid=5985 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:52:22.678677 sshd[5985]: Connection closed by 147.75.109.163 port 38596 Dec 16 12:52:22.679360 sshd-session[5981]: pam_unix(sshd:session): session closed for user core Dec 16 12:52:22.680000 audit[5981]: USER_END pid=5981 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:52:22.684686 systemd[1]: 
sshd@9-172.31.17.11:22-147.75.109.163:38596.service: Deactivated successfully. Dec 16 12:52:22.687328 systemd[1]: session-11.scope: Deactivated successfully. Dec 16 12:52:22.687966 kernel: audit: type=1106 audit(1765889542.680:802): pid=5981 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:52:22.688952 systemd-logind[1926]: Session 11 logged out. Waiting for processes to exit. Dec 16 12:52:22.680000 audit[5981]: CRED_DISP pid=5981 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:52:22.690941 systemd-logind[1926]: Removed session 11. Dec 16 12:52:22.682000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-172.31.17.11:22-147.75.109.163:38596 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:52:22.694011 kernel: audit: type=1104 audit(1765889542.680:803): pid=5981 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:52:22.711306 systemd[1]: Started sshd@10-172.31.17.11:22-147.75.109.163:38600.service - OpenSSH per-connection server daemon (147.75.109.163:38600). Dec 16 12:52:22.710000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-172.31.17.11:22-147.75.109.163:38600 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:52:22.878000 audit[5997]: USER_ACCT pid=5997 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:52:22.879247 sshd[5997]: Accepted publickey for core from 147.75.109.163 port 38600 ssh2: RSA SHA256:U/DoHe1xJfGISnzLa/L/V6WMQEeU/gRWF7ew5p5tFyo Dec 16 12:52:22.879000 audit[5997]: CRED_ACQ pid=5997 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:52:22.879000 audit[5997]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff91b09ee0 a2=3 a3=0 items=0 ppid=1 pid=5997 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:52:22.879000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:52:22.881096 sshd-session[5997]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:52:22.886669 systemd-logind[1926]: New session 12 of user core. Dec 16 12:52:22.897163 systemd[1]: Started session-12.scope - Session 12 of User core. 
Dec 16 12:52:22.899000 audit[5997]: USER_START pid=5997 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:52:22.901000 audit[6001]: CRED_ACQ pid=6001 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:52:23.104769 sshd[6001]: Connection closed by 147.75.109.163 port 38600 Dec 16 12:52:23.106386 sshd-session[5997]: pam_unix(sshd:session): session closed for user core Dec 16 12:52:23.108000 audit[5997]: USER_END pid=5997 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:52:23.109000 audit[5997]: CRED_DISP pid=5997 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:52:23.115617 systemd-logind[1926]: Session 12 logged out. Waiting for processes to exit. Dec 16 12:52:23.116708 systemd[1]: sshd@10-172.31.17.11:22-147.75.109.163:38600.service: Deactivated successfully. Dec 16 12:52:23.116000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-172.31.17.11:22-147.75.109.163:38600 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:52:23.121730 systemd[1]: session-12.scope: Deactivated successfully. Dec 16 12:52:23.126366 systemd-logind[1926]: Removed session 12. Dec 16 12:52:23.138744 systemd[1]: Started sshd@11-172.31.17.11:22-147.75.109.163:38616.service - OpenSSH per-connection server daemon (147.75.109.163:38616). Dec 16 12:52:23.137000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-172.31.17.11:22-147.75.109.163:38616 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:52:23.304000 audit[6013]: USER_ACCT pid=6013 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:52:23.305952 sshd[6013]: Accepted publickey for core from 147.75.109.163 port 38616 ssh2: RSA SHA256:U/DoHe1xJfGISnzLa/L/V6WMQEeU/gRWF7ew5p5tFyo Dec 16 12:52:23.305000 audit[6013]: CRED_ACQ pid=6013 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:52:23.306000 audit[6013]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc35673970 a2=3 a3=0 items=0 ppid=1 pid=6013 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:52:23.306000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:52:23.307752 sshd-session[6013]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:52:23.313610 systemd-logind[1926]: New session 13 of user core. 
Dec 16 12:52:23.319113 systemd[1]: Started session-13.scope - Session 13 of User core. Dec 16 12:52:23.321000 audit[6013]: USER_START pid=6013 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:52:23.323000 audit[6017]: CRED_ACQ pid=6017 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:52:23.482155 sshd[6017]: Connection closed by 147.75.109.163 port 38616 Dec 16 12:52:23.482467 sshd-session[6013]: pam_unix(sshd:session): session closed for user core Dec 16 12:52:23.484000 audit[6013]: USER_END pid=6013 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:52:23.484000 audit[6013]: CRED_DISP pid=6013 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:52:23.489097 systemd[1]: sshd@11-172.31.17.11:22-147.75.109.163:38616.service: Deactivated successfully. Dec 16 12:52:23.489000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-172.31.17.11:22-147.75.109.163:38616 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:52:23.494600 systemd[1]: session-13.scope: Deactivated successfully. Dec 16 12:52:23.497878 systemd-logind[1926]: Session 13 logged out. Waiting for processes to exit. Dec 16 12:52:23.500053 systemd-logind[1926]: Removed session 13. Dec 16 12:52:26.922981 containerd[1969]: time="2025-12-16T12:52:26.922154262Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 12:52:27.165162 containerd[1969]: time="2025-12-16T12:52:27.165116565Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:52:27.167466 containerd[1969]: time="2025-12-16T12:52:27.167411839Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 12:52:27.167577 containerd[1969]: time="2025-12-16T12:52:27.167491090Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 12:52:27.167710 kubelet[3300]: E1216 12:52:27.167633 3300 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:52:27.168136 kubelet[3300]: E1216 12:52:27.167722 3300 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:52:27.168136 kubelet[3300]: E1216 12:52:27.167847 3300 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-spx94,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7b5d97d74b-j9kr5_calico-apiserver(07dabb60-b494-485b-be91-7522183aff41): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 12:52:27.169048 kubelet[3300]: E1216 12:52:27.169003 3300 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b5d97d74b-j9kr5" podUID="07dabb60-b494-485b-be91-7522183aff41" Dec 16 12:52:28.522056 kernel: kauditd_printk_skb: 23 callbacks suppressed Dec 16 12:52:28.522177 kernel: audit: type=1130 audit(1765889548.516:823): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-172.31.17.11:22-147.75.109.163:38624 comm="systemd" 
exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:52:28.516000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-172.31.17.11:22-147.75.109.163:38624 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:52:28.515305 systemd[1]: Started sshd@12-172.31.17.11:22-147.75.109.163:38624.service - OpenSSH per-connection server daemon (147.75.109.163:38624). Dec 16 12:52:28.679000 audit[6037]: USER_ACCT pid=6037 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:52:28.682105 sshd[6037]: Accepted publickey for core from 147.75.109.163 port 38624 ssh2: RSA SHA256:U/DoHe1xJfGISnzLa/L/V6WMQEeU/gRWF7ew5p5tFyo Dec 16 12:52:28.687970 kernel: audit: type=1101 audit(1765889548.679:824): pid=6037 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:52:28.691458 sshd-session[6037]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:52:28.698141 kernel: audit: type=1103 audit(1765889548.689:825): pid=6037 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:52:28.689000 audit[6037]: CRED_ACQ pid=6037 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" 
hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:52:28.705861 kernel: audit: type=1006 audit(1765889548.689:826): pid=6037 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=14 res=1 Dec 16 12:52:28.689000 audit[6037]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffddd1aa9c0 a2=3 a3=0 items=0 ppid=1 pid=6037 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:52:28.708644 systemd-logind[1926]: New session 14 of user core. Dec 16 12:52:28.714937 kernel: audit: type=1300 audit(1765889548.689:826): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffddd1aa9c0 a2=3 a3=0 items=0 ppid=1 pid=6037 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:52:28.689000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:52:28.718736 systemd[1]: Started session-14.scope - Session 14 of User core. 
Dec 16 12:52:28.720120 kernel: audit: type=1327 audit(1765889548.689:826): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:52:28.724000 audit[6037]: USER_START pid=6037 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:52:28.733107 kernel: audit: type=1105 audit(1765889548.724:827): pid=6037 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:52:28.732000 audit[6041]: CRED_ACQ pid=6041 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:52:28.741940 kernel: audit: type=1103 audit(1765889548.732:828): pid=6041 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:52:28.889960 sshd[6041]: Connection closed by 147.75.109.163 port 38624 Dec 16 12:52:28.891632 sshd-session[6037]: pam_unix(sshd:session): session closed for user core Dec 16 12:52:28.896000 audit[6037]: USER_END pid=6037 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" 
exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:52:28.901464 systemd-logind[1926]: Session 14 logged out. Waiting for processes to exit. Dec 16 12:52:28.904087 systemd[1]: sshd@12-172.31.17.11:22-147.75.109.163:38624.service: Deactivated successfully. Dec 16 12:52:28.904929 kernel: audit: type=1106 audit(1765889548.896:829): pid=6037 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:52:28.912994 kernel: audit: type=1104 audit(1765889548.896:830): pid=6037 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:52:28.896000 audit[6037]: CRED_DISP pid=6037 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:52:28.908155 systemd[1]: session-14.scope: Deactivated successfully. Dec 16 12:52:28.911175 systemd-logind[1926]: Removed session 14. Dec 16 12:52:28.903000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-172.31.17.11:22-147.75.109.163:38624 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:52:28.927999 containerd[1969]: time="2025-12-16T12:52:28.927781177Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 16 12:52:29.194038 containerd[1969]: time="2025-12-16T12:52:29.193930970Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:52:29.197123 containerd[1969]: time="2025-12-16T12:52:29.197030080Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 16 12:52:29.197123 containerd[1969]: time="2025-12-16T12:52:29.197084165Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 16 12:52:29.197542 kubelet[3300]: E1216 12:52:29.197502 3300 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 12:52:29.197862 kubelet[3300]: E1216 12:52:29.197548 3300 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 12:52:29.198506 containerd[1969]: time="2025-12-16T12:52:29.198457619Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 12:52:29.199107 kubelet[3300]: E1216 12:52:29.198553 3300 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hrjjg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-8gcbc_calico-system(39a10cdc-f6c5-430e-ac11-7f9183d0c949): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 16 12:52:29.200801 kubelet[3300]: E1216 12:52:29.200721 3300 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-8gcbc" podUID="39a10cdc-f6c5-430e-ac11-7f9183d0c949" Dec 16 12:52:29.484602 containerd[1969]: time="2025-12-16T12:52:29.484184951Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:52:29.486664 containerd[1969]: time="2025-12-16T12:52:29.486566600Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack 
image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 12:52:29.486798 containerd[1969]: time="2025-12-16T12:52:29.486759556Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 12:52:29.486986 kubelet[3300]: E1216 12:52:29.486896 3300 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:52:29.487192 kubelet[3300]: E1216 12:52:29.486999 3300 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:52:29.487668 kubelet[3300]: E1216 12:52:29.487124 3300 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gm8jc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6fb4c65f97-rg8xf_calico-apiserver(631dcf21-0085-4dd5-b7a8-35fb5c10f8ab): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 12:52:29.488795 kubelet[3300]: E1216 12:52:29.488760 3300 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6fb4c65f97-rg8xf" podUID="631dcf21-0085-4dd5-b7a8-35fb5c10f8ab" Dec 16 12:52:29.922452 containerd[1969]: time="2025-12-16T12:52:29.922316077Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 16 12:52:30.234783 containerd[1969]: time="2025-12-16T12:52:30.234606226Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:52:30.236930 containerd[1969]: time="2025-12-16T12:52:30.236865289Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 16 12:52:30.237058 containerd[1969]: time="2025-12-16T12:52:30.236961727Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 16 12:52:30.237469 kubelet[3300]: E1216 12:52:30.237406 3300 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 12:52:30.238377 kubelet[3300]: E1216 12:52:30.237485 3300 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull 
and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 12:52:30.238377 kubelet[3300]: E1216 12:52:30.237703 3300 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zn6r2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:Fil
e,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-l2s79_calico-system(a2c9a6a7-124f-4bab-89b2-0cbb2fd935f0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 16 12:52:30.241405 containerd[1969]: time="2025-12-16T12:52:30.241371638Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 16 12:52:30.523129 containerd[1969]: time="2025-12-16T12:52:30.523005847Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:52:30.525580 containerd[1969]: time="2025-12-16T12:52:30.525500457Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 16 12:52:30.525702 containerd[1969]: time="2025-12-16T12:52:30.525627053Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 16 12:52:30.526297 kubelet[3300]: E1216 12:52:30.525825 3300 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 12:52:30.526565 kubelet[3300]: E1216 12:52:30.526536 3300 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 12:52:30.526961 kubelet[3300]: E1216 12:52:30.526703 3300 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zn6r2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,Vol
umeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-l2s79_calico-system(a2c9a6a7-124f-4bab-89b2-0cbb2fd935f0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 16 12:52:30.527946 kubelet[3300]: E1216 12:52:30.527896 3300 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-l2s79" podUID="a2c9a6a7-124f-4bab-89b2-0cbb2fd935f0" Dec 16 12:52:30.922350 containerd[1969]: time="2025-12-16T12:52:30.922279393Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 16 12:52:31.152603 containerd[1969]: time="2025-12-16T12:52:31.152560590Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:52:31.154806 containerd[1969]: time="2025-12-16T12:52:31.154734324Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 16 12:52:31.155082 containerd[1969]: time="2025-12-16T12:52:31.154774533Z" level=info 
msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 16 12:52:31.155163 kubelet[3300]: E1216 12:52:31.155079 3300 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 12:52:31.155163 kubelet[3300]: E1216 12:52:31.155133 3300 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 12:52:31.155717 kubelet[3300]: E1216 12:52:31.155364 3300 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.
pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z28sw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-5b97648649-sj8kb_calico-system(97ebcca4-f43d-4a70-b294-e93b18442671): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 16 12:52:31.156638 kubelet[3300]: E1216 12:52:31.156602 3300 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: 
\"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5b97648649-sj8kb" podUID="97ebcca4-f43d-4a70-b294-e93b18442671" Dec 16 12:52:31.922269 containerd[1969]: time="2025-12-16T12:52:31.921994051Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 12:52:32.187249 containerd[1969]: time="2025-12-16T12:52:32.186995886Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:52:32.189180 containerd[1969]: time="2025-12-16T12:52:32.189020213Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 12:52:32.189180 containerd[1969]: time="2025-12-16T12:52:32.189108910Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 12:52:32.189439 kubelet[3300]: E1216 12:52:32.189325 3300 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:52:32.189439 kubelet[3300]: E1216 12:52:32.189423 3300 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:52:32.191214 kubelet[3300]: E1216 12:52:32.189799 3300 kuberuntime_manager.go:1358] "Unhandled Error" 
err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hzbm2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6fb4c65f97-cxgtk_calico-apiserver(5fa2e950-afe9-46cc-8df6-b014aa90c32b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 12:52:32.191214 kubelet[3300]: E1216 12:52:32.191024 3300 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6fb4c65f97-cxgtk" podUID="5fa2e950-afe9-46cc-8df6-b014aa90c32b" Dec 16 12:52:33.924948 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 12:52:33.925351 kernel: audit: type=1130 audit(1765889553.919:832): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-172.31.17.11:22-147.75.109.163:54360 comm="systemd" 
exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:52:33.919000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-172.31.17.11:22-147.75.109.163:54360 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:52:33.920222 systemd[1]: Started sshd@13-172.31.17.11:22-147.75.109.163:54360.service - OpenSSH per-connection server daemon (147.75.109.163:54360). Dec 16 12:52:33.931585 kubelet[3300]: E1216 12:52:33.931550 3300 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-595b9f8d69-rdqmx" podUID="0a0e312f-8a41-4806-bbcc-0e2c96e89982" Dec 16 12:52:34.092000 audit[6078]: USER_ACCT pid=6078 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:52:34.093744 sshd[6078]: Accepted publickey for core from 147.75.109.163 port 54360 ssh2: RSA SHA256:U/DoHe1xJfGISnzLa/L/V6WMQEeU/gRWF7ew5p5tFyo Dec 16 12:52:34.094000 audit[6078]: CRED_ACQ pid=6078 uid=0 auid=4294967295 
ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:52:34.100104 sshd-session[6078]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:52:34.101055 kernel: audit: type=1101 audit(1765889554.092:833): pid=6078 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:52:34.101122 kernel: audit: type=1103 audit(1765889554.094:834): pid=6078 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:52:34.105684 kernel: audit: type=1006 audit(1765889554.094:835): pid=6078 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=15 res=1 Dec 16 12:52:34.094000 audit[6078]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcf103cdb0 a2=3 a3=0 items=0 ppid=1 pid=6078 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:52:34.109776 kernel: audit: type=1300 audit(1765889554.094:835): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcf103cdb0 a2=3 a3=0 items=0 ppid=1 pid=6078 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:52:34.112343 systemd-logind[1926]: New session 15 of user core. 
Dec 16 12:52:34.115889 kernel: audit: type=1327 audit(1765889554.094:835): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:52:34.094000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:52:34.126170 systemd[1]: Started session-15.scope - Session 15 of User core. Dec 16 12:52:34.129000 audit[6078]: USER_START pid=6078 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:52:34.132000 audit[6087]: CRED_ACQ pid=6087 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:52:34.137360 kernel: audit: type=1105 audit(1765889554.129:836): pid=6078 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:52:34.137434 kernel: audit: type=1103 audit(1765889554.132:837): pid=6087 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:52:34.300987 sshd[6087]: Connection closed by 147.75.109.163 port 54360 Dec 16 12:52:34.301742 sshd-session[6078]: pam_unix(sshd:session): session closed for user core Dec 16 12:52:34.310000 audit[6078]: USER_END pid=6078 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 
msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:52:34.319196 kernel: audit: type=1106 audit(1765889554.310:838): pid=6078 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:52:34.313000 audit[6078]: CRED_DISP pid=6078 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:52:34.324495 systemd[1]: sshd@13-172.31.17.11:22-147.75.109.163:54360.service: Deactivated successfully. Dec 16 12:52:34.328048 kernel: audit: type=1104 audit(1765889554.313:839): pid=6078 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:52:34.329050 systemd[1]: session-15.scope: Deactivated successfully. Dec 16 12:52:34.323000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-172.31.17.11:22-147.75.109.163:54360 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:52:34.335445 systemd-logind[1926]: Session 15 logged out. Waiting for processes to exit. Dec 16 12:52:34.337849 systemd-logind[1926]: Removed session 15. 
Dec 16 12:52:37.922594 kubelet[3300]: E1216 12:52:37.922551 3300 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b5d97d74b-j9kr5" podUID="07dabb60-b494-485b-be91-7522183aff41" Dec 16 12:52:39.332289 systemd[1]: Started sshd@14-172.31.17.11:22-147.75.109.163:54366.service - OpenSSH per-connection server daemon (147.75.109.163:54366). Dec 16 12:52:39.331000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-172.31.17.11:22-147.75.109.163:54366 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:52:39.333124 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 12:52:39.334686 kernel: audit: type=1130 audit(1765889559.331:841): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-172.31.17.11:22-147.75.109.163:54366 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:52:39.544000 audit[6102]: USER_ACCT pid=6102 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:52:39.545944 sshd[6102]: Accepted publickey for core from 147.75.109.163 port 54366 ssh2: RSA SHA256:U/DoHe1xJfGISnzLa/L/V6WMQEeU/gRWF7ew5p5tFyo Dec 16 12:52:39.548580 sshd-session[6102]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:52:39.546000 audit[6102]: CRED_ACQ pid=6102 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:52:39.551978 kernel: audit: type=1101 audit(1765889559.544:842): pid=6102 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:52:39.552049 kernel: audit: type=1103 audit(1765889559.546:843): pid=6102 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:52:39.555489 systemd-logind[1926]: New session 16 of user core. 
Dec 16 12:52:39.557261 kernel: audit: type=1006 audit(1765889559.546:844): pid=6102 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Dec 16 12:52:39.546000 audit[6102]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffea8397000 a2=3 a3=0 items=0 ppid=1 pid=6102 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:52:39.559534 kernel: audit: type=1300 audit(1765889559.546:844): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffea8397000 a2=3 a3=0 items=0 ppid=1 pid=6102 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:52:39.546000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:52:39.564285 kernel: audit: type=1327 audit(1765889559.546:844): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:52:39.565299 systemd[1]: Started session-16.scope - Session 16 of User core. 
Dec 16 12:52:39.568000 audit[6102]: USER_START pid=6102 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:52:39.573000 audit[6106]: CRED_ACQ pid=6106 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:52:39.576525 kernel: audit: type=1105 audit(1765889559.568:845): pid=6102 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:52:39.576603 kernel: audit: type=1103 audit(1765889559.573:846): pid=6106 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:52:39.852426 sshd[6106]: Connection closed by 147.75.109.163 port 54366 Dec 16 12:52:39.853055 sshd-session[6102]: pam_unix(sshd:session): session closed for user core Dec 16 12:52:39.853000 audit[6102]: USER_END pid=6102 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:52:39.857863 systemd[1]: 
sshd@14-172.31.17.11:22-147.75.109.163:54366.service: Deactivated successfully. Dec 16 12:52:39.860371 systemd[1]: session-16.scope: Deactivated successfully. Dec 16 12:52:39.860924 kernel: audit: type=1106 audit(1765889559.853:847): pid=6102 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:52:39.860986 kernel: audit: type=1104 audit(1765889559.853:848): pid=6102 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:52:39.853000 audit[6102]: CRED_DISP pid=6102 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:52:39.863409 systemd-logind[1926]: Session 16 logged out. Waiting for processes to exit. Dec 16 12:52:39.864772 systemd-logind[1926]: Removed session 16. Dec 16 12:52:39.857000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-172.31.17.11:22-147.75.109.163:54366 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:52:41.922525 kubelet[3300]: E1216 12:52:41.922468 3300 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5b97648649-sj8kb" podUID="97ebcca4-f43d-4a70-b294-e93b18442671" Dec 16 12:52:41.923401 kubelet[3300]: E1216 12:52:41.922979 3300 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6fb4c65f97-rg8xf" podUID="631dcf21-0085-4dd5-b7a8-35fb5c10f8ab" Dec 16 12:52:42.924449 kubelet[3300]: E1216 12:52:42.924408 3300 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-8gcbc" podUID="39a10cdc-f6c5-430e-ac11-7f9183d0c949" Dec 16 12:52:43.922171 kubelet[3300]: E1216 12:52:43.922114 3300 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling 
image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6fb4c65f97-cxgtk" podUID="5fa2e950-afe9-46cc-8df6-b014aa90c32b" Dec 16 12:52:44.886537 systemd[1]: Started sshd@15-172.31.17.11:22-147.75.109.163:37600.service - OpenSSH per-connection server daemon (147.75.109.163:37600). Dec 16 12:52:44.892192 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 12:52:44.893044 kernel: audit: type=1130 audit(1765889564.885:850): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-172.31.17.11:22-147.75.109.163:37600 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:52:44.885000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-172.31.17.11:22-147.75.109.163:37600 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:52:45.066000 audit[6120]: USER_ACCT pid=6120 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:52:45.067763 sshd[6120]: Accepted publickey for core from 147.75.109.163 port 37600 ssh2: RSA SHA256:U/DoHe1xJfGISnzLa/L/V6WMQEeU/gRWF7ew5p5tFyo Dec 16 12:52:45.070266 sshd-session[6120]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:52:45.072941 kernel: audit: type=1101 audit(1765889565.066:851): pid=6120 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:52:45.068000 audit[6120]: CRED_ACQ pid=6120 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:52:45.077976 kernel: audit: type=1103 audit(1765889565.068:852): pid=6120 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:52:45.079147 systemd-logind[1926]: New session 17 of user core. 
Dec 16 12:52:45.068000 audit[6120]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd4d004930 a2=3 a3=0 items=0 ppid=1 pid=6120 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:52:45.082129 kernel: audit: type=1006 audit(1765889565.068:853): pid=6120 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=17 res=1 Dec 16 12:52:45.082282 kernel: audit: type=1300 audit(1765889565.068:853): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd4d004930 a2=3 a3=0 items=0 ppid=1 pid=6120 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:52:45.068000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:52:45.088228 kernel: audit: type=1327 audit(1765889565.068:853): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:52:45.089230 systemd[1]: Started session-17.scope - Session 17 of User core. 
Dec 16 12:52:45.091000 audit[6120]: USER_START pid=6120 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:52:45.093000 audit[6124]: CRED_ACQ pid=6124 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:52:45.099506 kernel: audit: type=1105 audit(1765889565.091:854): pid=6120 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:52:45.099583 kernel: audit: type=1103 audit(1765889565.093:855): pid=6124 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:52:45.452923 sshd[6124]: Connection closed by 147.75.109.163 port 37600 Dec 16 12:52:45.453251 sshd-session[6120]: pam_unix(sshd:session): session closed for user core Dec 16 12:52:45.454000 audit[6120]: USER_END pid=6120 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:52:45.454000 audit[6120]: CRED_DISP pid=6120 uid=0 
auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:52:45.463025 kernel: audit: type=1106 audit(1765889565.454:856): pid=6120 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:52:45.463097 kernel: audit: type=1104 audit(1765889565.454:857): pid=6120 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:52:45.466474 systemd[1]: sshd@15-172.31.17.11:22-147.75.109.163:37600.service: Deactivated successfully. Dec 16 12:52:45.466000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-172.31.17.11:22-147.75.109.163:37600 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:52:45.468620 systemd[1]: session-17.scope: Deactivated successfully. Dec 16 12:52:45.469486 systemd-logind[1926]: Session 17 logged out. Waiting for processes to exit. Dec 16 12:52:45.471295 systemd-logind[1926]: Removed session 17. Dec 16 12:52:45.482081 systemd[1]: Started sshd@16-172.31.17.11:22-147.75.109.163:37604.service - OpenSSH per-connection server daemon (147.75.109.163:37604). Dec 16 12:52:45.481000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-172.31.17.11:22-147.75.109.163:37604 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:52:45.648000 audit[6136]: USER_ACCT pid=6136 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:52:45.651003 sshd[6136]: Accepted publickey for core from 147.75.109.163 port 37604 ssh2: RSA SHA256:U/DoHe1xJfGISnzLa/L/V6WMQEeU/gRWF7ew5p5tFyo Dec 16 12:52:45.650000 audit[6136]: CRED_ACQ pid=6136 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:52:45.650000 audit[6136]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff55d91470 a2=3 a3=0 items=0 ppid=1 pid=6136 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:52:45.650000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:52:45.652017 sshd-session[6136]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:52:45.657752 systemd-logind[1926]: New session 18 of user core. Dec 16 12:52:45.667176 systemd[1]: Started session-18.scope - Session 18 of User core. 
Dec 16 12:52:45.669000 audit[6136]: USER_START pid=6136 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:52:45.671000 audit[6140]: CRED_ACQ pid=6140 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:52:45.923154 containerd[1969]: time="2025-12-16T12:52:45.922301001Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 16 12:52:45.929512 kubelet[3300]: E1216 12:52:45.926134 3300 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-l2s79" podUID="a2c9a6a7-124f-4bab-89b2-0cbb2fd935f0" Dec 16 12:52:46.187165 sshd[6140]: Connection closed by 147.75.109.163 port 37604 Dec 16 12:52:46.188736 sshd-session[6136]: pam_unix(sshd:session): session closed for user core Dec 16 12:52:46.189000 audit[6136]: USER_END pid=6136 
uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:52:46.190000 audit[6136]: CRED_DISP pid=6136 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:52:46.193461 systemd[1]: sshd@16-172.31.17.11:22-147.75.109.163:37604.service: Deactivated successfully. Dec 16 12:52:46.192000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-172.31.17.11:22-147.75.109.163:37604 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:52:46.195589 systemd[1]: session-18.scope: Deactivated successfully. Dec 16 12:52:46.198358 systemd-logind[1926]: Session 18 logged out. Waiting for processes to exit. Dec 16 12:52:46.199473 systemd-logind[1926]: Removed session 18. Dec 16 12:52:46.222784 systemd[1]: Started sshd@17-172.31.17.11:22-147.75.109.163:37620.service - OpenSSH per-connection server daemon (147.75.109.163:37620). Dec 16 12:52:46.222000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-172.31.17.11:22-147.75.109.163:37620 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:52:46.252319 containerd[1969]: time="2025-12-16T12:52:46.252036527Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:52:46.254444 containerd[1969]: time="2025-12-16T12:52:46.254239874Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 16 12:52:46.254444 containerd[1969]: time="2025-12-16T12:52:46.254265091Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 16 12:52:46.257031 kubelet[3300]: E1216 12:52:46.254641 3300 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 12:52:46.257031 kubelet[3300]: E1216 12:52:46.254707 3300 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 12:52:46.257031 kubelet[3300]: E1216 12:52:46.254853 3300 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:47cc487b7b6d43fa90aba00c924ab97e,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4vw2m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-595b9f8d69-rdqmx_calico-system(0a0e312f-8a41-4806-bbcc-0e2c96e89982): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 16 12:52:46.257476 containerd[1969]: time="2025-12-16T12:52:46.257452352Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 16 12:52:46.397000 audit[6153]: USER_ACCT pid=6153 uid=0 
auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:52:46.398505 sshd[6153]: Accepted publickey for core from 147.75.109.163 port 37620 ssh2: RSA SHA256:U/DoHe1xJfGISnzLa/L/V6WMQEeU/gRWF7ew5p5tFyo Dec 16 12:52:46.398000 audit[6153]: CRED_ACQ pid=6153 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:52:46.398000 audit[6153]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffe0647820 a2=3 a3=0 items=0 ppid=1 pid=6153 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:52:46.398000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:52:46.400416 sshd-session[6153]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:52:46.405807 systemd-logind[1926]: New session 19 of user core. Dec 16 12:52:46.413165 systemd[1]: Started session-19.scope - Session 19 of User core. 
Dec 16 12:52:46.415000 audit[6153]: USER_START pid=6153 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:52:46.417000 audit[6157]: CRED_ACQ pid=6157 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:52:46.555397 containerd[1969]: time="2025-12-16T12:52:46.554581856Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:52:46.557669 containerd[1969]: time="2025-12-16T12:52:46.557534081Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 16 12:52:46.557669 containerd[1969]: time="2025-12-16T12:52:46.557560727Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 16 12:52:46.560247 kubelet[3300]: E1216 12:52:46.560055 3300 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 12:52:46.560247 kubelet[3300]: E1216 12:52:46.560209 3300 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed 
to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 12:52:46.560753 kubelet[3300]: E1216 12:52:46.560619 3300 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4vw2m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:ni
l,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-595b9f8d69-rdqmx_calico-system(0a0e312f-8a41-4806-bbcc-0e2c96e89982): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 16 12:52:46.562538 kubelet[3300]: E1216 12:52:46.561933 3300 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-595b9f8d69-rdqmx" podUID="0a0e312f-8a41-4806-bbcc-0e2c96e89982" Dec 16 12:52:47.113000 audit[6167]: NETFILTER_CFG table=filter:149 family=2 entries=26 op=nft_register_rule pid=6167 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:52:47.113000 audit[6167]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffe915b5f20 a2=0 a3=7ffe915b5f0c items=0 ppid=3722 pid=6167 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:52:47.113000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:52:47.118000 audit[6167]: NETFILTER_CFG table=nat:150 family=2 entries=20 op=nft_register_rule pid=6167 subj=system_u:system_r:kernel_t:s0 
comm="iptables-restor" Dec 16 12:52:47.118000 audit[6167]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffe915b5f20 a2=0 a3=0 items=0 ppid=3722 pid=6167 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:52:47.118000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:52:47.133000 audit[6169]: NETFILTER_CFG table=filter:151 family=2 entries=38 op=nft_register_rule pid=6169 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:52:47.133000 audit[6169]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffc23ec5b40 a2=0 a3=7ffc23ec5b2c items=0 ppid=3722 pid=6169 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:52:47.133000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:52:47.134000 audit[6169]: NETFILTER_CFG table=nat:152 family=2 entries=20 op=nft_register_rule pid=6169 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:52:47.134000 audit[6169]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffc23ec5b40 a2=0 a3=0 items=0 ppid=3722 pid=6169 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:52:47.134000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:52:47.146235 sshd[6157]: Connection closed by 147.75.109.163 port 37620 Dec 16 
12:52:47.148938 sshd-session[6153]: pam_unix(sshd:session): session closed for user core Dec 16 12:52:47.153000 audit[6153]: USER_END pid=6153 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:52:47.153000 audit[6153]: CRED_DISP pid=6153 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:52:47.158037 systemd[1]: sshd@17-172.31.17.11:22-147.75.109.163:37620.service: Deactivated successfully. Dec 16 12:52:47.158000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-172.31.17.11:22-147.75.109.163:37620 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:52:47.164042 systemd[1]: session-19.scope: Deactivated successfully. Dec 16 12:52:47.165903 systemd-logind[1926]: Session 19 logged out. Waiting for processes to exit. Dec 16 12:52:47.180000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-172.31.17.11:22-147.75.109.163:37628 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:52:47.181267 systemd[1]: Started sshd@18-172.31.17.11:22-147.75.109.163:37628.service - OpenSSH per-connection server daemon (147.75.109.163:37628). Dec 16 12:52:47.183125 systemd-logind[1926]: Removed session 19. 
Dec 16 12:52:47.373000 audit[6175]: USER_ACCT pid=6175 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:52:47.374499 sshd[6175]: Accepted publickey for core from 147.75.109.163 port 37628 ssh2: RSA SHA256:U/DoHe1xJfGISnzLa/L/V6WMQEeU/gRWF7ew5p5tFyo Dec 16 12:52:47.374000 audit[6175]: CRED_ACQ pid=6175 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:52:47.374000 audit[6175]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff833b0cb0 a2=3 a3=0 items=0 ppid=1 pid=6175 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:52:47.374000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:52:47.376464 sshd-session[6175]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:52:47.381967 systemd-logind[1926]: New session 20 of user core. Dec 16 12:52:47.387136 systemd[1]: Started session-20.scope - Session 20 of User core. 
Dec 16 12:52:47.389000 audit[6175]: USER_START pid=6175 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:52:47.391000 audit[6179]: CRED_ACQ pid=6179 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:52:48.136030 sshd[6179]: Connection closed by 147.75.109.163 port 37628 Dec 16 12:52:48.137080 sshd-session[6175]: pam_unix(sshd:session): session closed for user core Dec 16 12:52:48.140000 audit[6175]: USER_END pid=6175 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:52:48.140000 audit[6175]: CRED_DISP pid=6175 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:52:48.145208 systemd[1]: sshd@18-172.31.17.11:22-147.75.109.163:37628.service: Deactivated successfully. Dec 16 12:52:48.144000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-172.31.17.11:22-147.75.109.163:37628 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:52:48.147963 systemd[1]: session-20.scope: Deactivated successfully. 
Dec 16 12:52:48.150431 systemd-logind[1926]: Session 20 logged out. Waiting for processes to exit. Dec 16 12:52:48.153170 systemd-logind[1926]: Removed session 20. Dec 16 12:52:48.171000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-172.31.17.11:22-147.75.109.163:37632 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:52:48.172553 systemd[1]: Started sshd@19-172.31.17.11:22-147.75.109.163:37632.service - OpenSSH per-connection server daemon (147.75.109.163:37632). Dec 16 12:52:48.329000 audit[6198]: USER_ACCT pid=6198 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:52:48.330100 sshd[6198]: Accepted publickey for core from 147.75.109.163 port 37632 ssh2: RSA SHA256:U/DoHe1xJfGISnzLa/L/V6WMQEeU/gRWF7ew5p5tFyo Dec 16 12:52:48.330000 audit[6198]: CRED_ACQ pid=6198 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:52:48.330000 audit[6198]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffee5c92550 a2=3 a3=0 items=0 ppid=1 pid=6198 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:52:48.330000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:52:48.331889 sshd-session[6198]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:52:48.338444 systemd-logind[1926]: New session 21 of user core. 
Dec 16 12:52:48.348242 systemd[1]: Started session-21.scope - Session 21 of User core. Dec 16 12:52:48.350000 audit[6198]: USER_START pid=6198 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:52:48.352000 audit[6202]: CRED_ACQ pid=6202 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:52:48.519009 sshd[6202]: Connection closed by 147.75.109.163 port 37632 Dec 16 12:52:48.519625 sshd-session[6198]: pam_unix(sshd:session): session closed for user core Dec 16 12:52:48.523000 audit[6198]: USER_END pid=6198 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:52:48.523000 audit[6198]: CRED_DISP pid=6198 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:52:48.529804 systemd[1]: sshd@19-172.31.17.11:22-147.75.109.163:37632.service: Deactivated successfully. Dec 16 12:52:48.529000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-172.31.17.11:22-147.75.109.163:37632 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:52:48.535289 systemd[1]: session-21.scope: Deactivated successfully. Dec 16 12:52:48.538165 systemd-logind[1926]: Session 21 logged out. Waiting for processes to exit. Dec 16 12:52:48.541853 systemd-logind[1926]: Removed session 21. Dec 16 12:52:52.924777 containerd[1969]: time="2025-12-16T12:52:52.924694560Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 12:52:53.182166 containerd[1969]: time="2025-12-16T12:52:53.182033813Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:52:53.221279 containerd[1969]: time="2025-12-16T12:52:53.221199932Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 12:52:53.221430 containerd[1969]: time="2025-12-16T12:52:53.221246261Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 12:52:53.221663 kubelet[3300]: E1216 12:52:53.221618 3300 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:52:53.222164 kubelet[3300]: E1216 12:52:53.221675 3300 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:52:53.222164 kubelet[3300]: E1216 12:52:53.221861 3300 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-spx94,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7b5d97d74b-j9kr5_calico-apiserver(07dabb60-b494-485b-be91-7522183aff41): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 12:52:53.223020 kubelet[3300]: E1216 12:52:53.222974 3300 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b5d97d74b-j9kr5" podUID="07dabb60-b494-485b-be91-7522183aff41" Dec 16 12:52:53.437000 audit[6215]: NETFILTER_CFG table=filter:153 family=2 entries=26 op=nft_register_rule pid=6215 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:52:53.440163 kernel: kauditd_printk_skb: 57 callbacks suppressed Dec 16 12:52:53.440293 kernel: audit: type=1325 
audit(1765889573.437:899): table=filter:153 family=2 entries=26 op=nft_register_rule pid=6215 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:52:53.437000 audit[6215]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffe72f48c70 a2=0 a3=7ffe72f48c5c items=0 ppid=3722 pid=6215 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:52:53.437000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:52:53.455712 kernel: audit: type=1300 audit(1765889573.437:899): arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffe72f48c70 a2=0 a3=7ffe72f48c5c items=0 ppid=3722 pid=6215 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:52:53.455815 kernel: audit: type=1327 audit(1765889573.437:899): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:52:53.456000 audit[6215]: NETFILTER_CFG table=nat:154 family=2 entries=104 op=nft_register_chain pid=6215 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:52:53.464778 kernel: audit: type=1325 audit(1765889573.456:900): table=nat:154 family=2 entries=104 op=nft_register_chain pid=6215 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:52:53.464874 kernel: audit: type=1300 audit(1765889573.456:900): arch=c000003e syscall=46 success=yes exit=48684 a0=3 a1=7ffe72f48c70 a2=0 a3=7ffe72f48c5c items=0 ppid=3722 pid=6215 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:52:53.456000 audit[6215]: SYSCALL arch=c000003e syscall=46 success=yes exit=48684 a0=3 a1=7ffe72f48c70 a2=0 a3=7ffe72f48c5c items=0 ppid=3722 pid=6215 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:52:53.470258 kernel: audit: type=1327 audit(1765889573.456:900): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:52:53.456000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:52:53.552000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-172.31.17.11:22-147.75.109.163:60804 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:52:53.552971 systemd[1]: Started sshd@20-172.31.17.11:22-147.75.109.163:60804.service - OpenSSH per-connection server daemon (147.75.109.163:60804). Dec 16 12:52:53.557961 kernel: audit: type=1130 audit(1765889573.552:901): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-172.31.17.11:22-147.75.109.163:60804 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:52:53.721000 audit[6217]: USER_ACCT pid=6217 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:52:53.724591 sshd-session[6217]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:52:53.727038 sshd[6217]: Accepted publickey for core from 147.75.109.163 port 60804 ssh2: RSA SHA256:U/DoHe1xJfGISnzLa/L/V6WMQEeU/gRWF7ew5p5tFyo Dec 16 12:52:53.730964 kernel: audit: type=1101 audit(1765889573.721:902): pid=6217 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:52:53.732080 kernel: audit: type=1103 audit(1765889573.722:903): pid=6217 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:52:53.722000 audit[6217]: CRED_ACQ pid=6217 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:52:53.735580 kernel: audit: type=1006 audit(1765889573.722:904): pid=6217 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=22 res=1 Dec 16 12:52:53.735574 systemd-logind[1926]: New session 22 of user core. 
Dec 16 12:52:53.722000 audit[6217]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe6930b940 a2=3 a3=0 items=0 ppid=1 pid=6217 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:52:53.722000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:52:53.739117 systemd[1]: Started session-22.scope - Session 22 of User core. Dec 16 12:52:53.741000 audit[6217]: USER_START pid=6217 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:52:53.743000 audit[6221]: CRED_ACQ pid=6221 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:52:53.864701 sshd[6221]: Connection closed by 147.75.109.163 port 60804 Dec 16 12:52:53.866107 sshd-session[6217]: pam_unix(sshd:session): session closed for user core Dec 16 12:52:53.867000 audit[6217]: USER_END pid=6217 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:52:53.867000 audit[6217]: CRED_DISP pid=6217 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh 
res=success' Dec 16 12:52:53.871340 systemd[1]: sshd@20-172.31.17.11:22-147.75.109.163:60804.service: Deactivated successfully. Dec 16 12:52:53.871000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-172.31.17.11:22-147.75.109.163:60804 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:52:53.875009 systemd[1]: session-22.scope: Deactivated successfully. Dec 16 12:52:53.877128 systemd-logind[1926]: Session 22 logged out. Waiting for processes to exit. Dec 16 12:52:53.878417 systemd-logind[1926]: Removed session 22. Dec 16 12:52:55.923075 containerd[1969]: time="2025-12-16T12:52:55.922751401Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 16 12:52:56.174626 containerd[1969]: time="2025-12-16T12:52:56.174445745Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:52:56.176767 containerd[1969]: time="2025-12-16T12:52:56.176724167Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 16 12:52:56.176987 containerd[1969]: time="2025-12-16T12:52:56.176752695Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 16 12:52:56.177104 kubelet[3300]: E1216 12:52:56.177075 3300 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 12:52:56.177393 kubelet[3300]: E1216 12:52:56.177116 3300 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to 
pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 12:52:56.177393 kubelet[3300]: E1216 12:52:56.177246 3300 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hrjjg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-8gcbc_calico-system(39a10cdc-f6c5-430e-ac11-7f9183d0c949): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 16 12:52:56.178573 kubelet[3300]: E1216 12:52:56.178546 3300 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-8gcbc" podUID="39a10cdc-f6c5-430e-ac11-7f9183d0c949" Dec 16 12:52:56.924463 containerd[1969]: time="2025-12-16T12:52:56.923941910Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 12:52:57.213936 containerd[1969]: time="2025-12-16T12:52:57.213791137Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:52:57.216432 containerd[1969]: time="2025-12-16T12:52:57.216334353Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 12:52:57.216432 containerd[1969]: time="2025-12-16T12:52:57.216365422Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 12:52:57.216637 kubelet[3300]: E1216 12:52:57.216588 3300 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:52:57.216637 kubelet[3300]: E1216 12:52:57.216635 3300 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:52:57.217535 kubelet[3300]: E1216 12:52:57.216874 3300 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gm8jc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6fb4c65f97-rg8xf_calico-apiserver(631dcf21-0085-4dd5-b7a8-35fb5c10f8ab): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 12:52:57.217632 containerd[1969]: time="2025-12-16T12:52:57.217088696Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 16 12:52:57.218119 kubelet[3300]: E1216 12:52:57.218058 3300 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6fb4c65f97-rg8xf" podUID="631dcf21-0085-4dd5-b7a8-35fb5c10f8ab" Dec 16 12:52:57.539145 containerd[1969]: time="2025-12-16T12:52:57.539031860Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:52:57.541222 containerd[1969]: time="2025-12-16T12:52:57.541151140Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 16 12:52:57.541387 containerd[1969]: time="2025-12-16T12:52:57.541243599Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 16 12:52:57.541436 kubelet[3300]: E1216 12:52:57.541395 3300 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 12:52:57.541502 kubelet[3300]: E1216 12:52:57.541444 3300 
kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 12:52:57.541617 kubelet[3300]: E1216 12:52:57.541570 3300 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z28sw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-5b97648649-sj8kb_calico-system(97ebcca4-f43d-4a70-b294-e93b18442671): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 16 12:52:57.543122 kubelet[3300]: E1216 12:52:57.543078 3300 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5b97648649-sj8kb" podUID="97ebcca4-f43d-4a70-b294-e93b18442671" Dec 16 12:52:58.902305 systemd[1]: 
Started sshd@21-172.31.17.11:22-147.75.109.163:60820.service - OpenSSH per-connection server daemon (147.75.109.163:60820). Dec 16 12:52:58.904841 kernel: kauditd_printk_skb: 7 callbacks suppressed Dec 16 12:52:58.904902 kernel: audit: type=1130 audit(1765889578.901:910): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-172.31.17.11:22-147.75.109.163:60820 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:52:58.901000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-172.31.17.11:22-147.75.109.163:60820 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:52:58.954466 containerd[1969]: time="2025-12-16T12:52:58.954415894Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 16 12:52:59.090000 audit[6235]: USER_ACCT pid=6235 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:52:59.097932 kernel: audit: type=1101 audit(1765889579.090:911): pid=6235 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:52:59.098339 sshd[6235]: Accepted publickey for core from 147.75.109.163 port 60820 ssh2: RSA SHA256:U/DoHe1xJfGISnzLa/L/V6WMQEeU/gRWF7ew5p5tFyo Dec 16 12:52:59.101310 sshd-session[6235]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:52:59.098000 audit[6235]: CRED_ACQ pid=6235 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:52:59.107946 kernel: audit: type=1103 audit(1765889579.098:912): pid=6235 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:52:59.099000 audit[6235]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffde2690ed0 a2=3 a3=0 items=0 ppid=1 pid=6235 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:52:59.113581 kernel: audit: type=1006 audit(1765889579.099:913): pid=6235 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1 Dec 16 12:52:59.113671 kernel: audit: type=1300 audit(1765889579.099:913): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffde2690ed0 a2=3 a3=0 items=0 ppid=1 pid=6235 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:52:59.099000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:52:59.121952 kernel: audit: type=1327 audit(1765889579.099:913): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:52:59.124732 systemd-logind[1926]: New session 23 of user core. Dec 16 12:52:59.130134 systemd[1]: Started session-23.scope - Session 23 of User core. 
Dec 16 12:52:59.142936 kernel: audit: type=1105 audit(1765889579.134:914): pid=6235 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:52:59.134000 audit[6235]: USER_START pid=6235 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:52:59.142000 audit[6239]: CRED_ACQ pid=6239 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:52:59.153020 kernel: audit: type=1103 audit(1765889579.142:915): pid=6239 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:52:59.288625 containerd[1969]: time="2025-12-16T12:52:59.288457293Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:52:59.290931 containerd[1969]: time="2025-12-16T12:52:59.290815227Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 16 12:52:59.291182 containerd[1969]: time="2025-12-16T12:52:59.291096054Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 16 12:52:59.291601 kubelet[3300]: E1216 12:52:59.291405 3300 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 12:52:59.291601 kubelet[3300]: E1216 12:52:59.291469 3300 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 12:52:59.292338 containerd[1969]: time="2025-12-16T12:52:59.292272277Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 12:52:59.292720 kubelet[3300]: E1216 12:52:59.292652 3300 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zn6r2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-l2s79_calico-system(a2c9a6a7-124f-4bab-89b2-0cbb2fd935f0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
logger="UnhandledError" Dec 16 12:52:59.374489 sshd[6239]: Connection closed by 147.75.109.163 port 60820 Dec 16 12:52:59.375151 sshd-session[6235]: pam_unix(sshd:session): session closed for user core Dec 16 12:52:59.376000 audit[6235]: USER_END pid=6235 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:52:59.383943 kernel: audit: type=1106 audit(1765889579.376:916): pid=6235 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:52:59.376000 audit[6235]: CRED_DISP pid=6235 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:52:59.388951 kernel: audit: type=1104 audit(1765889579.376:917): pid=6235 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:52:59.395396 systemd-logind[1926]: Session 23 logged out. Waiting for processes to exit. Dec 16 12:52:59.395703 systemd[1]: sshd@21-172.31.17.11:22-147.75.109.163:60820.service: Deactivated successfully. 
Dec 16 12:52:59.395000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-172.31.17.11:22-147.75.109.163:60820 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:52:59.398177 systemd[1]: session-23.scope: Deactivated successfully. Dec 16 12:52:59.400409 systemd-logind[1926]: Removed session 23. Dec 16 12:52:59.567401 containerd[1969]: time="2025-12-16T12:52:59.567277174Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:52:59.569502 containerd[1969]: time="2025-12-16T12:52:59.569417328Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 12:52:59.569502 containerd[1969]: time="2025-12-16T12:52:59.569461395Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 12:52:59.569691 kubelet[3300]: E1216 12:52:59.569643 3300 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:52:59.569691 kubelet[3300]: E1216 12:52:59.569704 3300 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:52:59.570114 containerd[1969]: time="2025-12-16T12:52:59.570075988Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 16 
12:52:59.570293 kubelet[3300]: E1216 12:52:59.570249 3300 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hzbm2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6fb4c65f97-cxgtk_calico-apiserver(5fa2e950-afe9-46cc-8df6-b014aa90c32b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 12:52:59.571621 kubelet[3300]: E1216 12:52:59.571573 3300 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6fb4c65f97-cxgtk" podUID="5fa2e950-afe9-46cc-8df6-b014aa90c32b" Dec 16 12:52:59.832099 containerd[1969]: time="2025-12-16T12:52:59.831685522Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:52:59.833872 containerd[1969]: time="2025-12-16T12:52:59.833816036Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" 
failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 16 12:52:59.834360 containerd[1969]: time="2025-12-16T12:52:59.833901387Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 16 12:52:59.834411 kubelet[3300]: E1216 12:52:59.834058 3300 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 12:52:59.834411 kubelet[3300]: E1216 12:52:59.834100 3300 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 12:52:59.834411 kubelet[3300]: E1216 12:52:59.834222 3300 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zn6r2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-l2s79_calico-system(a2c9a6a7-124f-4bab-89b2-0cbb2fd935f0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 16 12:52:59.835420 kubelet[3300]: E1216 12:52:59.835377 3300 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-l2s79" podUID="a2c9a6a7-124f-4bab-89b2-0cbb2fd935f0" Dec 16 12:53:01.938413 kubelet[3300]: E1216 12:53:01.938339 3300 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-595b9f8d69-rdqmx" podUID="0a0e312f-8a41-4806-bbcc-0e2c96e89982" Dec 16 12:53:04.409213 systemd[1]: Started sshd@22-172.31.17.11:22-147.75.109.163:36330.service - OpenSSH per-connection server daemon (147.75.109.163:36330). 
Dec 16 12:53:04.412987 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 12:53:04.413099 kernel: audit: type=1130 audit(1765889584.408:919): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-172.31.17.11:22-147.75.109.163:36330 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:53:04.408000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-172.31.17.11:22-147.75.109.163:36330 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:53:04.634000 audit[6275]: USER_ACCT pid=6275 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:53:04.637959 sshd[6275]: Accepted publickey for core from 147.75.109.163 port 36330 ssh2: RSA SHA256:U/DoHe1xJfGISnzLa/L/V6WMQEeU/gRWF7ew5p5tFyo Dec 16 12:53:04.640325 sshd-session[6275]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:53:04.637000 audit[6275]: CRED_ACQ pid=6275 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:53:04.642489 kernel: audit: type=1101 audit(1765889584.634:920): pid=6275 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:53:04.642543 kernel: audit: type=1103 audit(1765889584.637:921): pid=6275 uid=0 auid=4294967295 ses=4294967295 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:53:04.646562 kernel: audit: type=1006 audit(1765889584.637:922): pid=6275 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1 Dec 16 12:53:04.637000 audit[6275]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffb8099b90 a2=3 a3=0 items=0 ppid=1 pid=6275 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:53:04.637000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:53:04.656024 kernel: audit: type=1300 audit(1765889584.637:922): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffb8099b90 a2=3 a3=0 items=0 ppid=1 pid=6275 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:53:04.656112 kernel: audit: type=1327 audit(1765889584.637:922): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:53:04.657501 systemd-logind[1926]: New session 24 of user core. Dec 16 12:53:04.663256 systemd[1]: Started session-24.scope - Session 24 of User core. 
Dec 16 12:53:04.666000 audit[6275]: USER_START pid=6275 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:53:04.673000 audit[6279]: CRED_ACQ pid=6279 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:53:04.676043 kernel: audit: type=1105 audit(1765889584.666:923): pid=6275 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:53:04.676134 kernel: audit: type=1103 audit(1765889584.673:924): pid=6279 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:53:04.954648 sshd[6279]: Connection closed by 147.75.109.163 port 36330 Dec 16 12:53:04.955585 sshd-session[6275]: pam_unix(sshd:session): session closed for user core Dec 16 12:53:04.957000 audit[6275]: USER_END pid=6275 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:53:04.957000 audit[6275]: CRED_DISP pid=6275 uid=0 
auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:53:04.966293 kernel: audit: type=1106 audit(1765889584.957:925): pid=6275 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:53:04.966394 kernel: audit: type=1104 audit(1765889584.957:926): pid=6275 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:53:04.968327 systemd[1]: sshd@22-172.31.17.11:22-147.75.109.163:36330.service: Deactivated successfully. Dec 16 12:53:04.968000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-172.31.17.11:22-147.75.109.163:36330 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:53:04.969209 systemd-logind[1926]: Session 24 logged out. Waiting for processes to exit. Dec 16 12:53:04.972834 systemd[1]: session-24.scope: Deactivated successfully. Dec 16 12:53:04.976900 systemd-logind[1926]: Removed session 24. 
Dec 16 12:53:06.942846 kubelet[3300]: E1216 12:53:06.942666 3300 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b5d97d74b-j9kr5" podUID="07dabb60-b494-485b-be91-7522183aff41" Dec 16 12:53:08.946956 kubelet[3300]: E1216 12:53:08.945815 3300 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-8gcbc" podUID="39a10cdc-f6c5-430e-ac11-7f9183d0c949" Dec 16 12:53:09.922514 kubelet[3300]: E1216 12:53:09.922397 3300 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5b97648649-sj8kb" podUID="97ebcca4-f43d-4a70-b294-e93b18442671" Dec 16 12:53:09.923522 kubelet[3300]: E1216 12:53:09.923468 3300 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6fb4c65f97-rg8xf" podUID="631dcf21-0085-4dd5-b7a8-35fb5c10f8ab" Dec 16 12:53:09.993279 systemd[1]: Started sshd@23-172.31.17.11:22-147.75.109.163:36344.service - OpenSSH per-connection server daemon (147.75.109.163:36344). Dec 16 12:53:09.996028 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 12:53:09.996148 kernel: audit: type=1130 audit(1765889589.992:928): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-172.31.17.11:22-147.75.109.163:36344 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:53:09.992000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-172.31.17.11:22-147.75.109.163:36344 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:53:10.221000 audit[6293]: USER_ACCT pid=6293 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:53:10.227946 kernel: audit: type=1101 audit(1765889590.221:929): pid=6293 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:53:10.228258 sshd[6293]: Accepted publickey for core from 147.75.109.163 port 36344 ssh2: RSA SHA256:U/DoHe1xJfGISnzLa/L/V6WMQEeU/gRWF7ew5p5tFyo Dec 16 12:53:10.228000 audit[6293]: CRED_ACQ pid=6293 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:53:10.230648 sshd-session[6293]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:53:10.234970 kernel: audit: type=1103 audit(1765889590.228:930): pid=6293 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:53:10.239726 kernel: audit: type=1006 audit(1765889590.228:931): pid=6293 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=25 res=1 Dec 16 12:53:10.228000 audit[6293]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe99fe9230 a2=3 a3=0 items=0 ppid=1 pid=6293 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 
comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:53:10.247279 kernel: audit: type=1300 audit(1765889590.228:931): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe99fe9230 a2=3 a3=0 items=0 ppid=1 pid=6293 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:53:10.247358 kernel: audit: type=1327 audit(1765889590.228:931): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:53:10.228000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:53:10.250603 systemd-logind[1926]: New session 25 of user core. Dec 16 12:53:10.257190 systemd[1]: Started session-25.scope - Session 25 of User core. Dec 16 12:53:10.262000 audit[6293]: USER_START pid=6293 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:53:10.270002 kernel: audit: type=1105 audit(1765889590.262:932): pid=6293 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:53:10.269000 audit[6297]: CRED_ACQ pid=6297 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:53:10.275331 kernel: audit: type=1103 
audit(1765889590.269:933): pid=6297 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:53:10.929946 kubelet[3300]: E1216 12:53:10.929571 3300 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6fb4c65f97-cxgtk" podUID="5fa2e950-afe9-46cc-8df6-b014aa90c32b" Dec 16 12:53:10.933233 kubelet[3300]: E1216 12:53:10.933106 3300 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-l2s79" podUID="a2c9a6a7-124f-4bab-89b2-0cbb2fd935f0" Dec 16 12:53:10.994614 sshd[6297]: Connection closed by 147.75.109.163 port 36344 Dec 16 12:53:10.995488 sshd-session[6293]: pam_unix(sshd:session): session closed for user core Dec 16 
12:53:11.006619 kernel: audit: type=1106 audit(1765889590.997:934): pid=6293 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:53:10.997000 audit[6293]: USER_END pid=6293 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:53:10.997000 audit[6293]: CRED_DISP pid=6293 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:53:11.019933 kernel: audit: type=1104 audit(1765889590.997:935): pid=6293 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:53:11.028227 systemd-logind[1926]: Session 25 logged out. Waiting for processes to exit. Dec 16 12:53:11.028861 systemd[1]: sshd@23-172.31.17.11:22-147.75.109.163:36344.service: Deactivated successfully. Dec 16 12:53:11.028000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-172.31.17.11:22-147.75.109.163:36344 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:53:11.035183 systemd[1]: session-25.scope: Deactivated successfully. Dec 16 12:53:11.040851 systemd-logind[1926]: Removed session 25. 
Dec 16 12:53:15.924629 kubelet[3300]: E1216 12:53:15.924498 3300 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-595b9f8d69-rdqmx" podUID="0a0e312f-8a41-4806-bbcc-0e2c96e89982" Dec 16 12:53:16.031722 systemd[1]: Started sshd@24-172.31.17.11:22-147.75.109.163:55182.service - OpenSSH per-connection server daemon (147.75.109.163:55182). Dec 16 12:53:16.041134 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 12:53:16.041243 kernel: audit: type=1130 audit(1765889596.030:937): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-172.31.17.11:22-147.75.109.163:55182 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:53:16.030000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-172.31.17.11:22-147.75.109.163:55182 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:53:16.273000 audit[6310]: USER_ACCT pid=6310 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:53:16.280354 sshd[6310]: Accepted publickey for core from 147.75.109.163 port 55182 ssh2: RSA SHA256:U/DoHe1xJfGISnzLa/L/V6WMQEeU/gRWF7ew5p5tFyo Dec 16 12:53:16.281513 kernel: audit: type=1101 audit(1765889596.273:938): pid=6310 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:53:16.281000 audit[6310]: CRED_ACQ pid=6310 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:53:16.288943 kernel: audit: type=1103 audit(1765889596.281:939): pid=6310 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:53:16.289511 sshd-session[6310]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:53:16.292944 kernel: audit: type=1006 audit(1765889596.281:940): pid=6310 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=26 res=1 Dec 16 12:53:16.281000 audit[6310]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff6d9b30e0 a2=3 a3=0 items=0 ppid=1 pid=6310 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 
comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:53:16.299968 kernel: audit: type=1300 audit(1765889596.281:940): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff6d9b30e0 a2=3 a3=0 items=0 ppid=1 pid=6310 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:53:16.281000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:53:16.302941 kernel: audit: type=1327 audit(1765889596.281:940): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:53:16.309723 systemd-logind[1926]: New session 26 of user core. Dec 16 12:53:16.318121 systemd[1]: Started session-26.scope - Session 26 of User core. Dec 16 12:53:16.320000 audit[6310]: USER_START pid=6310 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:53:16.332206 kernel: audit: type=1105 audit(1765889596.320:941): pid=6310 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:53:16.332291 kernel: audit: type=1103 audit(1765889596.323:942): pid=6315 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:53:16.323000 audit[6315]: 
CRED_ACQ pid=6315 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:53:16.800833 sshd[6315]: Connection closed by 147.75.109.163 port 55182 Dec 16 12:53:16.802464 sshd-session[6310]: pam_unix(sshd:session): session closed for user core Dec 16 12:53:16.803000 audit[6310]: USER_END pid=6310 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:53:16.815812 kernel: audit: type=1106 audit(1765889596.803:943): pid=6310 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:53:16.815991 kernel: audit: type=1104 audit(1765889596.803:944): pid=6310 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:53:16.803000 audit[6310]: CRED_DISP pid=6310 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:53:16.825823 systemd[1]: sshd@24-172.31.17.11:22-147.75.109.163:55182.service: Deactivated successfully. 
Dec 16 12:53:16.825000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-172.31.17.11:22-147.75.109.163:55182 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:53:16.835185 systemd[1]: session-26.scope: Deactivated successfully. Dec 16 12:53:16.839510 systemd-logind[1926]: Session 26 logged out. Waiting for processes to exit. Dec 16 12:53:16.842627 systemd-logind[1926]: Removed session 26. Dec 16 12:53:17.930732 kubelet[3300]: E1216 12:53:17.930651 3300 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b5d97d74b-j9kr5" podUID="07dabb60-b494-485b-be91-7522183aff41" Dec 16 12:53:20.922352 kubelet[3300]: E1216 12:53:20.921632 3300 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6fb4c65f97-rg8xf" podUID="631dcf21-0085-4dd5-b7a8-35fb5c10f8ab" Dec 16 12:53:21.921588 kubelet[3300]: E1216 12:53:21.921539 3300 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-8gcbc" podUID="39a10cdc-f6c5-430e-ac11-7f9183d0c949" Dec 16 12:53:22.923181 kubelet[3300]: E1216 12:53:22.923107 3300 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-l2s79" podUID="a2c9a6a7-124f-4bab-89b2-0cbb2fd935f0" Dec 16 12:53:23.922136 kubelet[3300]: E1216 12:53:23.922094 3300 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5b97648649-sj8kb" podUID="97ebcca4-f43d-4a70-b294-e93b18442671" Dec 16 12:53:24.921929 kubelet[3300]: E1216 12:53:24.921813 3300 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6fb4c65f97-cxgtk" podUID="5fa2e950-afe9-46cc-8df6-b014aa90c32b" Dec 16 12:53:29.922518 containerd[1969]: time="2025-12-16T12:53:29.922447710Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 16 12:53:30.193598 containerd[1969]: time="2025-12-16T12:53:30.193458612Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:53:30.195653 containerd[1969]: time="2025-12-16T12:53:30.195605173Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 16 12:53:30.195653 containerd[1969]: time="2025-12-16T12:53:30.195642481Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 16 12:53:30.195944 kubelet[3300]: E1216 12:53:30.195857 3300 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 12:53:30.196287 kubelet[3300]: E1216 12:53:30.195902 3300 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 12:53:30.196287 
kubelet[3300]: E1216 12:53:30.196075 3300 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:47cc487b7b6d43fa90aba00c924ab97e,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4vw2m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-595b9f8d69-rdqmx_calico-system(0a0e312f-8a41-4806-bbcc-0e2c96e89982): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 16 12:53:30.198042 containerd[1969]: time="2025-12-16T12:53:30.197901557Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 16 12:53:30.228903 systemd[1]: cri-containerd-a4bfa757c7627ebe34453ec2b20043f45a2310416daa1bd854348e411628de25.scope: Deactivated successfully. Dec 16 12:53:30.229256 systemd[1]: cri-containerd-a4bfa757c7627ebe34453ec2b20043f45a2310416daa1bd854348e411628de25.scope: Consumed 11.666s CPU time, 112.3M memory peak, 48.2M read from disk. Dec 16 12:53:30.236701 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 12:53:30.236784 kernel: audit: type=1334 audit(1765889610.232:946): prog-id=156 op=UNLOAD Dec 16 12:53:30.232000 audit: BPF prog-id=156 op=UNLOAD Dec 16 12:53:30.239579 kernel: audit: type=1334 audit(1765889610.232:947): prog-id=160 op=UNLOAD Dec 16 12:53:30.232000 audit: BPF prog-id=160 op=UNLOAD Dec 16 12:53:30.388486 containerd[1969]: time="2025-12-16T12:53:30.388435878Z" level=info msg="received container exit event container_id:\"a4bfa757c7627ebe34453ec2b20043f45a2310416daa1bd854348e411628de25\" id:\"a4bfa757c7627ebe34453ec2b20043f45a2310416daa1bd854348e411628de25\" pid:3892 exit_status:1 exited_at:{seconds:1765889610 nanos:281089641}" Dec 16 12:53:30.469060 containerd[1969]: time="2025-12-16T12:53:30.468371353Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:53:30.473212 containerd[1969]: time="2025-12-16T12:53:30.473157434Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 16 12:53:30.473351 containerd[1969]: time="2025-12-16T12:53:30.473288513Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 16 12:53:30.473522 kubelet[3300]: E1216 12:53:30.473475 3300 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed 
to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 12:53:30.473606 kubelet[3300]: E1216 12:53:30.473531 3300 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 12:53:30.473779 kubelet[3300]: E1216 12:53:30.473702 3300 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4vw2m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabil
ities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-595b9f8d69-rdqmx_calico-system(0a0e312f-8a41-4806-bbcc-0e2c96e89982): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 16 12:53:30.475330 kubelet[3300]: E1216 12:53:30.475282 3300 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-595b9f8d69-rdqmx" podUID="0a0e312f-8a41-4806-bbcc-0e2c96e89982" Dec 16 12:53:30.489108 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a4bfa757c7627ebe34453ec2b20043f45a2310416daa1bd854348e411628de25-rootfs.mount: Deactivated successfully. 
Dec 16 12:53:30.608799 kubelet[3300]: I1216 12:53:30.608738 3300 scope.go:117] "RemoveContainer" containerID="a4bfa757c7627ebe34453ec2b20043f45a2310416daa1bd854348e411628de25" Dec 16 12:53:30.615502 systemd[1]: cri-containerd-be075226b52f53a519e004eaee22d1b7024b2bcb30af6be32b97739aa3f67068.scope: Deactivated successfully. Dec 16 12:53:30.616932 systemd[1]: cri-containerd-be075226b52f53a519e004eaee22d1b7024b2bcb30af6be32b97739aa3f67068.scope: Consumed 4.262s CPU time, 109.6M memory peak, 102.6M read from disk. Dec 16 12:53:30.619504 kernel: audit: type=1334 audit(1765889610.616:948): prog-id=271 op=LOAD Dec 16 12:53:30.616000 audit: BPF prog-id=271 op=LOAD Dec 16 12:53:30.618000 audit: BPF prog-id=103 op=UNLOAD Dec 16 12:53:30.623793 kernel: audit: type=1334 audit(1765889610.618:949): prog-id=103 op=UNLOAD Dec 16 12:53:30.623839 kernel: audit: type=1334 audit(1765889610.621:950): prog-id=118 op=UNLOAD Dec 16 12:53:30.621000 audit: BPF prog-id=118 op=UNLOAD Dec 16 12:53:30.623898 containerd[1969]: time="2025-12-16T12:53:30.621639566Z" level=info msg="received container exit event container_id:\"be075226b52f53a519e004eaee22d1b7024b2bcb30af6be32b97739aa3f67068\" id:\"be075226b52f53a519e004eaee22d1b7024b2bcb30af6be32b97739aa3f67068\" pid:3141 exit_status:1 exited_at:{seconds:1765889610 nanos:620068044}" Dec 16 12:53:30.625967 kernel: audit: type=1334 audit(1765889610.621:951): prog-id=122 op=UNLOAD Dec 16 12:53:30.621000 audit: BPF prog-id=122 op=UNLOAD Dec 16 12:53:30.630256 containerd[1969]: time="2025-12-16T12:53:30.630223317Z" level=info msg="CreateContainer within sandbox \"325c2f49796928a1ca5639ec88c425726fa0c14ba951cb845ce27bf321c2bdc9\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Dec 16 12:53:30.649040 containerd[1969]: time="2025-12-16T12:53:30.648328275Z" level=info msg="Container b6139625ce86a4ed9f4a98d127fcb89502f46eb6ee00f4c5de6ca00d6a5ab882: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:53:30.659867 systemd[1]: 
run-containerd-io.containerd.runtime.v2.task-k8s.io-be075226b52f53a519e004eaee22d1b7024b2bcb30af6be32b97739aa3f67068-rootfs.mount: Deactivated successfully. Dec 16 12:53:30.662733 containerd[1969]: time="2025-12-16T12:53:30.662695761Z" level=info msg="CreateContainer within sandbox \"325c2f49796928a1ca5639ec88c425726fa0c14ba951cb845ce27bf321c2bdc9\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"b6139625ce86a4ed9f4a98d127fcb89502f46eb6ee00f4c5de6ca00d6a5ab882\"" Dec 16 12:53:30.663393 containerd[1969]: time="2025-12-16T12:53:30.663358413Z" level=info msg="StartContainer for \"b6139625ce86a4ed9f4a98d127fcb89502f46eb6ee00f4c5de6ca00d6a5ab882\"" Dec 16 12:53:30.664451 containerd[1969]: time="2025-12-16T12:53:30.664419874Z" level=info msg="connecting to shim b6139625ce86a4ed9f4a98d127fcb89502f46eb6ee00f4c5de6ca00d6a5ab882" address="unix:///run/containerd/s/aef6c923143fb985842ee0d4f7790b100270ece5d6261f5c2b1ead42be0bf4b5" protocol=ttrpc version=3 Dec 16 12:53:30.704505 systemd[1]: Started cri-containerd-b6139625ce86a4ed9f4a98d127fcb89502f46eb6ee00f4c5de6ca00d6a5ab882.scope - libcontainer container b6139625ce86a4ed9f4a98d127fcb89502f46eb6ee00f4c5de6ca00d6a5ab882. 
Dec 16 12:53:30.738000 audit: BPF prog-id=272 op=LOAD Dec 16 12:53:30.740934 kernel: audit: type=1334 audit(1765889610.738:952): prog-id=272 op=LOAD Dec 16 12:53:30.740000 audit: BPF prog-id=273 op=LOAD Dec 16 12:53:30.747829 kernel: audit: type=1334 audit(1765889610.740:953): prog-id=273 op=LOAD Dec 16 12:53:30.747924 kernel: audit: type=1300 audit(1765889610.740:953): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=3586 pid=6362 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:53:30.740000 audit[6362]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=3586 pid=6362 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:53:30.753936 kernel: audit: type=1327 audit(1765889610.740:953): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6236313339363235636538366134656439663461393864313237666362 Dec 16 12:53:30.740000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6236313339363235636538366134656439663461393864313237666362 Dec 16 12:53:30.740000 audit: BPF prog-id=273 op=UNLOAD Dec 16 12:53:30.740000 audit[6362]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3586 pid=6362 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 
12:53:30.740000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6236313339363235636538366134656439663461393864313237666362 Dec 16 12:53:30.744000 audit: BPF prog-id=274 op=LOAD Dec 16 12:53:30.744000 audit[6362]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3586 pid=6362 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:53:30.744000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6236313339363235636538366134656439663461393864313237666362 Dec 16 12:53:30.744000 audit: BPF prog-id=275 op=LOAD Dec 16 12:53:30.744000 audit[6362]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3586 pid=6362 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:53:30.744000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6236313339363235636538366134656439663461393864313237666362 Dec 16 12:53:30.744000 audit: BPF prog-id=275 op=UNLOAD Dec 16 12:53:30.744000 audit[6362]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3586 pid=6362 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:53:30.744000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6236313339363235636538366134656439663461393864313237666362 Dec 16 12:53:30.744000 audit: BPF prog-id=274 op=UNLOAD Dec 16 12:53:30.744000 audit[6362]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3586 pid=6362 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:53:30.744000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6236313339363235636538366134656439663461393864313237666362 Dec 16 12:53:30.744000 audit: BPF prog-id=276 op=LOAD Dec 16 12:53:30.744000 audit[6362]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3586 pid=6362 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:53:30.744000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6236313339363235636538366134656439663461393864313237666362 Dec 16 12:53:30.779711 containerd[1969]: time="2025-12-16T12:53:30.779466267Z" level=info msg="StartContainer for \"b6139625ce86a4ed9f4a98d127fcb89502f46eb6ee00f4c5de6ca00d6a5ab882\" returns successfully" Dec 16 12:53:31.573075 kubelet[3300]: I1216 12:53:31.573019 3300 scope.go:117] 
"RemoveContainer" containerID="be075226b52f53a519e004eaee22d1b7024b2bcb30af6be32b97739aa3f67068" Dec 16 12:53:31.575125 containerd[1969]: time="2025-12-16T12:53:31.575079450Z" level=info msg="CreateContainer within sandbox \"eb3deb725e374506e5f16ba373f63b1530bddda23191f54d75d8f7fc6b7e98cf\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Dec 16 12:53:31.597934 containerd[1969]: time="2025-12-16T12:53:31.595313191Z" level=info msg="Container 54e0c09f3063b78a72b4252df8d9975517778bddc646693cda3cf07ee32416a8: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:53:31.615991 containerd[1969]: time="2025-12-16T12:53:31.615945122Z" level=info msg="CreateContainer within sandbox \"eb3deb725e374506e5f16ba373f63b1530bddda23191f54d75d8f7fc6b7e98cf\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"54e0c09f3063b78a72b4252df8d9975517778bddc646693cda3cf07ee32416a8\"" Dec 16 12:53:31.617253 containerd[1969]: time="2025-12-16T12:53:31.616563840Z" level=info msg="StartContainer for \"54e0c09f3063b78a72b4252df8d9975517778bddc646693cda3cf07ee32416a8\"" Dec 16 12:53:31.617870 containerd[1969]: time="2025-12-16T12:53:31.617833643Z" level=info msg="connecting to shim 54e0c09f3063b78a72b4252df8d9975517778bddc646693cda3cf07ee32416a8" address="unix:///run/containerd/s/67e5560b8eabc64d1d8fda1e469d862b9c7c952b00a0e59467b162f3f3bbec4e" protocol=ttrpc version=3 Dec 16 12:53:31.637163 systemd[1]: Started cri-containerd-54e0c09f3063b78a72b4252df8d9975517778bddc646693cda3cf07ee32416a8.scope - libcontainer container 54e0c09f3063b78a72b4252df8d9975517778bddc646693cda3cf07ee32416a8. 
Dec 16 12:53:31.654000 audit: BPF prog-id=277 op=LOAD Dec 16 12:53:31.655000 audit: BPF prog-id=278 op=LOAD Dec 16 12:53:31.655000 audit[6396]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2993 pid=6396 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:53:31.655000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3534653063303966333036336237386137326234323532646638643939 Dec 16 12:53:31.655000 audit: BPF prog-id=278 op=UNLOAD Dec 16 12:53:31.655000 audit[6396]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2993 pid=6396 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:53:31.655000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3534653063303966333036336237386137326234323532646638643939 Dec 16 12:53:31.655000 audit: BPF prog-id=279 op=LOAD Dec 16 12:53:31.655000 audit[6396]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2993 pid=6396 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:53:31.655000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3534653063303966333036336237386137326234323532646638643939 Dec 16 12:53:31.655000 audit: BPF prog-id=280 op=LOAD Dec 16 12:53:31.655000 audit[6396]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2993 pid=6396 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:53:31.655000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3534653063303966333036336237386137326234323532646638643939 Dec 16 12:53:31.655000 audit: BPF prog-id=280 op=UNLOAD Dec 16 12:53:31.655000 audit[6396]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2993 pid=6396 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:53:31.655000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3534653063303966333036336237386137326234323532646638643939 Dec 16 12:53:31.655000 audit: BPF prog-id=279 op=UNLOAD Dec 16 12:53:31.655000 audit[6396]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2993 pid=6396 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 
12:53:31.655000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3534653063303966333036336237386137326234323532646638643939 Dec 16 12:53:31.655000 audit: BPF prog-id=281 op=LOAD Dec 16 12:53:31.655000 audit[6396]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2993 pid=6396 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:53:31.655000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3534653063303966333036336237386137326234323532646638643939 Dec 16 12:53:31.702410 containerd[1969]: time="2025-12-16T12:53:31.702366514Z" level=info msg="StartContainer for \"54e0c09f3063b78a72b4252df8d9975517778bddc646693cda3cf07ee32416a8\" returns successfully" Dec 16 12:53:31.831393 kubelet[3300]: E1216 12:53:31.831263 3300 controller.go:195] "Failed to update lease" err="Put \"https://172.31.17.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-17-11?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 16 12:53:31.922210 kubelet[3300]: E1216 12:53:31.922145 3300 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" 
pod="calico-apiserver/calico-apiserver-7b5d97d74b-j9kr5" podUID="07dabb60-b494-485b-be91-7522183aff41" Dec 16 12:53:33.922408 kubelet[3300]: E1216 12:53:33.922366 3300 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-8gcbc" podUID="39a10cdc-f6c5-430e-ac11-7f9183d0c949" Dec 16 12:53:35.426782 systemd[1]: cri-containerd-b9c63722573e7d4cbe093ec6202dc534de2c4d2c6ec3bb61cc68b43bc832d846.scope: Deactivated successfully. Dec 16 12:53:35.428054 systemd[1]: cri-containerd-b9c63722573e7d4cbe093ec6202dc534de2c4d2c6ec3bb61cc68b43bc832d846.scope: Consumed 2.294s CPU time, 38.5M memory peak, 39.3M read from disk. 
Dec 16 12:53:35.431194 kernel: kauditd_printk_skb: 40 callbacks suppressed Dec 16 12:53:35.431283 kernel: audit: type=1334 audit(1765889615.428:968): prog-id=282 op=LOAD Dec 16 12:53:35.428000 audit: BPF prog-id=282 op=LOAD Dec 16 12:53:35.432494 containerd[1969]: time="2025-12-16T12:53:35.432454277Z" level=info msg="received container exit event container_id:\"b9c63722573e7d4cbe093ec6202dc534de2c4d2c6ec3bb61cc68b43bc832d846\" id:\"b9c63722573e7d4cbe093ec6202dc534de2c4d2c6ec3bb61cc68b43bc832d846\" pid:3127 exit_status:1 exited_at:{seconds:1765889615 nanos:432118382}" Dec 16 12:53:35.428000 audit: BPF prog-id=93 op=UNLOAD Dec 16 12:53:35.436005 kernel: audit: type=1334 audit(1765889615.428:969): prog-id=93 op=UNLOAD Dec 16 12:53:35.432000 audit: BPF prog-id=108 op=UNLOAD Dec 16 12:53:35.437924 kernel: audit: type=1334 audit(1765889615.432:970): prog-id=108 op=UNLOAD Dec 16 12:53:35.432000 audit: BPF prog-id=112 op=UNLOAD Dec 16 12:53:35.439957 kernel: audit: type=1334 audit(1765889615.432:971): prog-id=112 op=UNLOAD Dec 16 12:53:35.475521 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b9c63722573e7d4cbe093ec6202dc534de2c4d2c6ec3bb61cc68b43bc832d846-rootfs.mount: Deactivated successfully. 
Dec 16 12:53:35.586930 kubelet[3300]: I1216 12:53:35.586876 3300 scope.go:117] "RemoveContainer" containerID="b9c63722573e7d4cbe093ec6202dc534de2c4d2c6ec3bb61cc68b43bc832d846" Dec 16 12:53:35.589807 containerd[1969]: time="2025-12-16T12:53:35.589614479Z" level=info msg="CreateContainer within sandbox \"51dc2255cb111bd00a51931e0e52a571c44b1a7d2d8918b53e9ccc83b2cf16af\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Dec 16 12:53:35.607871 containerd[1969]: time="2025-12-16T12:53:35.607834748Z" level=info msg="Container 26a8c731df07867ffca5809e150294b05a32d4b0c91d5fc9f98dc97217e23249: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:53:35.622569 containerd[1969]: time="2025-12-16T12:53:35.622523984Z" level=info msg="CreateContainer within sandbox \"51dc2255cb111bd00a51931e0e52a571c44b1a7d2d8918b53e9ccc83b2cf16af\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"26a8c731df07867ffca5809e150294b05a32d4b0c91d5fc9f98dc97217e23249\"" Dec 16 12:53:35.623266 containerd[1969]: time="2025-12-16T12:53:35.623234451Z" level=info msg="StartContainer for \"26a8c731df07867ffca5809e150294b05a32d4b0c91d5fc9f98dc97217e23249\"" Dec 16 12:53:35.624451 containerd[1969]: time="2025-12-16T12:53:35.624417092Z" level=info msg="connecting to shim 26a8c731df07867ffca5809e150294b05a32d4b0c91d5fc9f98dc97217e23249" address="unix:///run/containerd/s/2e21a8e396de733680d4baae5d95f7bf589b8ad9391f9d79495d569123797f28" protocol=ttrpc version=3 Dec 16 12:53:35.648294 systemd[1]: Started cri-containerd-26a8c731df07867ffca5809e150294b05a32d4b0c91d5fc9f98dc97217e23249.scope - libcontainer container 26a8c731df07867ffca5809e150294b05a32d4b0c91d5fc9f98dc97217e23249. 
Dec 16 12:53:35.662000 audit: BPF prog-id=283 op=LOAD Dec 16 12:53:35.664946 kernel: audit: type=1334 audit(1765889615.662:972): prog-id=283 op=LOAD Dec 16 12:53:35.664000 audit: BPF prog-id=284 op=LOAD Dec 16 12:53:35.664000 audit[6469]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0238 a2=98 a3=0 items=0 ppid=2986 pid=6469 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:53:35.667820 kernel: audit: type=1334 audit(1765889615.664:973): prog-id=284 op=LOAD Dec 16 12:53:35.667893 kernel: audit: type=1300 audit(1765889615.664:973): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0238 a2=98 a3=0 items=0 ppid=2986 pid=6469 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:53:35.664000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3236613863373331646630373836376666636135383039653135303239 Dec 16 12:53:35.677000 kernel: audit: type=1327 audit(1765889615.664:973): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3236613863373331646630373836376666636135383039653135303239 Dec 16 12:53:35.664000 audit: BPF prog-id=284 op=UNLOAD Dec 16 12:53:35.683435 kernel: audit: type=1334 audit(1765889615.664:974): prog-id=284 op=UNLOAD Dec 16 12:53:35.683553 kernel: audit: type=1300 audit(1765889615.664:974): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2986 pid=6469 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:53:35.664000 audit[6469]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2986 pid=6469 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:53:35.664000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3236613863373331646630373836376666636135383039653135303239 Dec 16 12:53:35.664000 audit: BPF prog-id=285 op=LOAD Dec 16 12:53:35.664000 audit[6469]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0488 a2=98 a3=0 items=0 ppid=2986 pid=6469 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:53:35.664000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3236613863373331646630373836376666636135383039653135303239 Dec 16 12:53:35.664000 audit: BPF prog-id=286 op=LOAD Dec 16 12:53:35.664000 audit[6469]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001b0218 a2=98 a3=0 items=0 ppid=2986 pid=6469 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:53:35.664000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3236613863373331646630373836376666636135383039653135303239 Dec 16 12:53:35.664000 audit: BPF prog-id=286 op=UNLOAD Dec 16 12:53:35.664000 audit[6469]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2986 pid=6469 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:53:35.664000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3236613863373331646630373836376666636135383039653135303239 Dec 16 12:53:35.664000 audit: BPF prog-id=285 op=UNLOAD Dec 16 12:53:35.664000 audit[6469]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2986 pid=6469 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:53:35.664000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3236613863373331646630373836376666636135383039653135303239 Dec 16 12:53:35.664000 audit: BPF prog-id=287 op=LOAD Dec 16 12:53:35.664000 audit[6469]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b06e8 a2=98 a3=0 items=0 ppid=2986 pid=6469 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 
12:53:35.664000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3236613863373331646630373836376666636135383039653135303239 Dec 16 12:53:35.714571 containerd[1969]: time="2025-12-16T12:53:35.714472006Z" level=info msg="StartContainer for \"26a8c731df07867ffca5809e150294b05a32d4b0c91d5fc9f98dc97217e23249\" returns successfully" Dec 16 12:53:35.922671 kubelet[3300]: E1216 12:53:35.922629 3300 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6fb4c65f97-rg8xf" podUID="631dcf21-0085-4dd5-b7a8-35fb5c10f8ab" Dec 16 12:53:35.923049 kubelet[3300]: E1216 12:53:35.923022 3300 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" 
pod="calico-system/csi-node-driver-l2s79" podUID="a2c9a6a7-124f-4bab-89b2-0cbb2fd935f0" Dec 16 12:53:38.923199 kubelet[3300]: E1216 12:53:38.922626 3300 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6fb4c65f97-cxgtk" podUID="5fa2e950-afe9-46cc-8df6-b014aa90c32b" Dec 16 12:53:38.923703 containerd[1969]: time="2025-12-16T12:53:38.922901143Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 16 12:53:39.158212 containerd[1969]: time="2025-12-16T12:53:39.158160567Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:53:39.160359 containerd[1969]: time="2025-12-16T12:53:39.160311386Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 16 12:53:39.160515 containerd[1969]: time="2025-12-16T12:53:39.160407756Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 16 12:53:39.160594 kubelet[3300]: E1216 12:53:39.160558 3300 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 12:53:39.160695 kubelet[3300]: E1216 12:53:39.160608 3300 
kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 12:53:39.161140 kubelet[3300]: E1216 12:53:39.161074 3300 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z28sw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-5b97648649-sj8kb_calico-system(97ebcca4-f43d-4a70-b294-e93b18442671): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 16 12:53:39.170824 kubelet[3300]: E1216 12:53:39.169996 3300 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5b97648649-sj8kb" podUID="97ebcca4-f43d-4a70-b294-e93b18442671" Dec 16 12:53:41.832272 kubelet[3300]: 
E1216 12:53:41.832175 3300 controller.go:195] "Failed to update lease" err="Put \"https://172.31.17.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-17-11?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 16 12:53:41.922634 kubelet[3300]: E1216 12:53:41.922584 3300 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-595b9f8d69-rdqmx" podUID="0a0e312f-8a41-4806-bbcc-0e2c96e89982" Dec 16 12:53:42.269743 systemd[1]: cri-containerd-b6139625ce86a4ed9f4a98d127fcb89502f46eb6ee00f4c5de6ca00d6a5ab882.scope: Deactivated successfully. Dec 16 12:53:42.270332 systemd[1]: cri-containerd-b6139625ce86a4ed9f4a98d127fcb89502f46eb6ee00f4c5de6ca00d6a5ab882.scope: Consumed 262ms CPU time, 65.8M memory peak, 31M read from disk. 
Dec 16 12:53:42.271296 containerd[1969]: time="2025-12-16T12:53:42.271166914Z" level=info msg="received container exit event container_id:\"b6139625ce86a4ed9f4a98d127fcb89502f46eb6ee00f4c5de6ca00d6a5ab882\" id:\"b6139625ce86a4ed9f4a98d127fcb89502f46eb6ee00f4c5de6ca00d6a5ab882\" pid:6376 exit_status:1 exited_at:{seconds:1765889622 nanos:270597875}" Dec 16 12:53:42.278650 kernel: kauditd_printk_skb: 16 callbacks suppressed Dec 16 12:53:42.278770 kernel: audit: type=1334 audit(1765889622.274:980): prog-id=272 op=UNLOAD Dec 16 12:53:42.274000 audit: BPF prog-id=272 op=UNLOAD Dec 16 12:53:42.274000 audit: BPF prog-id=276 op=UNLOAD Dec 16 12:53:42.279306 kernel: audit: type=1334 audit(1765889622.274:981): prog-id=276 op=UNLOAD Dec 16 12:53:42.301513 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b6139625ce86a4ed9f4a98d127fcb89502f46eb6ee00f4c5de6ca00d6a5ab882-rootfs.mount: Deactivated successfully. Dec 16 12:53:42.618120 kubelet[3300]: I1216 12:53:42.618002 3300 scope.go:117] "RemoveContainer" containerID="b6139625ce86a4ed9f4a98d127fcb89502f46eb6ee00f4c5de6ca00d6a5ab882" Dec 16 12:53:42.618632 kubelet[3300]: E1216 12:53:42.618592 3300 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=tigera-operator pod=tigera-operator-7dcd859c48-jfjh9_tigera-operator(43406cbd-716e-49b7-a43e-8de582acff72)\"" pod="tigera-operator/tigera-operator-7dcd859c48-jfjh9" podUID="43406cbd-716e-49b7-a43e-8de582acff72" Dec 16 12:53:42.624790 kubelet[3300]: I1216 12:53:42.624723 3300 scope.go:117] "RemoveContainer" containerID="a4bfa757c7627ebe34453ec2b20043f45a2310416daa1bd854348e411628de25" Dec 16 12:53:42.688495 containerd[1969]: time="2025-12-16T12:53:42.688447071Z" level=info msg="RemoveContainer for \"a4bfa757c7627ebe34453ec2b20043f45a2310416daa1bd854348e411628de25\"" Dec 16 12:53:42.732692 containerd[1969]: time="2025-12-16T12:53:42.732638930Z" 
level=info msg="RemoveContainer for \"a4bfa757c7627ebe34453ec2b20043f45a2310416daa1bd854348e411628de25\" returns successfully" Dec 16 12:53:42.922123 containerd[1969]: time="2025-12-16T12:53:42.921990997Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 12:53:43.194324 containerd[1969]: time="2025-12-16T12:53:43.194200134Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:53:43.196458 containerd[1969]: time="2025-12-16T12:53:43.196401226Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 12:53:43.196632 containerd[1969]: time="2025-12-16T12:53:43.196498518Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 12:53:43.196735 kubelet[3300]: E1216 12:53:43.196662 3300 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:53:43.196735 kubelet[3300]: E1216 12:53:43.196714 3300 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:53:43.197190 kubelet[3300]: E1216 12:53:43.197005 3300 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-spx94,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7b5d97d74b-j9kr5_calico-apiserver(07dabb60-b494-485b-be91-7522183aff41): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 12:53:43.198285 kubelet[3300]: E1216 12:53:43.198247 3300 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b5d97d74b-j9kr5" podUID="07dabb60-b494-485b-be91-7522183aff41" Dec 16 12:53:46.923220 containerd[1969]: time="2025-12-16T12:53:46.922946313Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 16 12:53:47.161161 containerd[1969]: time="2025-12-16T12:53:47.161104533Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:53:47.163678 containerd[1969]: time="2025-12-16T12:53:47.163363843Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 16 12:53:47.163845 containerd[1969]: time="2025-12-16T12:53:47.163453632Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 16 12:53:47.164224 kubelet[3300]: E1216 12:53:47.164165 3300 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 12:53:47.164224 kubelet[3300]: E1216 12:53:47.164223 3300 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: 
code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 12:53:47.164674 kubelet[3300]: E1216 12:53:47.164410 3300 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hrjjg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-8gcbc_calico-system(39a10cdc-f6c5-430e-ac11-7f9183d0c949): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 16 12:53:47.165664 kubelet[3300]: E1216 12:53:47.165620 3300 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-8gcbc" podUID="39a10cdc-f6c5-430e-ac11-7f9183d0c949" Dec 16 12:53:49.922616 containerd[1969]: time="2025-12-16T12:53:49.922573118Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 12:53:50.213491 containerd[1969]: time="2025-12-16T12:53:50.213368627Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:53:50.215562 containerd[1969]: time="2025-12-16T12:53:50.215510211Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 12:53:50.215684 containerd[1969]: time="2025-12-16T12:53:50.215595473Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 12:53:50.215795 kubelet[3300]: E1216 12:53:50.215746 3300 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:53:50.216247 kubelet[3300]: E1216 12:53:50.215816 3300 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:53:50.216316 containerd[1969]: time="2025-12-16T12:53:50.216098124Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 16 12:53:50.216464 kubelet[3300]: E1216 12:53:50.216419 3300 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hzbm2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6fb4c65f97-cxgtk_calico-apiserver(5fa2e950-afe9-46cc-8df6-b014aa90c32b): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 12:53:50.217699 kubelet[3300]: E1216 12:53:50.217635 3300 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6fb4c65f97-cxgtk" podUID="5fa2e950-afe9-46cc-8df6-b014aa90c32b" Dec 16 12:53:50.575714 containerd[1969]: time="2025-12-16T12:53:50.575329750Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:53:50.577600 containerd[1969]: time="2025-12-16T12:53:50.577552626Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 16 12:53:50.577732 containerd[1969]: time="2025-12-16T12:53:50.577641694Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 16 12:53:50.577834 kubelet[3300]: E1216 12:53:50.577786 3300 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 12:53:50.577883 kubelet[3300]: E1216 12:53:50.577835 3300 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 12:53:50.578032 kubelet[3300]: E1216 12:53:50.577995 3300 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zn6r2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
csi-node-driver-l2s79_calico-system(a2c9a6a7-124f-4bab-89b2-0cbb2fd935f0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 16 12:53:50.580252 containerd[1969]: time="2025-12-16T12:53:50.580219788Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 16 12:53:50.834866 containerd[1969]: time="2025-12-16T12:53:50.834714476Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:53:50.836927 containerd[1969]: time="2025-12-16T12:53:50.836853505Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 16 12:53:50.837069 containerd[1969]: time="2025-12-16T12:53:50.836961852Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 16 12:53:50.837175 kubelet[3300]: E1216 12:53:50.837135 3300 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 12:53:50.837222 kubelet[3300]: E1216 12:53:50.837186 3300 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 
12:53:50.837399 kubelet[3300]: E1216 12:53:50.837332 3300 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zn6r2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
csi-node-driver-l2s79_calico-system(a2c9a6a7-124f-4bab-89b2-0cbb2fd935f0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 16 12:53:50.838582 kubelet[3300]: E1216 12:53:50.838530 3300 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-l2s79" podUID="a2c9a6a7-124f-4bab-89b2-0cbb2fd935f0" Dec 16 12:53:50.921999 containerd[1969]: time="2025-12-16T12:53:50.921888865Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 12:53:51.175980 containerd[1969]: time="2025-12-16T12:53:51.175935889Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:53:51.178062 containerd[1969]: time="2025-12-16T12:53:51.178006043Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 12:53:51.178234 containerd[1969]: time="2025-12-16T12:53:51.178084168Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 12:53:51.178299 kubelet[3300]: E1216 
12:53:51.178222 3300 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:53:51.178299 kubelet[3300]: E1216 12:53:51.178264 3300 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:53:51.178443 kubelet[3300]: E1216 12:53:51.178398 3300 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gm8jc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6fb4c65f97-rg8xf_calico-apiserver(631dcf21-0085-4dd5-b7a8-35fb5c10f8ab): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 12:53:51.179605 kubelet[3300]: E1216 12:53:51.179560 3300 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6fb4c65f97-rg8xf" podUID="631dcf21-0085-4dd5-b7a8-35fb5c10f8ab" Dec 16 12:53:51.868626 kubelet[3300]: E1216 12:53:51.868559 3300 controller.go:195] "Failed to update lease" err="Put \"https://172.31.17.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-17-11?timeout=10s\": context deadline exceeded" Dec 16 12:53:51.922417 kubelet[3300]: E1216 
12:53:51.922341 3300 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5b97648649-sj8kb" podUID="97ebcca4-f43d-4a70-b294-e93b18442671"