Dec 12 18:19:55.511548 kernel: Linux version 6.12.61-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Fri Dec 12 15:17:57 -00 2025 Dec 12 18:19:55.511614 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=4dd8de2ff094d97322e7371b16ddee5fc8348868bcdd9ec7bcd11ea9d3933fee Dec 12 18:19:55.511629 kernel: BIOS-provided physical RAM map: Dec 12 18:19:55.511638 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Dec 12 18:19:55.511645 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved Dec 12 18:19:55.511653 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000044fdfff] usable Dec 12 18:19:55.511662 kernel: BIOS-e820: [mem 0x00000000044fe000-0x00000000048fdfff] reserved Dec 12 18:19:55.511670 kernel: BIOS-e820: [mem 0x00000000048fe000-0x000000003ff1efff] usable Dec 12 18:19:55.511677 kernel: BIOS-e820: [mem 0x000000003ff1f000-0x000000003ffc8fff] reserved Dec 12 18:19:55.511687 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data Dec 12 18:19:55.511695 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS Dec 12 18:19:55.511702 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable Dec 12 18:19:55.511710 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable Dec 12 18:19:55.511718 kernel: printk: legacy bootconsole [earlyser0] enabled Dec 12 18:19:55.511730 kernel: NX (Execute Disable) protection: active Dec 12 18:19:55.511738 kernel: APIC: Static calls initialized Dec 12 18:19:55.511746 kernel: efi: EFI v2.7 by Microsoft Dec 12 18:19:55.511755 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff88000 SMBIOS 3.0=0x3ff86000 MEMATTR=0x3eaa1018 RNG=0x3ffd2018 Dec 12 18:19:55.511763 kernel: random: crng init done Dec 12 18:19:55.511772 kernel: secureboot: Secure boot disabled Dec 12 18:19:55.511780 kernel: SMBIOS 3.1.0 present. 
Dec 12 18:19:55.511788 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 07/25/2025 Dec 12 18:19:55.511797 kernel: DMI: Memory slots populated: 2/2 Dec 12 18:19:55.511805 kernel: Hypervisor detected: Microsoft Hyper-V Dec 12 18:19:55.511814 kernel: Hyper-V: privilege flags low 0xae7f, high 0x3b8030, hints 0x9e4e24, misc 0xe0bed7b2 Dec 12 18:19:55.511823 kernel: Hyper-V: Nested features: 0x3e0101 Dec 12 18:19:55.511831 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40 Dec 12 18:19:55.511839 kernel: Hyper-V: Using hypercall for remote TLB flush Dec 12 18:19:55.511848 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Dec 12 18:19:55.511856 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Dec 12 18:19:55.511865 kernel: tsc: Detected 2299.999 MHz processor Dec 12 18:19:55.511873 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Dec 12 18:19:55.511883 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Dec 12 18:19:55.511892 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x10000000000 Dec 12 18:19:55.511903 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Dec 12 18:19:55.511912 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Dec 12 18:19:55.511921 kernel: e820: update [mem 0x48000000-0xffffffff] usable ==> reserved Dec 12 18:19:55.511930 kernel: last_pfn = 0x40000 max_arch_pfn = 0x10000000000 Dec 12 18:19:55.511939 kernel: Using GB pages for direct mapping Dec 12 18:19:55.511948 kernel: ACPI: Early table checksum verification disabled Dec 12 18:19:55.511963 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL) Dec 12 18:19:55.511972 kernel: ACPI: XSDT 0x000000003FFF90E8 00005C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 12 18:19:55.511982 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 12 18:19:55.511990 kernel: ACPI: DSDT 0x000000003FFD6000 01E22B (v02 MSFTVM DSDT01 00000001 INTL 20230628) Dec 12 18:19:55.512000 kernel: ACPI: FACS 0x000000003FFFE000 000040 Dec 12 18:19:55.512009 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 12 18:19:55.512020 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 12 18:19:55.512029 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 12 18:19:55.512038 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v05 HVLITE HVLITETB 00000000 MSHV 00000000) Dec 12 18:19:55.512047 kernel: ACPI: SRAT 0x000000003FFD4000 0000A0 (v03 HVLITE HVLITETB 00000000 MSHV 00000000) Dec 12 18:19:55.512056 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 12 18:19:55.512066 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113] Dec 12 18:19:55.512077 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff422a] Dec 12 18:19:55.512087 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f] Dec 12 18:19:55.512096 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063] Dec 12 18:19:55.512105 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f] Dec 12 18:19:55.512115 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027] Dec 12 18:19:55.512124 kernel: ACPI: Reserving APIC table memory at [mem 
0x3ffd5000-0x3ffd5057] Dec 12 18:19:55.512134 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd409f] Dec 12 18:19:55.512144 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037] Dec 12 18:19:55.512154 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] Dec 12 18:19:55.512163 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] Dec 12 18:19:55.512173 kernel: NUMA: Node 0 [mem 0x00001000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00001000-0x2bfffffff] Dec 12 18:19:55.512182 kernel: NODE_DATA(0) allocated [mem 0x2bfff8dc0-0x2bfffffff] Dec 12 18:19:55.512192 kernel: Zone ranges: Dec 12 18:19:55.512201 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Dec 12 18:19:55.512212 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Dec 12 18:19:55.512221 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff] Dec 12 18:19:55.512230 kernel: Device empty Dec 12 18:19:55.512239 kernel: Movable zone start for each node Dec 12 18:19:55.512249 kernel: Early memory node ranges Dec 12 18:19:55.512258 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Dec 12 18:19:55.512268 kernel: node 0: [mem 0x0000000000100000-0x00000000044fdfff] Dec 12 18:19:55.512279 kernel: node 0: [mem 0x00000000048fe000-0x000000003ff1efff] Dec 12 18:19:55.512288 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff] Dec 12 18:19:55.512297 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff] Dec 12 18:19:55.512306 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff] Dec 12 18:19:55.512316 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Dec 12 18:19:55.512325 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Dec 12 18:19:55.512334 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges Dec 12 18:19:55.512345 kernel: On node 0, zone DMA32: 224 pages in unavailable ranges Dec 12 18:19:55.512355 kernel: ACPI: PM-Timer IO Port: 0x408 Dec 12 18:19:55.512364 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1]) Dec 12 18:19:55.512374 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Dec 12 18:19:55.512383 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Dec 12 18:19:55.512392 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Dec 12 18:19:55.512402 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200 Dec 12 18:19:55.512413 kernel: TSC deadline timer available Dec 12 18:19:55.512422 kernel: CPU topo: Max. logical packages: 1 Dec 12 18:19:55.512432 kernel: CPU topo: Max. logical dies: 1 Dec 12 18:19:55.512441 kernel: CPU topo: Max. dies per package: 1 Dec 12 18:19:55.512450 kernel: CPU topo: Max. threads per core: 2 Dec 12 18:19:55.512459 kernel: CPU topo: Num. cores per package: 1 Dec 12 18:19:55.512469 kernel: CPU topo: Num. 
threads per package: 2 Dec 12 18:19:55.512478 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs Dec 12 18:19:55.512490 kernel: [mem 0x40000000-0xffffffff] available for PCI devices Dec 12 18:19:55.512499 kernel: Booting paravirtualized kernel on Hyper-V Dec 12 18:19:55.512509 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Dec 12 18:19:55.512533 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Dec 12 18:19:55.512543 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576 Dec 12 18:19:55.512553 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152 Dec 12 18:19:55.512562 kernel: pcpu-alloc: [0] 0 1 Dec 12 18:19:55.512573 kernel: Hyper-V: PV spinlocks enabled Dec 12 18:19:55.512583 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Dec 12 18:19:55.512594 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=4dd8de2ff094d97322e7371b16ddee5fc8348868bcdd9ec7bcd11ea9d3933fee Dec 12 18:19:55.512604 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Dec 12 18:19:55.512614 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Dec 12 18:19:55.512623 kernel: Fallback order for Node 0: 0 Dec 12 18:19:55.512634 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2095807 Dec 12 18:19:55.512644 kernel: Policy zone: Normal Dec 12 18:19:55.512653 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Dec 12 18:19:55.512662 kernel: software IO TLB: area num 2. Dec 12 18:19:55.512672 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Dec 12 18:19:55.512681 kernel: ftrace: allocating 40103 entries in 157 pages Dec 12 18:19:55.512690 kernel: ftrace: allocated 157 pages with 5 groups Dec 12 18:19:55.512701 kernel: Dynamic Preempt: voluntary Dec 12 18:19:55.512711 kernel: rcu: Preemptible hierarchical RCU implementation. Dec 12 18:19:55.512721 kernel: rcu: RCU event tracing is enabled. Dec 12 18:19:55.512738 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Dec 12 18:19:55.512750 kernel: Trampoline variant of Tasks RCU enabled. Dec 12 18:19:55.512760 kernel: Rude variant of Tasks RCU enabled. Dec 12 18:19:55.512770 kernel: Tracing variant of Tasks RCU enabled. Dec 12 18:19:55.512780 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Dec 12 18:19:55.512790 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Dec 12 18:19:55.512800 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Dec 12 18:19:55.512812 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Dec 12 18:19:55.512822 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Dec 12 18:19:55.512832 kernel: Using NULL legacy PIC Dec 12 18:19:55.512844 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0 Dec 12 18:19:55.512854 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Dec 12 18:19:55.512864 kernel: Console: colour dummy device 80x25 Dec 12 18:19:55.512873 kernel: printk: legacy console [tty1] enabled Dec 12 18:19:55.512883 kernel: printk: legacy console [ttyS0] enabled Dec 12 18:19:55.512893 kernel: printk: legacy bootconsole [earlyser0] disabled Dec 12 18:19:55.512903 kernel: ACPI: Core revision 20240827 Dec 12 18:19:55.512913 kernel: Failed to register legacy timer interrupt Dec 12 18:19:55.512925 kernel: APIC: Switch to symmetric I/O mode setup Dec 12 18:19:55.512935 kernel: x2apic enabled Dec 12 18:19:55.512945 kernel: APIC: Switched APIC routing to: physical x2apic Dec 12 18:19:55.512955 kernel: Hyper-V: Host Build 10.0.26100.1448-1-0 Dec 12 18:19:55.512965 kernel: Hyper-V: enabling crash_kexec_post_notifiers Dec 12 18:19:55.512975 kernel: Hyper-V: Disabling IBT because of Hyper-V bug Dec 12 18:19:55.512985 kernel: Hyper-V: Using IPI hypercalls Dec 12 18:19:55.512997 kernel: APIC: send_IPI() replaced with hv_send_ipi() Dec 12 18:19:55.513007 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask() Dec 12 18:19:55.513016 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself() Dec 12 18:19:55.513026 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself() Dec 12 18:19:55.513036 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all() Dec 12 18:19:55.513045 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self() Dec 12 18:19:55.513055 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2127345424d, max_idle_ns: 440795318347 ns Dec 12 18:19:55.513067 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 4599.99 BogoMIPS (lpj=2299999) Dec 12 18:19:55.513077 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Dec 12 18:19:55.513086 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Dec 12 18:19:55.513095 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Dec 12 18:19:55.513105 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Dec 12 18:19:55.513114 kernel: Spectre V2 : Mitigation: Retpolines Dec 12 18:19:55.513123 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Dec 12 18:19:55.513132 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! 
Dec 12 18:19:55.513143 kernel: RETBleed: Vulnerable Dec 12 18:19:55.513152 kernel: Speculative Store Bypass: Vulnerable Dec 12 18:19:55.513161 kernel: active return thunk: its_return_thunk Dec 12 18:19:55.513170 kernel: ITS: Mitigation: Aligned branch/return thunks Dec 12 18:19:55.513180 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Dec 12 18:19:55.513189 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Dec 12 18:19:55.513198 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Dec 12 18:19:55.513207 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Dec 12 18:19:55.513216 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Dec 12 18:19:55.513225 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Dec 12 18:19:55.513236 kernel: x86/fpu: Supporting XSAVE feature 0x800: 'Control-flow User registers' Dec 12 18:19:55.513246 kernel: x86/fpu: Supporting XSAVE feature 0x20000: 'AMX Tile config' Dec 12 18:19:55.513255 kernel: x86/fpu: Supporting XSAVE feature 0x40000: 'AMX Tile data' Dec 12 18:19:55.513264 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Dec 12 18:19:55.513273 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64 Dec 12 18:19:55.513282 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512 Dec 12 18:19:55.513291 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024 Dec 12 18:19:55.513300 kernel: x86/fpu: xstate_offset[11]: 2432, xstate_sizes[11]: 16 Dec 12 18:19:55.513309 kernel: x86/fpu: xstate_offset[17]: 2496, xstate_sizes[17]: 64 Dec 12 18:19:55.513318 kernel: x86/fpu: xstate_offset[18]: 2560, xstate_sizes[18]: 8192 Dec 12 18:19:55.513328 kernel: x86/fpu: Enabled xstate features 0x608e7, context size is 10752 bytes, using 'compacted' format. Dec 12 18:19:55.513339 kernel: Freeing SMP alternatives memory: 32K Dec 12 18:19:55.513348 kernel: pid_max: default: 32768 minimum: 301 Dec 12 18:19:55.513357 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Dec 12 18:19:55.513366 kernel: landlock: Up and running. Dec 12 18:19:55.513376 kernel: SELinux: Initializing. Dec 12 18:19:55.513385 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Dec 12 18:19:55.513394 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Dec 12 18:19:55.513404 kernel: smpboot: CPU0: Intel INTEL(R) XEON(R) PLATINUM 8573C (family: 0x6, model: 0xcf, stepping: 0x2) Dec 12 18:19:55.513413 kernel: Performance Events: unsupported p6 CPU model 207 no PMU driver, software events only. Dec 12 18:19:55.513424 kernel: signal: max sigframe size: 11952 Dec 12 18:19:55.513436 kernel: rcu: Hierarchical SRCU implementation. Dec 12 18:19:55.513446 kernel: rcu: Max phase no-delay instances is 400. Dec 12 18:19:55.513456 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Dec 12 18:19:55.513466 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Dec 12 18:19:55.513476 kernel: smp: Bringing up secondary CPUs ... Dec 12 18:19:55.513486 kernel: smpboot: x86: Booting SMP configuration: Dec 12 18:19:55.513496 kernel: .... 
node #0, CPUs: #1 Dec 12 18:19:55.513508 kernel: smp: Brought up 1 node, 2 CPUs Dec 12 18:19:55.513533 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS) Dec 12 18:19:55.513545 kernel: Memory: 8095536K/8383228K available (14336K kernel code, 2444K rwdata, 29892K rodata, 15464K init, 2576K bss, 281556K reserved, 0K cma-reserved) Dec 12 18:19:55.513555 kernel: devtmpfs: initialized Dec 12 18:19:55.513565 kernel: x86/mm: Memory block size: 128MB Dec 12 18:19:55.513575 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes) Dec 12 18:19:55.513586 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Dec 12 18:19:55.513608 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Dec 12 18:19:55.513616 kernel: pinctrl core: initialized pinctrl subsystem Dec 12 18:19:55.519581 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Dec 12 18:19:55.519630 kernel: audit: initializing netlink subsys (disabled) Dec 12 18:19:55.519643 kernel: audit: type=2000 audit(1765563590.081:1): state=initialized audit_enabled=0 res=1 Dec 12 18:19:55.519654 kernel: thermal_sys: Registered thermal governor 'step_wise' Dec 12 18:19:55.519687 kernel: thermal_sys: Registered thermal governor 'user_space' Dec 12 18:19:55.519702 kernel: cpuidle: using governor menu Dec 12 18:19:55.519778 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Dec 12 18:19:55.519789 kernel: dca service started, version 1.12.1 Dec 12 18:19:55.519799 kernel: e820: reserve RAM buffer [mem 0x044fe000-0x07ffffff] Dec 12 18:19:55.519864 kernel: e820: reserve RAM buffer [mem 0x3ff1f000-0x3fffffff] Dec 12 18:19:55.519873 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Dec 12 18:19:55.519937 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Dec 12 18:19:55.519950 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Dec 12 18:19:55.519959 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Dec 12 18:19:55.520024 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Dec 12 18:19:55.520035 kernel: ACPI: Added _OSI(Module Device) Dec 12 18:19:55.520083 kernel: ACPI: Added _OSI(Processor Device) Dec 12 18:19:55.520091 kernel: ACPI: Added _OSI(Processor Aggregator Device) Dec 12 18:19:55.520099 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Dec 12 18:19:55.520111 kernel: ACPI: Interpreter enabled Dec 12 18:19:55.520121 kernel: ACPI: PM: (supports S0 S5) Dec 12 18:19:55.520138 kernel: ACPI: Using IOAPIC for interrupt routing Dec 12 18:19:55.520145 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Dec 12 18:19:55.520151 kernel: PCI: Ignoring E820 reservations for host bridge windows Dec 12 18:19:55.520159 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F Dec 12 18:19:55.520165 kernel: iommu: Default domain type: Translated Dec 12 18:19:55.520171 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Dec 12 18:19:55.520178 kernel: efivars: Registered efivars operations Dec 12 18:19:55.520184 kernel: PCI: Using ACPI for IRQ routing Dec 12 18:19:55.520194 kernel: PCI: System does not support PCI Dec 12 18:19:55.520205 kernel: vgaarb: loaded Dec 12 18:19:55.520214 kernel: clocksource: Switched to clocksource tsc-early Dec 12 18:19:55.520222 kernel: VFS: Disk quotas dquot_6.6.0 Dec 12 18:19:55.520228 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Dec 12 18:19:55.520235 kernel: pnp: PnP ACPI init Dec 12 18:19:55.520242 kernel: pnp: PnP ACPI: found 3 devices Dec 12 18:19:55.520253 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Dec 12 18:19:55.520265 kernel: NET: Registered PF_INET protocol family Dec 12 18:19:55.520274 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Dec 12 18:19:55.520285 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Dec 12 18:19:55.520292 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Dec 12 18:19:55.520299 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Dec 12 18:19:55.520307 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Dec 12 18:19:55.520318 kernel: TCP: Hash tables configured (established 65536 bind 65536) Dec 12 18:19:55.520328 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Dec 12 18:19:55.520338 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Dec 12 18:19:55.520349 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Dec 12 18:19:55.520355 kernel: NET: Registered PF_XDP protocol family Dec 12 18:19:55.520362 kernel: PCI: CLS 0 bytes, default 64 Dec 12 18:19:55.520372 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Dec 12 18:19:55.520383 kernel: software IO TLB: mapped [mem 0x000000003a9ba000-0x000000003e9ba000] (64MB) Dec 12 18:19:55.520393 kernel: RAPL PMU: API unit is 2^-32 Joules, 1 fixed counters, 10737418240 ms ovfl timer Dec 12 18:19:55.520415 kernel: RAPL PMU: hw unit of domain psys 2^-0 Joules Dec 12 18:19:55.520421 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2127345424d, 
max_idle_ns: 440795318347 ns Dec 12 18:19:55.520431 kernel: clocksource: Switched to clocksource tsc Dec 12 18:19:55.520444 kernel: Initialise system trusted keyrings Dec 12 18:19:55.520460 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Dec 12 18:19:55.520467 kernel: Key type asymmetric registered Dec 12 18:19:55.520473 kernel: Asymmetric key parser 'x509' registered Dec 12 18:19:55.520485 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Dec 12 18:19:55.520494 kernel: io scheduler mq-deadline registered Dec 12 18:19:55.520504 kernel: io scheduler kyber registered Dec 12 18:19:55.522561 kernel: io scheduler bfq registered Dec 12 18:19:55.522581 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Dec 12 18:19:55.522594 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Dec 12 18:19:55.522604 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Dec 12 18:19:55.522614 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Dec 12 18:19:55.522625 kernel: serial8250: ttyS2 at I/O 0x3e8 (irq = 4, base_baud = 115200) is a 16550A Dec 12 18:19:55.522635 kernel: i8042: PNP: No PS/2 controller found. Dec 12 18:19:55.522807 kernel: rtc_cmos 00:02: registered as rtc0 Dec 12 18:19:55.523087 kernel: rtc_cmos 00:02: setting system clock to 2025-12-12T18:19:52 UTC (1765563592) Dec 12 18:19:55.523483 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram Dec 12 18:19:55.523497 kernel: intel_pstate: Intel P-state driver initializing Dec 12 18:19:55.523579 kernel: efifb: probing for efifb Dec 12 18:19:55.523644 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Dec 12 18:19:55.523660 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Dec 12 18:19:55.523726 kernel: efifb: scrolling: redraw Dec 12 18:19:55.523738 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Dec 12 18:19:55.523747 kernel: Console: switching to colour frame buffer device 128x48 Dec 12 18:19:55.523815 kernel: fb0: EFI VGA frame buffer device Dec 12 18:19:55.523825 kernel: pstore: Using crash dump compression: deflate Dec 12 18:19:55.523892 kernel: pstore: Registered efi_pstore as persistent store backend Dec 12 18:19:55.523905 kernel: NET: Registered PF_INET6 protocol family Dec 12 18:19:55.523972 kernel: Segment Routing with IPv6 Dec 12 18:19:55.523983 kernel: In-situ OAM (IOAM) with IPv6 Dec 12 18:19:55.524049 kernel: NET: Registered PF_PACKET protocol family Dec 12 18:19:55.524062 kernel: Key type dns_resolver registered Dec 12 18:19:55.524074 kernel: IPI shorthand broadcast: enabled Dec 12 18:19:55.524085 kernel: sched_clock: Marking stable (2046005168, 97818504)->(2471933531, -328109859) Dec 12 18:19:55.524097 kernel: registered taskstats version 1 Dec 12 18:19:55.524109 kernel: Loading compiled-in X.509 certificates Dec 12 18:19:55.524119 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.61-flatcar: b90706f42f055ab9f35fc8fc29156d877adb12c4' Dec 12 18:19:55.524132 kernel: Demotion targets for Node 0: null Dec 12 18:19:55.524142 kernel: Key type .fscrypt registered Dec 12 18:19:55.524153 kernel: Key type fscrypt-provisioning registered Dec 12 18:19:55.524165 kernel: ima: No TPM chip found, activating TPM-bypass! 
Dec 12 18:19:55.524176 kernel: ima: Allocated hash algorithm: sha1 Dec 12 18:19:55.524191 kernel: ima: No architecture policies found Dec 12 18:19:55.524201 kernel: clk: Disabling unused clocks Dec 12 18:19:55.524212 kernel: Freeing unused kernel image (initmem) memory: 15464K Dec 12 18:19:55.524223 kernel: Write protecting the kernel read-only data: 45056k Dec 12 18:19:55.524232 kernel: Freeing unused kernel image (rodata/data gap) memory: 828K Dec 12 18:19:55.524243 kernel: Run /init as init process Dec 12 18:19:55.524254 kernel: with arguments: Dec 12 18:19:55.524267 kernel: /init Dec 12 18:19:55.524278 kernel: with environment: Dec 12 18:19:55.524290 kernel: HOME=/ Dec 12 18:19:55.524299 kernel: TERM=linux Dec 12 18:19:55.524309 kernel: hv_vmbus: Vmbus version:5.3 Dec 12 18:19:55.524321 kernel: pps_core: LinuxPPS API ver. 1 registered Dec 12 18:19:55.524331 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Dec 12 18:19:55.524346 kernel: PTP clock support registered Dec 12 18:19:55.524356 kernel: hv_utils: Registering HyperV Utility Driver Dec 12 18:19:55.524368 kernel: hv_vmbus: registering driver hv_utils Dec 12 18:19:55.524377 kernel: hv_utils: Shutdown IC version 3.2 Dec 12 18:19:55.524388 kernel: hv_utils: Heartbeat IC version 3.0 Dec 12 18:19:55.524399 kernel: hv_utils: TimeSync IC version 4.0 Dec 12 18:19:55.524409 kernel: SCSI subsystem initialized Dec 12 18:19:55.524421 kernel: hv_vmbus: registering driver hv_pci Dec 12 18:19:55.529041 kernel: hv_pci 7ad35d50-c05b-47ab-b3a0-56a9a845852b: PCI VMBus probing: Using version 0x10004 Dec 12 18:19:55.531051 kernel: hv_pci 7ad35d50-c05b-47ab-b3a0-56a9a845852b: PCI host bridge to bus c05b:00 Dec 12 18:19:55.531203 kernel: pci_bus c05b:00: root bus resource [mem 0xfc0000000-0xfc007ffff window] Dec 12 18:19:55.531320 kernel: pci_bus c05b:00: No busn resource found for root bus, will use [bus 00-ff] Dec 12 18:19:55.531467 kernel: pci c05b:00:00.0: [1414:00a9] type 00 class 0x010802 PCIe Endpoint Dec 12 18:19:55.531606 kernel: pci c05b:00:00.0: BAR 0 [mem 0xfc0000000-0xfc007ffff 64bit] Dec 12 18:19:55.531724 kernel: pci_bus c05b:00: busn_res: [bus 00-ff] end is updated to 00 Dec 12 18:19:55.531849 kernel: pci c05b:00:00.0: BAR 0 [mem 0xfc0000000-0xfc007ffff 64bit]: assigned Dec 12 18:19:55.531861 kernel: hv_vmbus: registering driver hv_storvsc Dec 12 18:19:55.531989 kernel: scsi host0: storvsc_host_t Dec 12 18:19:55.532126 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5 Dec 12 18:19:55.532139 kernel: hid: raw HID events driver (C) Jiri Kosina Dec 12 18:19:55.532149 kernel: hv_vmbus: registering driver hid_hyperv Dec 12 18:19:55.532159 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0 Dec 12 18:19:55.532277 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Dec 12 18:19:55.532290 kernel: hv_vmbus: registering driver hyperv_keyboard Dec 12 18:19:55.532302 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1 Dec 12 18:19:55.532410 kernel: nvme nvme0: pci function c05b:00:00.0 Dec 12 18:19:55.538956 kernel: nvme c05b:00:00.0: enabling device (0000 -> 0002) Dec 12 18:19:55.539096 kernel: nvme nvme0: 2/0/0 default/read/poll queues Dec 12 18:19:55.539112 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Dec 12 18:19:55.539259 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Dec 12 18:19:55.539273 kernel: cdrom: 
Uniform CD-ROM driver Revision: 3.20 Dec 12 18:19:55.539403 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Dec 12 18:19:55.539416 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Dec 12 18:19:55.539427 kernel: device-mapper: uevent: version 1.0.3 Dec 12 18:19:55.539438 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Dec 12 18:19:55.539449 kernel: device-mapper: verity: sha256 using shash "sha256-generic" Dec 12 18:19:55.539474 kernel: raid6: avx512x4 gen() 44020 MB/s Dec 12 18:19:55.539487 kernel: raid6: avx512x2 gen() 42697 MB/s Dec 12 18:19:55.539497 kernel: raid6: avx512x1 gen() 25278 MB/s Dec 12 18:19:55.539508 kernel: raid6: avx2x4 gen() 34936 MB/s Dec 12 18:19:55.539537 kernel: raid6: avx2x2 gen() 36537 MB/s Dec 12 18:19:55.539548 kernel: raid6: avx2x1 gen() 30376 MB/s Dec 12 18:19:55.539559 kernel: raid6: using algorithm avx512x4 gen() 44020 MB/s Dec 12 18:19:55.539571 kernel: raid6: .... xor() 7602 MB/s, rmw enabled Dec 12 18:19:55.539581 kernel: raid6: using avx512x2 recovery algorithm Dec 12 18:19:55.539592 kernel: xor: automatically using best checksumming function avx Dec 12 18:19:55.539602 kernel: Btrfs loaded, zoned=no, fsverity=no Dec 12 18:19:55.539613 kernel: BTRFS: device fsid ea73a94a-fb20-4d45-8448-4c6f4c422a4f devid 1 transid 35 /dev/mapper/usr (254:0) scanned by mount (903) Dec 12 18:19:55.539625 kernel: BTRFS info (device dm-0): first mount of filesystem ea73a94a-fb20-4d45-8448-4c6f4c422a4f Dec 12 18:19:55.539636 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Dec 12 18:19:55.539648 kernel: BTRFS info (device dm-0): enabling ssd optimizations Dec 12 18:19:55.539658 kernel: BTRFS info (device dm-0): disabling log replay at mount time Dec 12 18:19:55.539669 kernel: BTRFS info (device dm-0): enabling free space tree Dec 12 18:19:55.539679 kernel: loop: module loaded Dec 12 18:19:55.539689 kernel: loop0: detected capacity change from 0 to 100136 Dec 12 18:19:55.539699 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 12 18:19:55.539712 systemd[1]: Successfully made /usr/ read-only. Dec 12 18:19:55.539727 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Dec 12 18:19:55.539739 systemd[1]: Detected virtualization microsoft. Dec 12 18:19:55.539779 systemd[1]: Detected architecture x86-64. Dec 12 18:19:55.539790 systemd[1]: Running in initrd. Dec 12 18:19:55.539802 systemd[1]: No hostname configured, using default hostname. Dec 12 18:19:55.539813 systemd[1]: Hostname set to . Dec 12 18:19:55.539825 systemd[1]: Initializing machine ID from random generator. Dec 12 18:19:55.539836 systemd[1]: Queued start job for default target initrd.target. Dec 12 18:19:55.539847 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Dec 12 18:19:55.539858 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 12 18:19:55.539869 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Dec 12 18:19:55.539882 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Dec 12 18:19:55.539895 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 12 18:19:55.539907 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Dec 12 18:19:55.539919 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Dec 12 18:19:55.539932 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 12 18:19:55.539943 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 12 18:19:55.539955 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Dec 12 18:19:55.539966 systemd[1]: Reached target paths.target - Path Units. Dec 12 18:19:55.539977 systemd[1]: Reached target slices.target - Slice Units. Dec 12 18:19:55.539988 systemd[1]: Reached target swap.target - Swaps. Dec 12 18:19:55.540000 systemd[1]: Reached target timers.target - Timer Units. Dec 12 18:19:55.540013 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Dec 12 18:19:55.540024 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 12 18:19:55.540035 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Dec 12 18:19:55.540047 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Dec 12 18:19:55.540058 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Dec 12 18:19:55.540069 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 12 18:19:55.540081 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 12 18:19:55.540094 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 12 18:19:55.540106 systemd[1]: Reached target sockets.target - Socket Units. Dec 12 18:19:55.540117 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Dec 12 18:19:55.540129 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Dec 12 18:19:55.540140 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 12 18:19:55.540151 systemd[1]: Finished network-cleanup.service - Network Cleanup. Dec 12 18:19:55.540163 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Dec 12 18:19:55.540176 systemd[1]: Starting systemd-fsck-usr.service... Dec 12 18:19:55.540187 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 12 18:19:55.540198 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 12 18:19:55.540210 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 12 18:19:55.540223 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Dec 12 18:19:55.540234 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 12 18:19:55.540245 systemd[1]: Finished systemd-fsck-usr.service. Dec 12 18:19:55.540257 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 12 18:19:55.540290 systemd-journald[1039]: Collecting audit messages is enabled. 
Dec 12 18:19:55.540317 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Dec 12 18:19:55.540328 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 12 18:19:55.540339 kernel: audit: type=1130 audit(1765563595.513:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:19:55.540349 kernel: Bridge firewalling registered Dec 12 18:19:55.540361 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 12 18:19:55.540372 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 12 18:19:55.540383 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 12 18:19:55.540394 kernel: audit: type=1130 audit(1765563595.527:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:19:55.540406 systemd-journald[1039]: Journal started Dec 12 18:19:55.540433 systemd-journald[1039]: Runtime Journal (/run/log/journal/8ad867ef658540adbcd34e92b2fe784a) is 8M, max 158.5M, 150.5M free. Dec 12 18:19:55.513000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:19:55.527000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:19:55.525002 systemd-modules-load[1041]: Inserted module 'br_netfilter' Dec 12 18:19:55.543534 systemd[1]: Started systemd-journald.service - Journal Service. Dec 12 18:19:55.542000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:19:55.547535 kernel: audit: type=1130 audit(1765563595.542:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:19:55.552645 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 12 18:19:55.562375 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 12 18:19:55.562000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:19:55.567533 kernel: audit: type=1130 audit(1765563595.562:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:19:55.579956 systemd-tmpfiles[1060]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Dec 12 18:19:55.625021 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. 
Dec 12 18:19:55.624000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:19:55.631599 kernel: audit: type=1130 audit(1765563595.624:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:19:55.633692 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 12 18:19:55.639000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:19:55.643715 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 12 18:19:55.657045 kernel: audit: type=1130 audit(1765563595.639:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:19:55.657076 kernel: audit: type=1130 audit(1765563595.643:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:19:55.657091 kernel: audit: type=1334 audit(1765563595.645:9): prog-id=6 op=LOAD Dec 12 18:19:55.643000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:19:55.645000 audit: BPF prog-id=6 op=LOAD Dec 12 18:19:55.645626 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 12 18:19:55.648624 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 12 18:19:55.677966 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 12 18:19:55.690350 kernel: audit: type=1130 audit(1765563595.683:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:19:55.683000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:19:55.686802 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Dec 12 18:19:55.791857 dracut-cmdline[1081]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=4dd8de2ff094d97322e7371b16ddee5fc8348868bcdd9ec7bcd11ea9d3933fee Dec 12 18:19:55.819885 systemd-resolved[1069]: Positive Trust Anchors: Dec 12 18:19:55.819900 systemd-resolved[1069]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 12 18:19:55.819904 systemd-resolved[1069]: . 
IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Dec 12 18:19:55.819943 systemd-resolved[1069]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 12 18:19:55.872921 systemd-resolved[1069]: Defaulting to hostname 'linux'. Dec 12 18:19:55.873852 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 12 18:19:55.888372 kernel: audit: type=1130 audit(1765563595.879:11): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:19:55.879000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:19:55.879675 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 12 18:19:55.957566 kernel: Loading iSCSI transport class v2.0-870. Dec 12 18:19:56.017549 kernel: iscsi: registered transport (tcp) Dec 12 18:19:56.070308 kernel: iscsi: registered transport (qla4xxx) Dec 12 18:19:56.070373 kernel: QLogic iSCSI HBA Driver Dec 12 18:19:56.125508 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 12 18:19:56.144323 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 12 18:19:56.157665 kernel: audit: type=1130 audit(1765563596.143:12): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:19:56.143000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:19:56.145955 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 12 18:19:56.185894 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Dec 12 18:19:56.191860 kernel: audit: type=1130 audit(1765563596.185:13): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:19:56.185000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:19:56.191205 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Dec 12 18:19:56.196772 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Dec 12 18:19:56.226830 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Dec 12 18:19:56.230000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 18:19:56.237685 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 12 18:19:56.241643 kernel: audit: type=1130 audit(1765563596.230:14): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:19:56.231000 audit: BPF prog-id=7 op=LOAD Dec 12 18:19:56.247549 kernel: audit: type=1334 audit(1765563596.231:15): prog-id=7 op=LOAD Dec 12 18:19:56.231000 audit: BPF prog-id=8 op=LOAD Dec 12 18:19:56.251551 kernel: audit: type=1334 audit(1765563596.231:16): prog-id=8 op=LOAD Dec 12 18:19:56.273758 systemd-udevd[1326]: Using default interface naming scheme 'v257'. Dec 12 18:19:56.287244 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 12 18:19:56.291000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:19:56.292609 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 12 18:19:56.301351 kernel: audit: type=1130 audit(1765563596.291:17): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:19:56.301000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:19:56.308827 kernel: audit: type=1130 audit(1765563596.301:18): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:19:56.306472 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Dec 12 18:19:56.313637 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 12 18:19:56.320623 kernel: audit: type=1334 audit(1765563596.312:19): prog-id=9 op=LOAD Dec 12 18:19:56.312000 audit: BPF prog-id=9 op=LOAD Dec 12 18:19:56.330074 dracut-pre-trigger[1422]: rd.md=0: removing MD RAID activation Dec 12 18:19:56.358931 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Dec 12 18:19:56.362000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:19:56.364856 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 12 18:19:56.372887 kernel: audit: type=1130 audit(1765563596.362:20): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:19:56.373070 systemd-networkd[1423]: lo: Link UP Dec 12 18:19:56.373838 systemd-networkd[1423]: lo: Gained carrier Dec 12 18:19:56.374273 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 12 18:19:56.377000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:19:56.378350 systemd[1]: Reached target network.target - Network. 
Dec 12 18:19:56.416868 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 12 18:19:56.421000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:19:56.424642 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Dec 12 18:19:56.495985 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 12 18:19:56.497885 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 12 18:19:56.502000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:19:56.502724 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Dec 12 18:19:56.510539 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#40 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Dec 12 18:19:56.511008 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 12 18:19:56.523545 kernel: hv_vmbus: registering driver hv_netvsc Dec 12 18:19:56.534548 kernel: hv_netvsc f8615163-0000-1000-2000-000d3ab7ab4d (unnamed net_device) (uninitialized): VF slot 1 added Dec 12 18:19:56.549551 kernel: cryptd: max_cpu_qlen set to 1000 Dec 12 18:19:56.550231 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 12 18:19:56.550316 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 12 18:19:56.556000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:19:56.556000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:19:56.560262 systemd-networkd[1423]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 12 18:19:56.560367 systemd-networkd[1423]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 12 18:19:56.562436 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 12 18:19:56.562496 systemd-networkd[1423]: eth0: Link UP Dec 12 18:19:56.563169 systemd-networkd[1423]: eth0: Gained carrier Dec 12 18:19:56.563181 systemd-networkd[1423]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 12 18:19:56.581599 systemd-networkd[1423]: eth0: DHCPv4 address 10.200.8.12/24, gateway 10.200.8.1 acquired from 168.63.129.16 Dec 12 18:19:56.593538 kernel: AES CTR mode by8 optimization enabled Dec 12 18:19:56.623506 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 12 18:19:56.627000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:19:56.719542 kernel: nvme nvme0: using unchecked data buffer Dec 12 18:19:56.897172 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - MSFT NVMe Accelerator v1.0 USR-A. 
Dec 12 18:19:56.899659 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Dec 12 18:19:56.940154 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - MSFT NVMe Accelerator v1.0 OEM. Dec 12 18:19:56.951481 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - MSFT NVMe Accelerator v1.0 EFI-SYSTEM. Dec 12 18:19:56.966389 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - MSFT NVMe Accelerator v1.0 ROOT. Dec 12 18:19:57.064491 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Dec 12 18:19:57.064000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:19:57.067414 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Dec 12 18:19:57.071922 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 12 18:19:57.074773 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 12 18:19:57.086693 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Dec 12 18:19:57.149445 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Dec 12 18:19:57.149000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:19:57.551546 kernel: hv_pci 00000001-7870-47b5-b203-907d12ca697e: PCI VMBus probing: Using version 0x10004 Dec 12 18:19:57.556003 kernel: hv_pci 00000001-7870-47b5-b203-907d12ca697e: PCI host bridge to bus 7870:00 Dec 12 18:19:57.556197 kernel: pci_bus 7870:00: root bus resource [mem 0xfc2000000-0xfc4007fff window] Dec 12 18:19:57.557741 kernel: pci_bus 7870:00: No busn resource found for root bus, will use [bus 00-ff] Dec 12 18:19:57.562694 kernel: pci 7870:00:00.0: [1414:00ba] type 00 class 0x020000 PCIe Endpoint Dec 12 18:19:57.566561 kernel: pci 7870:00:00.0: BAR 0 [mem 0xfc2000000-0xfc3ffffff 64bit pref] Dec 12 18:19:57.571556 kernel: pci 7870:00:00.0: BAR 4 [mem 0xfc4000000-0xfc4007fff 64bit pref] Dec 12 18:19:57.573574 kernel: pci 7870:00:00.0: enabling Extended Tags Dec 12 18:19:57.587474 kernel: pci_bus 7870:00: busn_res: [bus 00-ff] end is updated to 00 Dec 12 18:19:57.587687 kernel: pci 7870:00:00.0: BAR 0 [mem 0xfc2000000-0xfc3ffffff 64bit pref]: assigned Dec 12 18:19:57.591533 kernel: pci 7870:00:00.0: BAR 4 [mem 0xfc4000000-0xfc4007fff 64bit pref]: assigned Dec 12 18:19:57.613678 kernel: mana 7870:00:00.0: enabling device (0000 -> 0002) Dec 12 18:19:57.623542 kernel: mana 7870:00:00.0: Microsoft Azure Network Adapter protocol version: 0.1.1 Dec 12 18:19:57.627143 kernel: hv_netvsc f8615163-0000-1000-2000-000d3ab7ab4d eth0: VF registering: eth1 Dec 12 18:19:57.627336 kernel: mana 7870:00:00.0 eth1: joined to eth0 Dec 12 18:19:57.631539 kernel: mana 7870:00:00.0 enP30832s1: renamed from eth1 Dec 12 18:19:57.631594 systemd-networkd[1423]: eth1: Interface name change detected, renamed to enP30832s1. 
Dec 12 18:19:57.734537 kernel: mana 7870:00:00.0 enP30832s1: Configured vPort 0 PD 18 DB 16 Dec 12 18:19:57.737613 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64 Dec 12 18:19:57.739584 kernel: hv_netvsc f8615163-0000-1000-2000-000d3ab7ab4d eth0: Data path switched to VF: enP30832s1 Dec 12 18:19:57.739551 systemd-networkd[1423]: enP30832s1: Link UP Dec 12 18:19:57.739722 systemd-networkd[1423]: enP30832s1: Gained carrier Dec 12 18:19:57.963743 systemd-networkd[1423]: eth0: Gained IPv6LL Dec 12 18:19:58.171480 disk-uuid[1615]: Warning: The kernel is still using the old partition table. Dec 12 18:19:58.171480 disk-uuid[1615]: The new table will be used at the next reboot or after you Dec 12 18:19:58.171480 disk-uuid[1615]: run partprobe(8) or kpartx(8) Dec 12 18:19:58.171480 disk-uuid[1615]: The operation has completed successfully. Dec 12 18:19:58.180480 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 12 18:19:58.180608 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Dec 12 18:19:58.184000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:19:58.184000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:19:58.185765 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Dec 12 18:19:58.228536 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:5) scanned by mount (1661) Dec 12 18:19:58.228578 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem c87e2a2e-b8fc-4d1d-98f3-593ea9a0f098 Dec 12 18:19:58.231264 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Dec 12 18:19:58.253536 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Dec 12 18:19:58.253577 kernel: BTRFS info (device nvme0n1p6): turning on async discard Dec 12 18:19:58.255605 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Dec 12 18:19:58.261541 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem c87e2a2e-b8fc-4d1d-98f3-593ea9a0f098 Dec 12 18:19:58.262152 systemd[1]: Finished ignition-setup.service - Ignition (setup). Dec 12 18:19:58.263000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:19:58.265802 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Dec 12 18:19:59.359014 ignition[1680]: Ignition 2.22.0 Dec 12 18:19:59.359027 ignition[1680]: Stage: fetch-offline Dec 12 18:19:59.364000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:19:59.360818 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Dec 12 18:19:59.359143 ignition[1680]: no configs at "/usr/lib/ignition/base.d" Dec 12 18:19:59.366693 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
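
The disk-uuid entries above log the usual sfdisk warning that the kernel keeps using the old partition table until it is re-read, e.g. by partprobe(8). A rough sketch of issuing that re-read request directly with the BLKRRPART ioctl (the ioctl number is the value from linux/fs.h; the device path is a placeholder and the call needs root):

#!/usr/bin/env python3
"""Ask the kernel to re-read a disk's partition table, which is what partprobe(8) boils down to."""
import fcntl
import os
import sys

BLKRRPART = 0x125F  # _IO(0x12, 95) from <linux/fs.h>: re-read partition table

dev = sys.argv[1] if len(sys.argv) > 1 else "/dev/nvme0n1"  # placeholder device path
fd = os.open(dev, os.O_RDONLY)
try:
    fcntl.ioctl(fd, BLKRRPART)
    print(f"{dev}: partition table re-read requested")
except OSError as err:
    # EBUSY typically means a partition is still in use, as during this boot.
    print(f"{dev}: re-read failed: {err}", file=sys.stderr)
finally:
    os.close(fd)
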
Dec 12 18:19:59.359151 ignition[1680]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 12 18:19:59.359239 ignition[1680]: parsed url from cmdline: "" Dec 12 18:19:59.359243 ignition[1680]: no config URL provided Dec 12 18:19:59.359247 ignition[1680]: reading system config file "/usr/lib/ignition/user.ign" Dec 12 18:19:59.359255 ignition[1680]: no config at "/usr/lib/ignition/user.ign" Dec 12 18:19:59.359260 ignition[1680]: failed to fetch config: resource requires networking Dec 12 18:19:59.359579 ignition[1680]: Ignition finished successfully Dec 12 18:19:59.401089 ignition[1686]: Ignition 2.22.0 Dec 12 18:19:59.401099 ignition[1686]: Stage: fetch Dec 12 18:19:59.401309 ignition[1686]: no configs at "/usr/lib/ignition/base.d" Dec 12 18:19:59.401317 ignition[1686]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 12 18:19:59.401723 ignition[1686]: parsed url from cmdline: "" Dec 12 18:19:59.401727 ignition[1686]: no config URL provided Dec 12 18:19:59.401732 ignition[1686]: reading system config file "/usr/lib/ignition/user.ign" Dec 12 18:19:59.401740 ignition[1686]: no config at "/usr/lib/ignition/user.ign" Dec 12 18:19:59.401761 ignition[1686]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Dec 12 18:19:59.468511 ignition[1686]: GET result: OK Dec 12 18:19:59.468635 ignition[1686]: config has been read from IMDS userdata Dec 12 18:19:59.469457 ignition[1686]: parsing config with SHA512: 86a0929fec223304e0d7011b2d54a44046ee6803777633a54f7a5fe30b79d816b3fc38edd42e453ea44940c98ba9d59423e44fc9a253c8ec48f2aaae4334f355 Dec 12 18:19:59.474648 unknown[1686]: fetched base config from "system" Dec 12 18:19:59.474854 unknown[1686]: fetched base config from "system" Dec 12 18:19:59.475369 ignition[1686]: fetch: fetch complete Dec 12 18:19:59.474865 unknown[1686]: fetched user config from "azure" Dec 12 18:19:59.475374 ignition[1686]: fetch: fetch passed Dec 12 18:19:59.483000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:19:59.479996 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Dec 12 18:19:59.475418 ignition[1686]: Ignition finished successfully Dec 12 18:19:59.484503 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Dec 12 18:19:59.515635 ignition[1693]: Ignition 2.22.0 Dec 12 18:19:59.515645 ignition[1693]: Stage: kargs Dec 12 18:19:59.515848 ignition[1693]: no configs at "/usr/lib/ignition/base.d" Dec 12 18:19:59.518981 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Dec 12 18:19:59.515855 ignition[1693]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 12 18:19:59.516861 ignition[1693]: kargs: kargs passed Dec 12 18:19:59.516900 ignition[1693]: Ignition finished successfully Dec 12 18:19:59.525000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:19:59.526851 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Dec 12 18:19:59.553335 ignition[1700]: Ignition 2.22.0 Dec 12 18:19:59.553344 ignition[1700]: Stage: disks Dec 12 18:19:59.555000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 18:19:59.555618 systemd[1]: Finished ignition-disks.service - Ignition (disks). Dec 12 18:19:59.553593 ignition[1700]: no configs at "/usr/lib/ignition/base.d" Dec 12 18:19:59.556504 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Dec 12 18:19:59.553602 ignition[1700]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 12 18:19:59.554505 ignition[1700]: disks: disks passed Dec 12 18:19:59.554564 ignition[1700]: Ignition finished successfully Dec 12 18:19:59.567650 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Dec 12 18:19:59.569962 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 12 18:19:59.575021 systemd[1]: Reached target sysinit.target - System Initialization. Dec 12 18:19:59.577152 systemd[1]: Reached target basic.target - Basic System. Dec 12 18:19:59.582540 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Dec 12 18:19:59.685997 systemd-fsck[1709]: ROOT: clean, 15/6361680 files, 408771/6359552 blocks Dec 12 18:19:59.689446 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Dec 12 18:19:59.691000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:19:59.693295 systemd[1]: Mounting sysroot.mount - /sysroot... Dec 12 18:20:00.002547 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 7cac6192-738c-43cc-9341-24f71d091e91 r/w with ordered data mode. Quota mode: none. Dec 12 18:20:00.003162 systemd[1]: Mounted sysroot.mount - /sysroot. Dec 12 18:20:00.008984 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Dec 12 18:20:00.055595 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 12 18:20:00.060597 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Dec 12 18:20:00.080659 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Dec 12 18:20:00.087656 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 12 18:20:00.087692 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Dec 12 18:20:00.092345 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Dec 12 18:20:00.104615 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:5) scanned by mount (1718) Dec 12 18:20:00.104654 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem c87e2a2e-b8fc-4d1d-98f3-593ea9a0f098 Dec 12 18:20:00.104669 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Dec 12 18:20:00.105045 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Dec 12 18:20:00.116127 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Dec 12 18:20:00.116161 kernel: BTRFS info (device nvme0n1p6): turning on async discard Dec 12 18:20:00.116170 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Dec 12 18:20:00.117489 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
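
In the fetch stage above, Ignition pulled its config from the Azure IMDS userData endpoint and logged a SHA512 fingerprint of the parsed config. A rough debugging equivalent from inside the VM might look like the sketch below; the URL and api-version are copied from the log, the Metadata: true header is the standard IMDS requirement, and whether Ignition hashes the decoded or raw payload is an assumption here:

#!/usr/bin/env python3
"""Fetch Azure IMDS user data, as the Ignition fetch stage above does, then fingerprint it."""
import base64
import hashlib
import urllib.request

URL = ("http://169.254.169.254/metadata/instance/compute/userData"
       "?api-version=2021-01-01&format=text")

req = urllib.request.Request(URL, headers={"Metadata": "true"})  # required by IMDS
with urllib.request.urlopen(req, timeout=5) as resp:
    payload = resp.read()

# IMDS returns user data base64-encoded; decode before inspecting or hashing.
config = base64.b64decode(payload)
print("config bytes:", len(config))
print("sha512:", hashlib.sha512(config).hexdigest())
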
Dec 12 18:20:00.619014 coreos-metadata[1720]: Dec 12 18:20:00.618 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Dec 12 18:20:00.623874 coreos-metadata[1720]: Dec 12 18:20:00.623 INFO Fetch successful Dec 12 18:20:00.626575 coreos-metadata[1720]: Dec 12 18:20:00.624 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Dec 12 18:20:00.641862 coreos-metadata[1720]: Dec 12 18:20:00.641 INFO Fetch successful Dec 12 18:20:00.672057 coreos-metadata[1720]: Dec 12 18:20:00.672 INFO wrote hostname ci-4515.1.0-a-53d1559fda to /sysroot/etc/hostname Dec 12 18:20:00.675830 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Dec 12 18:20:00.678000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:20:00.931348 initrd-setup-root[1748]: cut: /sysroot/etc/passwd: No such file or directory Dec 12 18:20:00.996367 initrd-setup-root[1755]: cut: /sysroot/etc/group: No such file or directory Dec 12 18:20:01.034432 initrd-setup-root[1762]: cut: /sysroot/etc/shadow: No such file or directory Dec 12 18:20:01.053172 initrd-setup-root[1769]: cut: /sysroot/etc/gshadow: No such file or directory Dec 12 18:20:02.141356 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Dec 12 18:20:02.145000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:20:02.146628 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Dec 12 18:20:02.153120 kernel: kauditd_printk_skb: 17 callbacks suppressed Dec 12 18:20:02.153147 kernel: audit: type=1130 audit(1765563602.145:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:20:02.164644 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Dec 12 18:20:02.192714 systemd[1]: sysroot-oem.mount: Deactivated successfully. Dec 12 18:20:02.197192 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem c87e2a2e-b8fc-4d1d-98f3-593ea9a0f098 Dec 12 18:20:02.209645 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Dec 12 18:20:02.210000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:20:02.216592 kernel: audit: type=1130 audit(1765563602.210:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:20:02.229455 ignition[1838]: INFO : Ignition 2.22.0 Dec 12 18:20:02.229455 ignition[1838]: INFO : Stage: mount Dec 12 18:20:02.233649 ignition[1838]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 12 18:20:02.233649 ignition[1838]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 12 18:20:02.233649 ignition[1838]: INFO : mount: mount passed Dec 12 18:20:02.233649 ignition[1838]: INFO : Ignition finished successfully Dec 12 18:20:02.233068 systemd[1]: Finished ignition-mount.service - Ignition (mount). 
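
The coreos-metadata entries above fetch the instance name from IMDS and write it to /sysroot/etc/hostname. A hypothetical stand-alone version of that step (endpoint and api-version copied from the log; the output path targets a running system rather than the /sysroot prefix used in the initramfs):

#!/usr/bin/env python3
"""Set the hostname from the Azure instance name, mirroring flatcar-metadata-hostname above."""
import urllib.request

URL = ("http://169.254.169.254/metadata/instance/compute/name"
       "?api-version=2017-08-01&format=text")

req = urllib.request.Request(URL, headers={"Metadata": "true"})
with urllib.request.urlopen(req, timeout=5) as resp:
    name = resp.read().decode().strip()

# coreos-metadata wrote to /sysroot/etc/hostname from the initramfs; on a live
# system the equivalent target is /etc/hostname (requires root).
with open("/etc/hostname", "w") as f:
    f.write(name + "\n")
print("wrote hostname:", name)
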
Dec 12 18:20:02.240000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:20:02.244602 systemd[1]: Starting ignition-files.service - Ignition (files)... Dec 12 18:20:02.245539 kernel: audit: type=1130 audit(1765563602.240:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:20:02.258809 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 12 18:20:02.277549 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:5) scanned by mount (1849) Dec 12 18:20:02.281550 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem c87e2a2e-b8fc-4d1d-98f3-593ea9a0f098 Dec 12 18:20:02.281653 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Dec 12 18:20:02.285809 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Dec 12 18:20:02.285841 kernel: BTRFS info (device nvme0n1p6): turning on async discard Dec 12 18:20:02.285853 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Dec 12 18:20:02.288195 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Dec 12 18:20:02.318890 ignition[1866]: INFO : Ignition 2.22.0 Dec 12 18:20:02.318890 ignition[1866]: INFO : Stage: files Dec 12 18:20:02.322544 ignition[1866]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 12 18:20:02.322544 ignition[1866]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 12 18:20:02.322544 ignition[1866]: DEBUG : files: compiled without relabeling support, skipping Dec 12 18:20:02.328355 ignition[1866]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 12 18:20:02.328355 ignition[1866]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 12 18:20:02.384689 ignition[1866]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 12 18:20:02.388608 ignition[1866]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 12 18:20:02.388608 ignition[1866]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 12 18:20:02.387038 unknown[1866]: wrote ssh authorized keys file for user: core Dec 12 18:20:02.532403 ignition[1866]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Dec 12 18:20:02.535365 ignition[1866]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Dec 12 18:20:02.587358 ignition[1866]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Dec 12 18:20:02.632464 ignition[1866]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Dec 12 18:20:02.636598 ignition[1866]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Dec 12 18:20:02.636598 ignition[1866]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Dec 12 18:20:02.636598 ignition[1866]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Dec 12 18:20:02.636598 ignition[1866]: INFO : files: createFilesystemsFiles: 
createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Dec 12 18:20:02.636598 ignition[1866]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 12 18:20:02.636598 ignition[1866]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 12 18:20:02.636598 ignition[1866]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 12 18:20:02.636598 ignition[1866]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 12 18:20:02.660772 ignition[1866]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 12 18:20:02.660772 ignition[1866]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 12 18:20:02.660772 ignition[1866]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Dec 12 18:20:02.660772 ignition[1866]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Dec 12 18:20:02.660772 ignition[1866]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Dec 12 18:20:02.660772 ignition[1866]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 Dec 12 18:20:02.980465 ignition[1866]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Dec 12 18:20:03.992097 ignition[1866]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Dec 12 18:20:03.992097 ignition[1866]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Dec 12 18:20:04.039410 ignition[1866]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 12 18:20:04.049455 ignition[1866]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 12 18:20:04.049455 ignition[1866]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Dec 12 18:20:04.049455 ignition[1866]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Dec 12 18:20:04.057000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 18:20:04.063471 ignition[1866]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Dec 12 18:20:04.063471 ignition[1866]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 12 18:20:04.063471 ignition[1866]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 12 18:20:04.063471 ignition[1866]: INFO : files: files passed Dec 12 18:20:04.063471 ignition[1866]: INFO : Ignition finished successfully Dec 12 18:20:04.076779 kernel: audit: type=1130 audit(1765563604.057:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:20:04.054035 systemd[1]: Finished ignition-files.service - Ignition (files). Dec 12 18:20:04.069680 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Dec 12 18:20:04.082415 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Dec 12 18:20:04.088430 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 12 18:20:04.090791 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Dec 12 18:20:04.098034 initrd-setup-root-after-ignition[1897]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 12 18:20:04.098034 initrd-setup-root-after-ignition[1897]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Dec 12 18:20:04.119331 kernel: audit: type=1130 audit(1765563604.097:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:20:04.119798 kernel: audit: type=1131 audit(1765563604.101:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:20:04.119810 kernel: audit: type=1130 audit(1765563604.108:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:20:04.097000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:20:04.101000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:20:04.108000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:20:04.108585 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 12 18:20:04.123259 initrd-setup-root-after-ignition[1901]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 12 18:20:04.108998 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Dec 12 18:20:04.115636 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... 
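
The files stage above downloads artifacts (the helm tarball, the kubernetes sysext image) with numbered attempts. A minimal sketch of that retry pattern is below; the backoff schedule is a made-up illustration, since the log does not show Ignition's actual retry timing:

#!/usr/bin/env python3
"""Download a file with numbered retry attempts, like the 'GET ...: attempt #N' lines above."""
import time
import urllib.request

def fetch_with_retries(url, dest, attempts=5, backoff=2.0):
    for attempt in range(1, attempts + 1):
        print(f"GET {url}: attempt #{attempt}")
        try:
            with urllib.request.urlopen(url, timeout=30) as resp, open(dest, "wb") as out:
                out.write(resp.read())
            print("GET result: OK")
            return dest
        except OSError as err:  # urllib errors are OSError subclasses
            print(f"GET result: {err}")
            if attempt < attempts:
                time.sleep(backoff * attempt)  # hypothetical linear backoff
    raise RuntimeError(f"giving up on {url} after {attempts} attempts")

# Example (URL taken from the log above):
# fetch_with_retries("https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz",
#                    "/opt/helm-v3.17.3-linux-amd64.tar.gz")
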
Dec 12 18:20:04.162908 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 12 18:20:04.163004 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Dec 12 18:20:04.167000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:20:04.171806 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Dec 12 18:20:04.185606 kernel: audit: type=1130 audit(1765563604.167:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:20:04.185635 kernel: audit: type=1131 audit(1765563604.169:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:20:04.169000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:20:04.176408 systemd[1]: Reached target initrd.target - Initrd Default Target. Dec 12 18:20:04.176909 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Dec 12 18:20:04.178636 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Dec 12 18:20:04.196721 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 12 18:20:04.198000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:20:04.203108 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Dec 12 18:20:04.209601 kernel: audit: type=1130 audit(1765563604.198:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:20:04.226467 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Dec 12 18:20:04.227493 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Dec 12 18:20:04.234246 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 12 18:20:04.237217 systemd[1]: Stopped target timers.target - Timer Units. Dec 12 18:20:04.238597 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 12 18:20:04.241000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:20:04.238716 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 12 18:20:04.241686 systemd[1]: Stopped target initrd.target - Initrd Default Target. Dec 12 18:20:04.245689 systemd[1]: Stopped target basic.target - Basic System. Dec 12 18:20:04.248839 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Dec 12 18:20:04.251713 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Dec 12 18:20:04.255679 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. 
Dec 12 18:20:04.258440 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Dec 12 18:20:04.261662 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Dec 12 18:20:04.265681 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Dec 12 18:20:04.270700 systemd[1]: Stopped target sysinit.target - System Initialization. Dec 12 18:20:04.273407 systemd[1]: Stopped target local-fs.target - Local File Systems. Dec 12 18:20:04.279481 systemd[1]: Stopped target swap.target - Swaps. Dec 12 18:20:04.281128 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 12 18:20:04.292000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:20:04.281238 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Dec 12 18:20:04.296060 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Dec 12 18:20:04.299007 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 12 18:20:04.299000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:20:04.299000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:20:04.300000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:20:04.300000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:20:04.305000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:20:04.306000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:20:04.308000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:20:04.299107 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Dec 12 18:20:04.299341 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 12 18:20:04.299444 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 12 18:20:04.299588 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Dec 12 18:20:04.300168 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 12 18:20:04.300272 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 12 18:20:04.300501 systemd[1]: ignition-files.service: Deactivated successfully. Dec 12 18:20:04.300671 systemd[1]: Stopped ignition-files.service - Ignition (files). 
Dec 12 18:20:04.344000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:20:04.344000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:20:04.300867 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Dec 12 18:20:04.300976 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Dec 12 18:20:04.303632 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Dec 12 18:20:04.305762 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Dec 12 18:20:04.306036 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Dec 12 18:20:04.306171 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 12 18:20:04.306503 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 12 18:20:04.306632 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Dec 12 18:20:04.370640 ignition[1922]: INFO : Ignition 2.22.0 Dec 12 18:20:04.370640 ignition[1922]: INFO : Stage: umount Dec 12 18:20:04.370640 ignition[1922]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 12 18:20:04.370640 ignition[1922]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 12 18:20:04.370640 ignition[1922]: INFO : umount: umount passed Dec 12 18:20:04.370640 ignition[1922]: INFO : Ignition finished successfully Dec 12 18:20:04.372000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:20:04.377000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:20:04.306840 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 12 18:20:04.306943 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Dec 12 18:20:04.325583 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 12 18:20:04.385000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:20:04.389000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:20:04.334822 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Dec 12 18:20:04.370504 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 12 18:20:04.397000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:20:04.370638 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Dec 12 18:20:04.373162 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 12 18:20:04.373211 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Dec 12 18:20:04.378250 systemd[1]: ignition-kargs.service: Deactivated successfully. 
Dec 12 18:20:04.378291 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Dec 12 18:20:04.386453 systemd[1]: ignition-fetch.service: Deactivated successfully. Dec 12 18:20:04.386497 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Dec 12 18:20:04.389903 systemd[1]: Stopped target network.target - Network. Dec 12 18:20:04.392713 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 12 18:20:04.392769 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Dec 12 18:20:04.397625 systemd[1]: Stopped target paths.target - Path Units. Dec 12 18:20:04.401569 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 12 18:20:04.405933 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 12 18:20:04.408598 systemd[1]: Stopped target slices.target - Slice Units. Dec 12 18:20:04.419912 systemd[1]: Stopped target sockets.target - Socket Units. Dec 12 18:20:04.422858 systemd[1]: iscsid.socket: Deactivated successfully. Dec 12 18:20:04.422900 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Dec 12 18:20:04.429664 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 12 18:20:04.429693 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 12 18:20:04.434773 systemd[1]: systemd-journald-audit.socket: Deactivated successfully. Dec 12 18:20:04.434793 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket. Dec 12 18:20:04.440249 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 12 18:20:04.440302 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Dec 12 18:20:04.454000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:20:04.454613 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Dec 12 18:20:04.454661 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Dec 12 18:20:04.458671 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Dec 12 18:20:04.458000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:20:04.459474 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Dec 12 18:20:04.460943 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 12 18:20:04.469842 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 12 18:20:04.469940 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Dec 12 18:20:04.473000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:20:04.475456 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 12 18:20:04.476000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:20:04.475568 systemd[1]: Stopped systemd-networkd.service - Network Configuration. 
Dec 12 18:20:04.479000 audit: BPF prog-id=6 op=UNLOAD Dec 12 18:20:04.482000 audit: BPF prog-id=9 op=UNLOAD Dec 12 18:20:04.485000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:20:04.479309 systemd[1]: Stopped target network-pre.target - Preparation for Network. Dec 12 18:20:04.483313 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 12 18:20:04.494000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:20:04.483341 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Dec 12 18:20:04.498000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:20:04.484253 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Dec 12 18:20:04.484595 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 12 18:20:04.484641 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 12 18:20:04.489431 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 12 18:20:04.491275 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 12 18:20:04.495987 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 12 18:20:04.496037 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Dec 12 18:20:04.499434 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 12 18:20:04.519897 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 12 18:20:04.520024 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 12 18:20:04.528000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:20:04.528797 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 12 18:20:04.528836 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Dec 12 18:20:04.539000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:20:04.539000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:20:04.540000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:20:04.542000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:20:04.542000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 18:20:04.543000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:20:04.543000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:20:04.532223 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 12 18:20:04.555000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:20:04.532251 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Dec 12 18:20:04.535593 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 12 18:20:04.535647 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Dec 12 18:20:04.539908 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 12 18:20:04.539951 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Dec 12 18:20:04.540457 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 12 18:20:04.572000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:20:04.572000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:20:04.573000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:20:04.574000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:20:04.540489 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 12 18:20:04.542648 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Dec 12 18:20:04.542775 systemd[1]: systemd-network-generator.service: Deactivated successfully. Dec 12 18:20:04.542824 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Dec 12 18:20:04.543151 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 12 18:20:04.597627 kernel: hv_netvsc f8615163-0000-1000-2000-000d3ab7ab4d eth0: Data path switched from VF: enP30832s1 Dec 12 18:20:04.597843 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64 Dec 12 18:20:04.543196 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 12 18:20:04.543415 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Dec 12 18:20:04.543445 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 12 18:20:04.543750 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 12 18:20:04.543783 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. 
Dec 12 18:20:04.607000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:20:04.547974 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 12 18:20:04.548028 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 12 18:20:04.572666 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 12 18:20:04.572765 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Dec 12 18:20:04.573971 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 12 18:20:04.574060 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Dec 12 18:20:04.574401 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 12 18:20:04.574470 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Dec 12 18:20:04.597962 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 12 18:20:04.599811 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Dec 12 18:20:04.607959 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Dec 12 18:20:04.611611 systemd[1]: Starting initrd-switch-root.service - Switch Root... Dec 12 18:20:04.638915 systemd[1]: Switching root. Dec 12 18:20:04.725983 systemd-journald[1039]: Journal stopped Dec 12 18:20:09.298451 systemd-journald[1039]: Received SIGTERM from PID 1 (systemd). Dec 12 18:20:09.298489 kernel: SELinux: policy capability network_peer_controls=1 Dec 12 18:20:09.298509 kernel: SELinux: policy capability open_perms=1 Dec 12 18:20:09.298535 kernel: SELinux: policy capability extended_socket_class=1 Dec 12 18:20:09.298545 kernel: SELinux: policy capability always_check_network=0 Dec 12 18:20:09.298556 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 12 18:20:09.298566 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 12 18:20:09.298579 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 12 18:20:09.298589 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 12 18:20:09.298599 kernel: SELinux: policy capability userspace_initial_context=0 Dec 12 18:20:09.298610 systemd[1]: Successfully loaded SELinux policy in 516.144ms. Dec 12 18:20:09.298622 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 4.425ms. Dec 12 18:20:09.298633 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Dec 12 18:20:09.298646 systemd[1]: Detected virtualization microsoft. Dec 12 18:20:09.298659 systemd[1]: Detected architecture x86-64. Dec 12 18:20:09.298670 systemd[1]: Detected first boot. Dec 12 18:20:09.298681 systemd[1]: Hostname set to . Dec 12 18:20:09.298694 systemd[1]: Initializing machine ID from random generator. Dec 12 18:20:09.298704 zram_generator::config[1966]: No configuration found. 
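
After the root switch, systemd prints its version and compile-time feature string. A quick way to turn that "+FOO -BAR" list into something queryable; the string below is copied from the log, and the parsing is a simple illustration rather than any systemd tooling:

#!/usr/bin/env python3
"""Parse a systemd feature string like the one logged after the root switch."""

FEATURES = ("+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS "
            "+OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD "
            "+LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT "
            "-QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON "
            "+UTMP -SYSVINIT +LIBARCHIVE")

def parse_features(s):
    # '+NAME' means compiled in, '-NAME' means compiled out.
    return {tok[1:]: tok.startswith("+") for tok in s.split()}

flags = parse_features(FEATURES)
print("SELinux support:", flags["SELINUX"])
print("compiled out:", sorted(name for name, on in flags.items() if not on))
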
Dec 12 18:20:09.298715 kernel: Guest personality initialized and is inactive Dec 12 18:20:09.298726 kernel: VMCI host device registered (name=vmci, major=10, minor=259) Dec 12 18:20:09.298737 kernel: Initialized host personality Dec 12 18:20:09.298747 kernel: NET: Registered PF_VSOCK protocol family Dec 12 18:20:09.298758 systemd[1]: Populated /etc with preset unit settings. Dec 12 18:20:09.298769 kernel: kauditd_printk_skb: 45 callbacks suppressed Dec 12 18:20:09.298780 kernel: audit: type=1334 audit(1765563608.807:93): prog-id=12 op=LOAD Dec 12 18:20:09.298790 kernel: audit: type=1334 audit(1765563608.807:94): prog-id=3 op=UNLOAD Dec 12 18:20:09.298801 kernel: audit: type=1334 audit(1765563608.807:95): prog-id=13 op=LOAD Dec 12 18:20:09.298812 systemd[1]: initrd-switch-root.service: Deactivated successfully. Dec 12 18:20:09.298822 kernel: audit: type=1334 audit(1765563608.807:96): prog-id=14 op=LOAD Dec 12 18:20:09.298834 kernel: audit: type=1334 audit(1765563608.807:97): prog-id=4 op=UNLOAD Dec 12 18:20:09.298844 kernel: audit: type=1334 audit(1765563608.807:98): prog-id=5 op=UNLOAD Dec 12 18:20:09.298854 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Dec 12 18:20:09.298864 kernel: audit: type=1131 audit(1765563608.808:99): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:20:09.298874 kernel: audit: type=1334 audit(1765563608.821:100): prog-id=12 op=UNLOAD Dec 12 18:20:09.298886 kernel: audit: type=1130 audit(1765563608.826:101): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:20:09.298899 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Dec 12 18:20:09.298910 kernel: audit: type=1131 audit(1765563608.826:102): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:20:09.298924 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Dec 12 18:20:09.298935 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Dec 12 18:20:09.298949 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Dec 12 18:20:09.298963 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Dec 12 18:20:09.298975 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Dec 12 18:20:09.298986 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Dec 12 18:20:09.298997 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Dec 12 18:20:09.299007 systemd[1]: Created slice user.slice - User and Session Slice. Dec 12 18:20:09.299018 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 12 18:20:09.299031 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 12 18:20:09.299045 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Dec 12 18:20:09.299057 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. 
Dec 12 18:20:09.299067 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Dec 12 18:20:09.299079 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 12 18:20:09.299089 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Dec 12 18:20:09.299101 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 12 18:20:09.299115 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 12 18:20:09.299127 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Dec 12 18:20:09.299138 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Dec 12 18:20:09.299149 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Dec 12 18:20:09.299159 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Dec 12 18:20:09.299170 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 12 18:20:09.299182 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 12 18:20:09.299195 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes. Dec 12 18:20:09.299207 systemd[1]: Reached target slices.target - Slice Units. Dec 12 18:20:09.299217 systemd[1]: Reached target swap.target - Swaps. Dec 12 18:20:09.299228 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Dec 12 18:20:09.299238 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Dec 12 18:20:09.299252 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Dec 12 18:20:09.299264 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Dec 12 18:20:09.299277 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket. Dec 12 18:20:09.299288 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 12 18:20:09.299298 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket. Dec 12 18:20:09.299311 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket. Dec 12 18:20:09.299322 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 12 18:20:09.299334 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 12 18:20:09.299346 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Dec 12 18:20:09.299358 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Dec 12 18:20:09.299369 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Dec 12 18:20:09.299379 systemd[1]: Mounting media.mount - External Media Directory... Dec 12 18:20:09.299392 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 12 18:20:09.299404 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Dec 12 18:20:09.299415 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Dec 12 18:20:09.299427 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Dec 12 18:20:09.299439 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 12 18:20:09.299450 systemd[1]: Reached target machines.target - Containers. 
Dec 12 18:20:09.299462 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Dec 12 18:20:09.299474 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 12 18:20:09.299485 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 12 18:20:09.299498 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Dec 12 18:20:09.299509 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 12 18:20:09.299619 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 12 18:20:09.299635 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 12 18:20:09.299655 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Dec 12 18:20:09.299667 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 12 18:20:09.299681 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 12 18:20:09.299696 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Dec 12 18:20:09.299710 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Dec 12 18:20:09.299723 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Dec 12 18:20:09.299738 systemd[1]: Stopped systemd-fsck-usr.service. Dec 12 18:20:09.299782 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 12 18:20:09.299796 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 12 18:20:09.299811 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 12 18:20:09.299825 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 12 18:20:09.299839 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Dec 12 18:20:09.299854 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Dec 12 18:20:09.299870 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 12 18:20:09.299883 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 12 18:20:09.299898 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Dec 12 18:20:09.299914 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Dec 12 18:20:09.299927 systemd[1]: Mounted media.mount - External Media Directory. Dec 12 18:20:09.299941 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Dec 12 18:20:09.299987 systemd-journald[2060]: Collecting audit messages is enabled. Dec 12 18:20:09.300022 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Dec 12 18:20:09.300036 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Dec 12 18:20:09.300053 systemd-journald[2060]: Journal started Dec 12 18:20:09.300082 systemd-journald[2060]: Runtime Journal (/run/log/journal/861f3e4059504aaaab5c535c0aa1f329) is 8M, max 158.5M, 150.5M free. 
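
Audit records (SERVICE_START, SERVICE_STOP, BPF prog load/unload) are interleaved throughout this capture. A small parser that pulls the unit name and result out of the SERVICE_* records when skimming a log like this one; the regex only targets the fields visible in these entries and is not a general audit-log parser:

#!/usr/bin/env python3
"""Extract unit= and res= from audit SERVICE_START/SERVICE_STOP records in a log capture."""
import re
import sys

AUDIT_RE = re.compile(
    r"audit\[\d+\]:\s+(?P<type>SERVICE_START|SERVICE_STOP)\b.*?"
    r"unit=(?P<unit>\S+).*?res=(?P<res>\w+)"
)

def scan(lines):
    # finditer handles several records concatenated on one physical line,
    # which is how the entries appear in this capture.
    for line in lines:
        for m in AUDIT_RE.finditer(line):
            yield m.group("type"), m.group("unit"), m.group("res")

if __name__ == "__main__":
    for evt, unit, res in scan(sys.stdin):
        print(f"{evt:<13} {unit:<40} {res}")
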
Dec 12 18:20:08.959000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Dec 12 18:20:09.186000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:20:09.194000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:20:09.202000 audit: BPF prog-id=14 op=UNLOAD Dec 12 18:20:09.202000 audit: BPF prog-id=13 op=UNLOAD Dec 12 18:20:09.203000 audit: BPF prog-id=15 op=LOAD Dec 12 18:20:09.203000 audit: BPF prog-id=16 op=LOAD Dec 12 18:20:09.203000 audit: BPF prog-id=17 op=LOAD Dec 12 18:20:09.291000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Dec 12 18:20:09.291000 audit[2060]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=5 a1=7ffc903dc060 a2=4000 a3=0 items=0 ppid=1 pid=2060 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:09.291000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Dec 12 18:20:08.796196 systemd[1]: Queued start job for default target multi-user.target. Dec 12 18:20:08.808628 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Dec 12 18:20:08.809015 systemd[1]: systemd-journald.service: Deactivated successfully. Dec 12 18:20:09.305539 systemd[1]: Started systemd-journald.service - Journal Service. Dec 12 18:20:09.307000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:20:09.308298 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Dec 12 18:20:09.310000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:20:09.310902 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 12 18:20:09.314000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:20:09.317000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:20:09.317000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:20:09.320000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 18:20:09.320000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:20:09.314887 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 12 18:20:09.315049 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Dec 12 18:20:09.318364 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 12 18:20:09.318552 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 12 18:20:09.321032 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 12 18:20:09.321191 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 12 18:20:09.323000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:20:09.323000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:20:09.324727 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 12 18:20:09.329706 kernel: fuse: init (API version 7.41) Dec 12 18:20:09.327000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:20:09.327000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:20:09.324948 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 12 18:20:09.328220 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 12 18:20:09.333000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:20:09.334600 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 12 18:20:09.337000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:20:09.338296 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 12 18:20:09.338536 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Dec 12 18:20:09.340000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:20:09.340000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 18:20:09.345000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:20:09.342794 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Dec 12 18:20:09.352000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:20:09.350193 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Dec 12 18:20:09.361429 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 12 18:20:09.366033 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Dec 12 18:20:09.368873 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 12 18:20:09.368906 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 12 18:20:09.372497 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Dec 12 18:20:09.378928 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 12 18:20:09.379036 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Dec 12 18:20:09.383621 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Dec 12 18:20:09.389683 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Dec 12 18:20:09.392284 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 12 18:20:09.394709 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Dec 12 18:20:09.394823 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 12 18:20:09.396631 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 12 18:20:09.398672 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Dec 12 18:20:09.408650 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 12 18:20:09.420542 kernel: ACPI: bus type drm_connector registered Dec 12 18:20:09.421892 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 12 18:20:09.422069 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 12 18:20:09.425000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:20:09.425000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:20:09.431137 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. 
Dec 12 18:20:09.432000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:20:09.434816 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Dec 12 18:20:09.437000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:20:09.437959 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Dec 12 18:20:09.443257 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Dec 12 18:20:09.452368 systemd-journald[2060]: Time spent on flushing to /var/log/journal/861f3e4059504aaaab5c535c0aa1f329 is 15.643ms for 1140 entries. Dec 12 18:20:09.452368 systemd-journald[2060]: System Journal (/var/log/journal/861f3e4059504aaaab5c535c0aa1f329) is 8M, max 2.2G, 2.2G free. Dec 12 18:20:09.573667 systemd-journald[2060]: Received client request to flush runtime journal. Dec 12 18:20:09.573713 kernel: loop1: detected capacity change from 0 to 229808 Dec 12 18:20:09.471000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:20:09.532000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:20:09.470719 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 12 18:20:09.527059 systemd-tmpfiles[2106]: ACLs are not supported, ignoring. Dec 12 18:20:09.527067 systemd-tmpfiles[2106]: ACLs are not supported, ignoring. Dec 12 18:20:09.529751 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 12 18:20:09.533589 systemd[1]: Starting systemd-sysusers.service - Create System Users... Dec 12 18:20:09.574588 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Dec 12 18:20:09.576000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:20:09.580539 kernel: loop2: detected capacity change from 0 to 119256 Dec 12 18:20:09.601490 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Dec 12 18:20:09.603000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:20:09.697666 systemd[1]: Finished systemd-sysusers.service - Create System Users. Dec 12 18:20:09.700000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 18:20:09.701000 audit: BPF prog-id=18 op=LOAD Dec 12 18:20:09.701000 audit: BPF prog-id=19 op=LOAD Dec 12 18:20:09.701000 audit: BPF prog-id=20 op=LOAD Dec 12 18:20:09.702332 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer... Dec 12 18:20:09.708000 audit: BPF prog-id=21 op=LOAD Dec 12 18:20:09.711663 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 12 18:20:09.715688 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 12 18:20:09.738468 systemd-tmpfiles[2126]: ACLs are not supported, ignoring. Dec 12 18:20:09.738486 systemd-tmpfiles[2126]: ACLs are not supported, ignoring. Dec 12 18:20:09.742096 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 12 18:20:09.743000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:20:09.784000 audit: BPF prog-id=22 op=LOAD Dec 12 18:20:09.784000 audit: BPF prog-id=23 op=LOAD Dec 12 18:20:09.784000 audit: BPF prog-id=24 op=LOAD Dec 12 18:20:09.787007 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager... Dec 12 18:20:09.790000 audit: BPF prog-id=25 op=LOAD Dec 12 18:20:09.790000 audit: BPF prog-id=26 op=LOAD Dec 12 18:20:09.790000 audit: BPF prog-id=27 op=LOAD Dec 12 18:20:09.792484 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Dec 12 18:20:09.812534 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 12 18:20:09.815850 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Dec 12 18:20:09.820610 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Dec 12 18:20:09.829448 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Dec 12 18:20:09.835293 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Dec 12 18:20:09.865236 systemd[1]: Started systemd-userdbd.service - User Database Manager. Dec 12 18:20:09.868167 systemd-nsresourced[2129]: Not setting up BPF subsystem, as functionality has been disabled at compile time. Dec 12 18:20:09.868000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:20:09.869166 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager. Dec 12 18:20:09.871000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:20:09.961455 systemd-oomd[2124]: No swap; memory pressure usage will be degraded Dec 12 18:20:09.962029 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer. Dec 12 18:20:09.963000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:20:10.013589 systemd-resolved[2125]: Positive Trust Anchors: Dec 12 18:20:10.013606 systemd-resolved[2125]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 12 18:20:10.013610 systemd-resolved[2125]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Dec 12 18:20:10.013644 systemd-resolved[2125]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 12 18:20:10.028543 kernel: loop3: detected capacity change from 0 to 27736 Dec 12 18:20:10.136356 systemd-resolved[2125]: Using system hostname 'ci-4515.1.0-a-53d1559fda'. Dec 12 18:20:10.137904 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 12 18:20:10.141000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:20:10.141754 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 12 18:20:10.219549 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Dec 12 18:20:10.222000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:20:10.222000 audit: BPF prog-id=8 op=UNLOAD Dec 12 18:20:10.222000 audit: BPF prog-id=7 op=UNLOAD Dec 12 18:20:10.222000 audit: BPF prog-id=28 op=LOAD Dec 12 18:20:10.222000 audit: BPF prog-id=29 op=LOAD Dec 12 18:20:10.224032 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 12 18:20:10.252301 systemd-udevd[2151]: Using default interface naming scheme 'v257'. Dec 12 18:20:10.427723 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 12 18:20:10.430000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:20:10.432000 audit: BPF prog-id=30 op=LOAD Dec 12 18:20:10.434020 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 12 18:20:10.496306 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Dec 12 18:20:10.526411 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#9 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Dec 12 18:20:10.547538 kernel: loop4: detected capacity change from 0 to 111544 Dec 12 18:20:10.552613 systemd-networkd[2161]: lo: Link UP Dec 12 18:20:10.552621 systemd-networkd[2161]: lo: Gained carrier Dec 12 18:20:10.554421 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 12 18:20:10.554921 systemd-networkd[2161]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 12 18:20:10.554996 systemd-networkd[2161]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Dec 12 18:20:10.558298 kernel: hv_vmbus: registering driver hyperv_fb Dec 12 18:20:10.558357 kernel: mana 7870:00:00.0 enP30832s1: Configured vPort 0 PD 18 DB 16 Dec 12 18:20:10.560000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:20:10.560914 systemd[1]: Reached target network.target - Network. Dec 12 18:20:10.562567 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Dec 12 18:20:10.562634 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64 Dec 12 18:20:10.567226 kernel: hv_netvsc f8615163-0000-1000-2000-000d3ab7ab4d eth0: Data path switched to VF: enP30832s1 Dec 12 18:20:10.567460 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Dec 12 18:20:10.569610 systemd-networkd[2161]: enP30832s1: Link UP Dec 12 18:20:10.569998 systemd-networkd[2161]: eth0: Link UP Dec 12 18:20:10.570206 systemd-networkd[2161]: eth0: Gained carrier Dec 12 18:20:10.570266 systemd-networkd[2161]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 12 18:20:10.572379 kernel: Console: switching to colour dummy device 80x25 Dec 12 18:20:10.574167 systemd-networkd[2161]: enP30832s1: Gained carrier Dec 12 18:20:10.574723 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Dec 12 18:20:10.580956 kernel: Console: switching to colour frame buffer device 128x48 Dec 12 18:20:10.579787 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Dec 12 18:20:10.585613 systemd-networkd[2161]: eth0: DHCPv4 address 10.200.8.12/24, gateway 10.200.8.1 acquired from 168.63.129.16 Dec 12 18:20:10.594549 kernel: mousedev: PS/2 mouse device common for all mice Dec 12 18:20:10.604561 kernel: hv_vmbus: registering driver hv_balloon Dec 12 18:20:10.606624 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Dec 12 18:20:10.625665 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Dec 12 18:20:10.628000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-persistent-storage comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:20:10.666608 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 12 18:20:10.674399 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 12 18:20:10.674786 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 12 18:20:10.677000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:20:10.677000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:20:10.682785 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 12 18:20:10.716754 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Dec 12 18:20:10.717825 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 12 18:20:10.720000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:20:10.720000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:20:10.725771 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 12 18:20:10.889813 kernel: kvm_intel: Using Hyper-V Enlightened VMCS Dec 12 18:20:10.904865 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - MSFT NVMe Accelerator v1.0 OEM. Dec 12 18:20:10.908313 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Dec 12 18:20:10.973795 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Dec 12 18:20:10.973000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:20:10.984548 kernel: loop5: detected capacity change from 0 to 229808 Dec 12 18:20:10.998538 kernel: loop6: detected capacity change from 0 to 119256 Dec 12 18:20:11.010540 kernel: loop7: detected capacity change from 0 to 27736 Dec 12 18:20:11.019623 kernel: loop1: detected capacity change from 0 to 111544 Dec 12 18:20:11.032458 (sd-merge)[2240]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw', 'oem-azure.raw'. Dec 12 18:20:11.035460 (sd-merge)[2240]: Merged extensions into '/usr'. Dec 12 18:20:11.039313 systemd[1]: Reload requested from client PID 2105 ('systemd-sysext') (unit systemd-sysext.service)... Dec 12 18:20:11.039327 systemd[1]: Reloading... Dec 12 18:20:11.096541 zram_generator::config[2279]: No configuration found. Dec 12 18:20:11.303358 systemd[1]: Reloading finished in 263 ms. Dec 12 18:20:11.333736 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Dec 12 18:20:11.336000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:20:11.337009 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 12 18:20:11.340000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:20:11.351540 systemd[1]: Starting ensure-sysext.service... Dec 12 18:20:11.357000 audit: BPF prog-id=31 op=LOAD Dec 12 18:20:11.357000 audit: BPF prog-id=22 op=UNLOAD Dec 12 18:20:11.357000 audit: BPF prog-id=32 op=LOAD Dec 12 18:20:11.357000 audit: BPF prog-id=33 op=LOAD Dec 12 18:20:11.357000 audit: BPF prog-id=23 op=UNLOAD Dec 12 18:20:11.357000 audit: BPF prog-id=24 op=UNLOAD Dec 12 18:20:11.355588 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... 
Dec 12 18:20:11.358000 audit: BPF prog-id=34 op=LOAD Dec 12 18:20:11.359000 audit: BPF prog-id=30 op=UNLOAD Dec 12 18:20:11.360000 audit: BPF prog-id=35 op=LOAD Dec 12 18:20:11.360000 audit: BPF prog-id=25 op=UNLOAD Dec 12 18:20:11.360000 audit: BPF prog-id=36 op=LOAD Dec 12 18:20:11.360000 audit: BPF prog-id=37 op=LOAD Dec 12 18:20:11.360000 audit: BPF prog-id=26 op=UNLOAD Dec 12 18:20:11.360000 audit: BPF prog-id=27 op=UNLOAD Dec 12 18:20:11.361000 audit: BPF prog-id=38 op=LOAD Dec 12 18:20:11.361000 audit: BPF prog-id=18 op=UNLOAD Dec 12 18:20:11.361000 audit: BPF prog-id=39 op=LOAD Dec 12 18:20:11.361000 audit: BPF prog-id=40 op=LOAD Dec 12 18:20:11.362000 audit: BPF prog-id=19 op=UNLOAD Dec 12 18:20:11.362000 audit: BPF prog-id=20 op=UNLOAD Dec 12 18:20:11.362000 audit: BPF prog-id=41 op=LOAD Dec 12 18:20:11.362000 audit: BPF prog-id=15 op=UNLOAD Dec 12 18:20:11.362000 audit: BPF prog-id=42 op=LOAD Dec 12 18:20:11.362000 audit: BPF prog-id=43 op=LOAD Dec 12 18:20:11.362000 audit: BPF prog-id=16 op=UNLOAD Dec 12 18:20:11.362000 audit: BPF prog-id=17 op=UNLOAD Dec 12 18:20:11.363000 audit: BPF prog-id=44 op=LOAD Dec 12 18:20:11.365000 audit: BPF prog-id=21 op=UNLOAD Dec 12 18:20:11.365000 audit: BPF prog-id=45 op=LOAD Dec 12 18:20:11.365000 audit: BPF prog-id=46 op=LOAD Dec 12 18:20:11.365000 audit: BPF prog-id=28 op=UNLOAD Dec 12 18:20:11.365000 audit: BPF prog-id=29 op=UNLOAD Dec 12 18:20:11.372309 systemd[1]: Reload requested from client PID 2335 ('systemctl') (unit ensure-sysext.service)... Dec 12 18:20:11.372329 systemd[1]: Reloading... Dec 12 18:20:11.394063 systemd-tmpfiles[2336]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Dec 12 18:20:11.394080 systemd-tmpfiles[2336]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Dec 12 18:20:11.394277 systemd-tmpfiles[2336]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 12 18:20:11.395072 systemd-tmpfiles[2336]: ACLs are not supported, ignoring. Dec 12 18:20:11.395125 systemd-tmpfiles[2336]: ACLs are not supported, ignoring. Dec 12 18:20:11.418482 systemd-tmpfiles[2336]: Detected autofs mount point /boot during canonicalization of boot. Dec 12 18:20:11.418495 systemd-tmpfiles[2336]: Skipping /boot Dec 12 18:20:11.428643 systemd-tmpfiles[2336]: Detected autofs mount point /boot during canonicalization of boot. Dec 12 18:20:11.428655 systemd-tmpfiles[2336]: Skipping /boot Dec 12 18:20:11.441552 zram_generator::config[2369]: No configuration found. Dec 12 18:20:11.637428 systemd[1]: Reloading finished in 264 ms. 
Dec 12 18:20:11.650000 audit: BPF prog-id=47 op=LOAD Dec 12 18:20:11.650000 audit: BPF prog-id=34 op=UNLOAD Dec 12 18:20:11.651000 audit: BPF prog-id=48 op=LOAD Dec 12 18:20:11.651000 audit: BPF prog-id=31 op=UNLOAD Dec 12 18:20:11.651000 audit: BPF prog-id=49 op=LOAD Dec 12 18:20:11.651000 audit: BPF prog-id=50 op=LOAD Dec 12 18:20:11.651000 audit: BPF prog-id=32 op=UNLOAD Dec 12 18:20:11.651000 audit: BPF prog-id=33 op=UNLOAD Dec 12 18:20:11.651000 audit: BPF prog-id=51 op=LOAD Dec 12 18:20:11.651000 audit: BPF prog-id=38 op=UNLOAD Dec 12 18:20:11.651000 audit: BPF prog-id=52 op=LOAD Dec 12 18:20:11.651000 audit: BPF prog-id=53 op=LOAD Dec 12 18:20:11.651000 audit: BPF prog-id=39 op=UNLOAD Dec 12 18:20:11.651000 audit: BPF prog-id=40 op=UNLOAD Dec 12 18:20:11.652000 audit: BPF prog-id=54 op=LOAD Dec 12 18:20:11.652000 audit: BPF prog-id=44 op=UNLOAD Dec 12 18:20:11.653000 audit: BPF prog-id=55 op=LOAD Dec 12 18:20:11.653000 audit: BPF prog-id=41 op=UNLOAD Dec 12 18:20:11.653000 audit: BPF prog-id=56 op=LOAD Dec 12 18:20:11.653000 audit: BPF prog-id=57 op=LOAD Dec 12 18:20:11.653000 audit: BPF prog-id=42 op=UNLOAD Dec 12 18:20:11.654000 audit: BPF prog-id=43 op=UNLOAD Dec 12 18:20:11.654000 audit: BPF prog-id=58 op=LOAD Dec 12 18:20:11.654000 audit: BPF prog-id=59 op=LOAD Dec 12 18:20:11.654000 audit: BPF prog-id=45 op=UNLOAD Dec 12 18:20:11.654000 audit: BPF prog-id=46 op=UNLOAD Dec 12 18:20:11.655000 audit: BPF prog-id=60 op=LOAD Dec 12 18:20:11.661000 audit: BPF prog-id=35 op=UNLOAD Dec 12 18:20:11.661000 audit: BPF prog-id=61 op=LOAD Dec 12 18:20:11.661000 audit: BPF prog-id=62 op=LOAD Dec 12 18:20:11.661000 audit: BPF prog-id=36 op=UNLOAD Dec 12 18:20:11.661000 audit: BPF prog-id=37 op=UNLOAD Dec 12 18:20:11.664104 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 12 18:20:11.672000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:20:11.679971 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 12 18:20:11.684069 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Dec 12 18:20:11.688026 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Dec 12 18:20:11.697271 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Dec 12 18:20:11.701763 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Dec 12 18:20:11.707967 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 12 18:20:11.708136 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 12 18:20:11.717608 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 12 18:20:11.720780 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 12 18:20:11.725053 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 12 18:20:11.727907 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 12 18:20:11.728430 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. 
Dec 12 18:20:11.728577 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 12 18:20:11.728680 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 12 18:20:11.731110 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 12 18:20:11.736000 audit[2434]: SYSTEM_BOOT pid=2434 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Dec 12 18:20:11.737909 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 12 18:20:11.741000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:20:11.741000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:20:11.742422 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 12 18:20:11.742760 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 12 18:20:11.745000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:20:11.745000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:20:11.745916 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 12 18:20:11.746054 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 12 18:20:11.748000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:20:11.748000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:20:11.759101 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 12 18:20:11.760061 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 12 18:20:11.762800 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 12 18:20:11.767687 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 12 18:20:11.779703 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 12 18:20:11.783611 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Dec 12 18:20:11.783784 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Dec 12 18:20:11.783881 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 12 18:20:11.783976 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 12 18:20:11.785172 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Dec 12 18:20:11.790000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:20:11.791429 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 12 18:20:11.791662 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 12 18:20:11.792000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:20:11.793000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:20:11.793819 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 12 18:20:11.793953 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 12 18:20:11.795000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:20:11.795000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:20:11.796211 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 12 18:20:11.796394 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 12 18:20:11.799000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:20:11.799000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:20:11.802448 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Dec 12 18:20:11.803000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:20:11.809211 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Dec 12 18:20:11.809452 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 12 18:20:11.810437 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 12 18:20:11.815737 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 12 18:20:11.822403 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 12 18:20:11.826500 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 12 18:20:11.829809 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 12 18:20:11.830231 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Dec 12 18:20:11.830692 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 12 18:20:11.830856 systemd[1]: Reached target time-set.target - System Time Set. Dec 12 18:20:11.833682 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 12 18:20:11.835042 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 12 18:20:11.836116 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 12 18:20:11.839009 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 12 18:20:11.837000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:20:11.837000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:20:11.839314 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 12 18:20:11.842000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:20:11.842000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:20:11.842945 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 12 18:20:11.843076 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 12 18:20:11.846000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:20:11.846000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:20:11.846882 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 12 18:20:11.847014 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Dec 12 18:20:11.849000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:20:11.849000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:20:11.851742 systemd[1]: Finished ensure-sysext.service. Dec 12 18:20:11.854000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:20:11.857026 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 12 18:20:11.857082 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 12 18:20:12.039000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Dec 12 18:20:12.039000 audit[2477]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffdbede3bd0 a2=420 a3=0 items=0 ppid=2430 pid=2477 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:12.039000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 12 18:20:12.040481 augenrules[2477]: No rules Dec 12 18:20:12.040955 systemd[1]: audit-rules.service: Deactivated successfully. Dec 12 18:20:12.041217 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 12 18:20:12.043758 systemd-networkd[2161]: eth0: Gained IPv6LL Dec 12 18:20:12.045331 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Dec 12 18:20:12.049814 systemd[1]: Reached target network-online.target - Network is Online. Dec 12 18:20:12.472157 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Dec 12 18:20:12.475806 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 12 18:20:17.394192 ldconfig[2432]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 12 18:20:17.408967 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Dec 12 18:20:17.411879 systemd[1]: Starting systemd-update-done.service - Update is Completed... Dec 12 18:20:17.426759 systemd[1]: Finished systemd-update-done.service - Update is Completed. Dec 12 18:20:17.428322 systemd[1]: Reached target sysinit.target - System Initialization. Dec 12 18:20:17.430040 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Dec 12 18:20:17.431514 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Dec 12 18:20:17.434634 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Dec 12 18:20:17.436372 systemd[1]: Started logrotate.timer - Daily rotation of log files. 
Dec 12 18:20:17.439633 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Dec 12 18:20:17.442606 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update. Dec 12 18:20:17.444386 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update. Dec 12 18:20:17.447584 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Dec 12 18:20:17.450601 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 12 18:20:17.450634 systemd[1]: Reached target paths.target - Path Units. Dec 12 18:20:17.451790 systemd[1]: Reached target timers.target - Timer Units. Dec 12 18:20:17.454842 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Dec 12 18:20:17.457347 systemd[1]: Starting docker.socket - Docker Socket for the API... Dec 12 18:20:17.462188 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Dec 12 18:20:17.465723 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Dec 12 18:20:17.468573 systemd[1]: Reached target ssh-access.target - SSH Access Available. Dec 12 18:20:17.479022 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Dec 12 18:20:17.482803 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Dec 12 18:20:17.486127 systemd[1]: Listening on docker.socket - Docker Socket for the API. Dec 12 18:20:17.489333 systemd[1]: Reached target sockets.target - Socket Units. Dec 12 18:20:17.490780 systemd[1]: Reached target basic.target - Basic System. Dec 12 18:20:17.492119 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Dec 12 18:20:17.492144 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Dec 12 18:20:17.506728 systemd[1]: Starting chronyd.service - NTP client/server... Dec 12 18:20:17.510466 systemd[1]: Starting containerd.service - containerd container runtime... Dec 12 18:20:17.518651 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Dec 12 18:20:17.523267 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Dec 12 18:20:17.526634 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Dec 12 18:20:17.529873 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Dec 12 18:20:17.536889 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Dec 12 18:20:17.539213 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Dec 12 18:20:17.541702 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Dec 12 18:20:17.545591 systemd[1]: hv_fcopy_uio_daemon.service - Hyper-V FCOPY UIO daemon was skipped because of an unmet condition check (ConditionPathExists=/sys/bus/vmbus/devices/eb765408-105f-49b6-b4aa-c123b64d17d4/uio). Dec 12 18:20:17.547492 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. Dec 12 18:20:17.550186 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). 
Dec 12 18:20:17.551988 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 18:20:17.553408 jq[2498]: false Dec 12 18:20:17.560072 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Dec 12 18:20:17.566631 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Dec 12 18:20:17.571762 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Dec 12 18:20:17.578824 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Dec 12 18:20:17.585010 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Dec 12 18:20:17.590439 KVP[2501]: KVP starting; pid is:2501 Dec 12 18:20:17.592214 oslogin_cache_refresh[2500]: Refreshing passwd entry cache Dec 12 18:20:17.593170 google_oslogin_nss_cache[2500]: oslogin_cache_refresh[2500]: Refreshing passwd entry cache Dec 12 18:20:17.596616 systemd[1]: Starting systemd-logind.service - User Login Management... Dec 12 18:20:17.601674 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 12 18:20:17.602105 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 12 18:20:17.604488 KVP[2501]: KVP LIC Version: 3.1 Dec 12 18:20:17.604610 kernel: hv_utils: KVP IC version 4.0 Dec 12 18:20:17.605673 systemd[1]: Starting update-engine.service - Update Engine... Dec 12 18:20:17.610845 extend-filesystems[2499]: Found /dev/nvme0n1p6 Dec 12 18:20:17.614645 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Dec 12 18:20:17.622931 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Dec 12 18:20:17.626178 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 12 18:20:17.626412 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Dec 12 18:20:17.627658 chronyd[2490]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +NTS +SECHASH +IPV6 -DEBUG) Dec 12 18:20:17.630777 chronyd[2490]: Timezone right/UTC failed leap second check, ignoring Dec 12 18:20:17.630924 chronyd[2490]: Loaded seccomp filter (level 2) Dec 12 18:20:17.636377 extend-filesystems[2499]: Found /dev/nvme0n1p9 Dec 12 18:20:17.638674 google_oslogin_nss_cache[2500]: oslogin_cache_refresh[2500]: Failure getting users, quitting Dec 12 18:20:17.638674 google_oslogin_nss_cache[2500]: oslogin_cache_refresh[2500]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Dec 12 18:20:17.638674 google_oslogin_nss_cache[2500]: oslogin_cache_refresh[2500]: Refreshing group entry cache Dec 12 18:20:17.638325 systemd[1]: Started chronyd.service - NTP client/server. Dec 12 18:20:17.636823 oslogin_cache_refresh[2500]: Failure getting users, quitting Dec 12 18:20:17.636841 oslogin_cache_refresh[2500]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Dec 12 18:20:17.636883 oslogin_cache_refresh[2500]: Refreshing group entry cache Dec 12 18:20:17.642913 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 12 18:20:17.643851 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Dec 12 18:20:17.651088 extend-filesystems[2499]: Checking size of /dev/nvme0n1p9 Dec 12 18:20:17.662342 systemd[1]: motdgen.service: Deactivated successfully. 
Dec 12 18:20:17.665066 google_oslogin_nss_cache[2500]: oslogin_cache_refresh[2500]: Failure getting groups, quitting Dec 12 18:20:17.665066 google_oslogin_nss_cache[2500]: oslogin_cache_refresh[2500]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Dec 12 18:20:17.665009 oslogin_cache_refresh[2500]: Failure getting groups, quitting Dec 12 18:20:17.665020 oslogin_cache_refresh[2500]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Dec 12 18:20:17.667995 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Dec 12 18:20:17.671781 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Dec 12 18:20:17.672002 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Dec 12 18:20:17.680720 jq[2516]: true Dec 12 18:20:17.689191 extend-filesystems[2499]: Resized partition /dev/nvme0n1p9 Dec 12 18:20:17.700884 update_engine[2514]: I20251212 18:20:17.700793 2514 main.cc:92] Flatcar Update Engine starting Dec 12 18:20:17.705315 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Dec 12 18:20:17.712511 tar[2524]: linux-amd64/LICENSE Dec 12 18:20:17.713799 tar[2524]: linux-amd64/helm Dec 12 18:20:17.728764 jq[2549]: true Dec 12 18:20:17.735284 extend-filesystems[2558]: resize2fs 1.47.3 (8-Jul-2025) Dec 12 18:20:17.749327 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 6359552 to 6376955 blocks Dec 12 18:20:17.769457 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 6376955 Dec 12 18:20:17.776135 dbus-daemon[2493]: [system] SELinux support is enabled Dec 12 18:20:17.776326 systemd[1]: Started dbus.service - D-Bus System Message Bus. Dec 12 18:20:17.779078 extend-filesystems[2558]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Dec 12 18:20:17.779078 extend-filesystems[2558]: old_desc_blocks = 4, new_desc_blocks = 4 Dec 12 18:20:17.779078 extend-filesystems[2558]: The filesystem on /dev/nvme0n1p9 is now 6376955 (4k) blocks long. Dec 12 18:20:17.791592 extend-filesystems[2499]: Resized filesystem in /dev/nvme0n1p9 Dec 12 18:20:17.794129 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 12 18:20:17.795586 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Dec 12 18:20:17.799678 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 12 18:20:17.799719 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Dec 12 18:20:17.803609 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 12 18:20:17.803637 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Dec 12 18:20:17.810683 systemd[1]: Started update-engine.service - Update Engine. Dec 12 18:20:17.813818 update_engine[2514]: I20251212 18:20:17.813655 2514 update_check_scheduler.cc:74] Next update check in 10m37s Dec 12 18:20:17.834028 systemd[1]: Started locksmithd.service - Cluster reboot manager. Dec 12 18:20:17.890817 systemd-logind[2512]: New seat seat0. Dec 12 18:20:17.895179 systemd-logind[2512]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard) Dec 12 18:20:17.895414 systemd[1]: Started systemd-logind.service - User Login Management. 
Dec 12 18:20:17.966169 bash[2580]: Updated "/home/core/.ssh/authorized_keys" Dec 12 18:20:17.970586 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Dec 12 18:20:17.977939 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Dec 12 18:20:17.984014 coreos-metadata[2492]: Dec 12 18:20:17.983 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Dec 12 18:20:17.992070 coreos-metadata[2492]: Dec 12 18:20:17.992 INFO Fetch successful Dec 12 18:20:17.992376 coreos-metadata[2492]: Dec 12 18:20:17.992 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Dec 12 18:20:17.998843 coreos-metadata[2492]: Dec 12 18:20:17.998 INFO Fetch successful Dec 12 18:20:17.999453 coreos-metadata[2492]: Dec 12 18:20:17.999 INFO Fetching http://168.63.129.16/machine/84cba41f-5111-418d-8a6f-7bf8ce01c4d9/96fe8fb0%2D96b1%2D41e1%2D972a%2D86e0ae2150be.%5Fci%2D4515.1.0%2Da%2D53d1559fda?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Dec 12 18:20:18.003702 coreos-metadata[2492]: Dec 12 18:20:18.003 INFO Fetch successful Dec 12 18:20:18.004730 coreos-metadata[2492]: Dec 12 18:20:18.003 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Dec 12 18:20:18.024809 coreos-metadata[2492]: Dec 12 18:20:18.024 INFO Fetch successful Dec 12 18:20:18.114955 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Dec 12 18:20:18.120238 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Dec 12 18:20:18.316882 locksmithd[2568]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 12 18:20:18.708509 tar[2524]: linux-amd64/README.md Dec 12 18:20:18.726321 sshd_keygen[2537]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 12 18:20:18.736642 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Dec 12 18:20:18.757337 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Dec 12 18:20:18.765781 systemd[1]: Starting issuegen.service - Generate /run/issue... Dec 12 18:20:18.770625 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Dec 12 18:20:18.787077 containerd[2529]: time="2025-12-12T18:20:18Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Dec 12 18:20:18.788648 containerd[2529]: time="2025-12-12T18:20:18.788613056Z" level=info msg="starting containerd" revision=fcd43222d6b07379a4be9786bda52438f0dd16a1 version=v2.1.5 Dec 12 18:20:18.793121 systemd[1]: issuegen.service: Deactivated successfully. Dec 12 18:20:18.793631 systemd[1]: Finished issuegen.service - Generate /run/issue. Dec 12 18:20:18.801868 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... 
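coreos-metadata is querying two Azure endpoints above: the wire server at 168.63.129.16 and the instance metadata service (IMDS) at 169.254.169.254. The vmSize fetch can be reproduced by hand; note that IMDS requires the Metadata request header. A sketch using the exact URLs from the log:

# IMDS query for the VM size (same URL coreos-metadata fetches above)
curl -s -H "Metadata: true" \
  "http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text"
# Wire server version probe; no special headers required
curl -s "http://168.63.129.16/?comp=versions"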
Dec 12 18:20:18.813968 containerd[2529]: time="2025-12-12T18:20:18.813925460Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="11.323µs" Dec 12 18:20:18.813968 containerd[2529]: time="2025-12-12T18:20:18.813957918Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Dec 12 18:20:18.814058 containerd[2529]: time="2025-12-12T18:20:18.813999612Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Dec 12 18:20:18.814058 containerd[2529]: time="2025-12-12T18:20:18.814012435Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Dec 12 18:20:18.814152 containerd[2529]: time="2025-12-12T18:20:18.814132801Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Dec 12 18:20:18.814152 containerd[2529]: time="2025-12-12T18:20:18.814148600Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 12 18:20:18.814214 containerd[2529]: time="2025-12-12T18:20:18.814198018Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 12 18:20:18.814214 containerd[2529]: time="2025-12-12T18:20:18.814208658Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 12 18:20:18.814411 containerd[2529]: time="2025-12-12T18:20:18.814391007Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 12 18:20:18.814411 containerd[2529]: time="2025-12-12T18:20:18.814405038Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 12 18:20:18.814459 containerd[2529]: time="2025-12-12T18:20:18.814415954Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 12 18:20:18.814459 containerd[2529]: time="2025-12-12T18:20:18.814429231Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Dec 12 18:20:18.814628 containerd[2529]: time="2025-12-12T18:20:18.814605225Z" level=info msg="skip loading plugin" error="EROFS unsupported, please `modprobe erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Dec 12 18:20:18.814628 containerd[2529]: time="2025-12-12T18:20:18.814621831Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Dec 12 18:20:18.814709 containerd[2529]: time="2025-12-12T18:20:18.814695309Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Dec 12 18:20:18.814849 containerd[2529]: time="2025-12-12T18:20:18.814834037Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 12 18:20:18.814884 containerd[2529]: time="2025-12-12T18:20:18.814859668Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs 
type=io.containerd.snapshotter.v1 Dec 12 18:20:18.814884 containerd[2529]: time="2025-12-12T18:20:18.814871794Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Dec 12 18:20:18.814929 containerd[2529]: time="2025-12-12T18:20:18.814902934Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Dec 12 18:20:18.815133 containerd[2529]: time="2025-12-12T18:20:18.815117400Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Dec 12 18:20:18.815177 containerd[2529]: time="2025-12-12T18:20:18.815166225Z" level=info msg="metadata content store policy set" policy=shared Dec 12 18:20:18.823919 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Dec 12 18:20:18.836849 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 12 18:20:18.840446 containerd[2529]: time="2025-12-12T18:20:18.838872592Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Dec 12 18:20:18.840446 containerd[2529]: time="2025-12-12T18:20:18.838974502Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Dec 12 18:20:18.840446 containerd[2529]: time="2025-12-12T18:20:18.839096097Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Dec 12 18:20:18.840446 containerd[2529]: time="2025-12-12T18:20:18.839110828Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Dec 12 18:20:18.840446 containerd[2529]: time="2025-12-12T18:20:18.839124583Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Dec 12 18:20:18.840446 containerd[2529]: time="2025-12-12T18:20:18.839136485Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Dec 12 18:20:18.840446 containerd[2529]: time="2025-12-12T18:20:18.839148145Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Dec 12 18:20:18.840446 containerd[2529]: time="2025-12-12T18:20:18.839169170Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Dec 12 18:20:18.840446 containerd[2529]: time="2025-12-12T18:20:18.839181311Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Dec 12 18:20:18.840446 containerd[2529]: time="2025-12-12T18:20:18.839193054Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Dec 12 18:20:18.840446 containerd[2529]: time="2025-12-12T18:20:18.839204212Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Dec 12 18:20:18.840446 containerd[2529]: time="2025-12-12T18:20:18.839226840Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Dec 12 18:20:18.840446 containerd[2529]: time="2025-12-12T18:20:18.839252429Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Dec 12 18:20:18.840446 containerd[2529]: time="2025-12-12T18:20:18.839265792Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Dec 12 18:20:18.841008 
containerd[2529]: time="2025-12-12T18:20:18.839398585Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Dec 12 18:20:18.841008 containerd[2529]: time="2025-12-12T18:20:18.839416992Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Dec 12 18:20:18.841008 containerd[2529]: time="2025-12-12T18:20:18.839431093Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Dec 12 18:20:18.841008 containerd[2529]: time="2025-12-12T18:20:18.839447599Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Dec 12 18:20:18.841008 containerd[2529]: time="2025-12-12T18:20:18.839459222Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Dec 12 18:20:18.841008 containerd[2529]: time="2025-12-12T18:20:18.839480361Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Dec 12 18:20:18.841008 containerd[2529]: time="2025-12-12T18:20:18.839491874Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Dec 12 18:20:18.841008 containerd[2529]: time="2025-12-12T18:20:18.839504322Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Dec 12 18:20:18.841008 containerd[2529]: time="2025-12-12T18:20:18.839659313Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Dec 12 18:20:18.841008 containerd[2529]: time="2025-12-12T18:20:18.839671774Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Dec 12 18:20:18.841008 containerd[2529]: time="2025-12-12T18:20:18.839683128Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Dec 12 18:20:18.841008 containerd[2529]: time="2025-12-12T18:20:18.839707050Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Dec 12 18:20:18.841008 containerd[2529]: time="2025-12-12T18:20:18.839790743Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Dec 12 18:20:18.841008 containerd[2529]: time="2025-12-12T18:20:18.839815469Z" level=info msg="Start snapshots syncer" Dec 12 18:20:18.841008 containerd[2529]: time="2025-12-12T18:20:18.839839410Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Dec 12 18:20:18.841298 containerd[2529]: time="2025-12-12T18:20:18.840265211Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Dec 12 18:20:18.841298 containerd[2529]: time="2025-12-12T18:20:18.840329403Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Dec 12 18:20:18.841427 containerd[2529]: time="2025-12-12T18:20:18.840583545Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Dec 12 18:20:18.842077 containerd[2529]: time="2025-12-12T18:20:18.840783159Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Dec 12 18:20:18.842077 containerd[2529]: time="2025-12-12T18:20:18.841494156Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Dec 12 18:20:18.842077 containerd[2529]: time="2025-12-12T18:20:18.841507011Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Dec 12 18:20:18.842077 containerd[2529]: time="2025-12-12T18:20:18.841547466Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Dec 12 18:20:18.842077 containerd[2529]: time="2025-12-12T18:20:18.841560750Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Dec 12 18:20:18.842077 containerd[2529]: time="2025-12-12T18:20:18.841573319Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Dec 12 18:20:18.842077 containerd[2529]: time="2025-12-12T18:20:18.841585569Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Dec 12 18:20:18.842077 containerd[2529]: time="2025-12-12T18:20:18.841597315Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Dec 12 
18:20:18.842077 containerd[2529]: time="2025-12-12T18:20:18.841621213Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Dec 12 18:20:18.842077 containerd[2529]: time="2025-12-12T18:20:18.841661329Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 12 18:20:18.842077 containerd[2529]: time="2025-12-12T18:20:18.841674951Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 12 18:20:18.842077 containerd[2529]: time="2025-12-12T18:20:18.841739262Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 12 18:20:18.842077 containerd[2529]: time="2025-12-12T18:20:18.841751360Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 12 18:20:18.842077 containerd[2529]: time="2025-12-12T18:20:18.841769889Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Dec 12 18:20:18.842381 containerd[2529]: time="2025-12-12T18:20:18.841781488Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Dec 12 18:20:18.842381 containerd[2529]: time="2025-12-12T18:20:18.841793249Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Dec 12 18:20:18.842381 containerd[2529]: time="2025-12-12T18:20:18.841805652Z" level=info msg="runtime interface created" Dec 12 18:20:18.842381 containerd[2529]: time="2025-12-12T18:20:18.841811807Z" level=info msg="created NRI interface" Dec 12 18:20:18.842381 containerd[2529]: time="2025-12-12T18:20:18.841820397Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Dec 12 18:20:18.842381 containerd[2529]: time="2025-12-12T18:20:18.841842626Z" level=info msg="Connect containerd service" Dec 12 18:20:18.842381 containerd[2529]: time="2025-12-12T18:20:18.841867023Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 12 18:20:18.843303 containerd[2529]: time="2025-12-12T18:20:18.843281248Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 12 18:20:18.843308 systemd[1]: Started getty@tty1.service - Getty on tty1. Dec 12 18:20:18.847927 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Dec 12 18:20:18.852450 systemd[1]: Reached target getty.target - Login Prompts. Dec 12 18:20:19.248709 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
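The containerd error above about /etc/cni/net.d is expected at this stage: no CNI configuration exists yet, and a cluster network add-on would normally install one later. Purely to illustrate what the CRI plugin is looking for in that directory, a minimal bridge-style conflist could look like the sketch below; the file name, network name and subnet are invented for the example:

cat <<'EOF' > /etc/cni/net.d/10-example.conflist
{
  "cniVersion": "1.0.0",
  "name": "example-net",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": { "type": "host-local", "subnet": "10.88.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
EOF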
Dec 12 18:20:19.260760 (kubelet)[2659]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 12 18:20:19.351162 containerd[2529]: time="2025-12-12T18:20:19.350834415Z" level=info msg="Start subscribing containerd event" Dec 12 18:20:19.351162 containerd[2529]: time="2025-12-12T18:20:19.350894007Z" level=info msg="Start recovering state" Dec 12 18:20:19.351162 containerd[2529]: time="2025-12-12T18:20:19.350996039Z" level=info msg="Start event monitor" Dec 12 18:20:19.351162 containerd[2529]: time="2025-12-12T18:20:19.351007466Z" level=info msg="Start cni network conf syncer for default" Dec 12 18:20:19.351162 containerd[2529]: time="2025-12-12T18:20:19.351014803Z" level=info msg="Start streaming server" Dec 12 18:20:19.351162 containerd[2529]: time="2025-12-12T18:20:19.351023583Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Dec 12 18:20:19.351162 containerd[2529]: time="2025-12-12T18:20:19.351031064Z" level=info msg="runtime interface starting up..." Dec 12 18:20:19.351162 containerd[2529]: time="2025-12-12T18:20:19.351037811Z" level=info msg="starting plugins..." Dec 12 18:20:19.351162 containerd[2529]: time="2025-12-12T18:20:19.351049942Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Dec 12 18:20:19.354690 containerd[2529]: time="2025-12-12T18:20:19.351841414Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 12 18:20:19.354690 containerd[2529]: time="2025-12-12T18:20:19.351891503Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 12 18:20:19.352110 systemd[1]: Started containerd.service - containerd container runtime. Dec 12 18:20:19.355093 systemd[1]: Reached target multi-user.target - Multi-User System. Dec 12 18:20:19.359919 containerd[2529]: time="2025-12-12T18:20:19.358819672Z" level=info msg="containerd successfully booted in 0.572919s" Dec 12 18:20:19.359877 systemd[1]: Startup finished in 4.461s (kernel) + 11.362s (initrd) + 13.405s (userspace) = 29.229s. Dec 12 18:20:19.812928 kubelet[2659]: E1212 18:20:19.812883 2659 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 12 18:20:19.815513 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 12 18:20:19.815673 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 12 18:20:19.816091 systemd[1]: kubelet.service: Consumed 1.006s CPU time, 266.8M memory peak. Dec 12 18:20:19.928610 login[2647]: pam_lastlog(login:session): file /var/log/lastlog is locked/write, retrying Dec 12 18:20:19.929283 login[2644]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Dec 12 18:20:19.939086 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Dec 12 18:20:19.942721 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Dec 12 18:20:19.945670 systemd-logind[2512]: New session 2 of user core. Dec 12 18:20:19.961778 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Dec 12 18:20:19.965105 systemd[1]: Starting user@500.service - User Manager for UID 500... 
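The kubelet exit above is the normal state of a node that has not yet joined a cluster: /var/lib/kubelet/config.yaml is written by kubeadm, and systemd simply keeps restarting the unit until it exists (the 'Scheduled restart job' entries later in the log). A hedged sketch of the join step that creates the file; the endpoint, token and hash are placeholders, not values from this log:

# On an existing control-plane node: print a ready-made join command
kubeadm token create --print-join-command
# On this node: join the cluster; kubeadm writes /var/lib/kubelet/config.yaml
# and the kubelet environment files, after which kubelet.service starts cleanly
kubeadm join <control-plane-endpoint>:6443 \
  --token <token> \
  --discovery-token-ca-cert-hash sha256:<hash>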
Dec 12 18:20:19.990370 (systemd)[2676]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 12 18:20:19.992565 systemd-logind[2512]: New session c1 of user core. Dec 12 18:20:20.122378 systemd[2676]: Queued start job for default target default.target. Dec 12 18:20:20.130396 systemd[2676]: Created slice app.slice - User Application Slice. Dec 12 18:20:20.130434 systemd[2676]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories. Dec 12 18:20:20.130449 systemd[2676]: Reached target paths.target - Paths. Dec 12 18:20:20.130490 systemd[2676]: Reached target timers.target - Timers. Dec 12 18:20:20.131486 systemd[2676]: Starting dbus.socket - D-Bus User Message Bus Socket... Dec 12 18:20:20.133471 systemd[2676]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories... Dec 12 18:20:20.152236 systemd[2676]: Listening on dbus.socket - D-Bus User Message Bus Socket. Dec 12 18:20:20.153550 systemd[2676]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories. Dec 12 18:20:20.153644 systemd[2676]: Reached target sockets.target - Sockets. Dec 12 18:20:20.153738 systemd[2676]: Reached target basic.target - Basic System. Dec 12 18:20:20.153841 systemd[2676]: Reached target default.target - Main User Target. Dec 12 18:20:20.153868 systemd[2676]: Startup finished in 156ms. Dec 12 18:20:20.153877 systemd[1]: Started user@500.service - User Manager for UID 500. Dec 12 18:20:20.157677 systemd[1]: Started session-2.scope - Session 2 of User core. Dec 12 18:20:20.529797 waagent[2642]: 2025-12-12T18:20:20.529709Z INFO Daemon Daemon Azure Linux Agent Version: 2.12.0.4 Dec 12 18:20:20.531310 waagent[2642]: 2025-12-12T18:20:20.531142Z INFO Daemon Daemon OS: flatcar 4515.1.0 Dec 12 18:20:20.532433 waagent[2642]: 2025-12-12T18:20:20.532358Z INFO Daemon Daemon Python: 3.11.13 Dec 12 18:20:20.533698 waagent[2642]: 2025-12-12T18:20:20.533632Z INFO Daemon Daemon Run daemon Dec 12 18:20:20.534900 waagent[2642]: 2025-12-12T18:20:20.534849Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4515.1.0' Dec 12 18:20:20.537037 waagent[2642]: 2025-12-12T18:20:20.536963Z INFO Daemon Daemon Using waagent for provisioning Dec 12 18:20:20.538356 waagent[2642]: 2025-12-12T18:20:20.538328Z INFO Daemon Daemon Activate resource disk Dec 12 18:20:20.539549 waagent[2642]: 2025-12-12T18:20:20.539468Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Dec 12 18:20:20.542666 waagent[2642]: 2025-12-12T18:20:20.542620Z INFO Daemon Daemon Found device: None Dec 12 18:20:20.543837 waagent[2642]: 2025-12-12T18:20:20.543762Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Dec 12 18:20:20.545861 waagent[2642]: 2025-12-12T18:20:20.545831Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Dec 12 18:20:20.548750 waagent[2642]: 2025-12-12T18:20:20.548702Z INFO Daemon Daemon Clean protocol and wireserver endpoint Dec 12 18:20:20.550245 waagent[2642]: 2025-12-12T18:20:20.550170Z INFO Daemon Daemon Running default provisioning handler Dec 12 18:20:20.557545 waagent[2642]: 2025-12-12T18:20:20.557333Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. 
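waagent decides whether cloud-init owns provisioning by probing the unit directly, which produces the 'exit status 4' message above. Re-running the probe it logs is harmless and shows the raw result; a sketch:

# The exact check waagent performs; a non-zero status here is what leads to
# "cloud-init is enabled: False" in the following entries
systemctl is-enabled cloud-init-local.service
echo "exit status: $?"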
Dec 12 18:20:20.558071 waagent[2642]: 2025-12-12T18:20:20.558038Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Dec 12 18:20:20.558419 waagent[2642]: 2025-12-12T18:20:20.558397Z INFO Daemon Daemon cloud-init is enabled: False Dec 12 18:20:20.558732 waagent[2642]: 2025-12-12T18:20:20.558713Z INFO Daemon Daemon Copying ovf-env.xml Dec 12 18:20:20.636263 waagent[2642]: 2025-12-12T18:20:20.635707Z INFO Daemon Daemon Successfully mounted dvd Dec 12 18:20:20.661490 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Dec 12 18:20:20.663635 waagent[2642]: 2025-12-12T18:20:20.663584Z INFO Daemon Daemon Detect protocol endpoint Dec 12 18:20:20.665165 waagent[2642]: 2025-12-12T18:20:20.663762Z INFO Daemon Daemon Clean protocol and wireserver endpoint Dec 12 18:20:20.665165 waagent[2642]: 2025-12-12T18:20:20.663982Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler Dec 12 18:20:20.665165 waagent[2642]: 2025-12-12T18:20:20.664258Z INFO Daemon Daemon Test for route to 168.63.129.16 Dec 12 18:20:20.665165 waagent[2642]: 2025-12-12T18:20:20.664413Z INFO Daemon Daemon Route to 168.63.129.16 exists Dec 12 18:20:20.665165 waagent[2642]: 2025-12-12T18:20:20.664548Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Dec 12 18:20:20.714352 waagent[2642]: 2025-12-12T18:20:20.714312Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Dec 12 18:20:20.715074 waagent[2642]: 2025-12-12T18:20:20.714666Z INFO Daemon Daemon Wire protocol version:2012-11-30 Dec 12 18:20:20.715074 waagent[2642]: 2025-12-12T18:20:20.714770Z INFO Daemon Daemon Server preferred version:2015-04-05 Dec 12 18:20:20.862713 waagent[2642]: 2025-12-12T18:20:20.862564Z INFO Daemon Daemon Initializing goal state during protocol detection Dec 12 18:20:20.864331 waagent[2642]: 2025-12-12T18:20:20.862978Z INFO Daemon Daemon Forcing an update of the goal state. Dec 12 18:20:20.868865 waagent[2642]: 2025-12-12T18:20:20.868826Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Dec 12 18:20:20.889592 waagent[2642]: 2025-12-12T18:20:20.889549Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.177 Dec 12 18:20:20.891265 waagent[2642]: 2025-12-12T18:20:20.891223Z INFO Daemon Dec 12 18:20:20.892141 waagent[2642]: 2025-12-12T18:20:20.892069Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 2e186253-cb64-4848-add4-7c31e45c4734 eTag: 9291350326754019322 source: Fabric] Dec 12 18:20:20.894803 waagent[2642]: 2025-12-12T18:20:20.894770Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Dec 12 18:20:20.896692 waagent[2642]: 2025-12-12T18:20:20.896660Z INFO Daemon Dec 12 18:20:20.897168 waagent[2642]: 2025-12-12T18:20:20.897139Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Dec 12 18:20:20.906064 waagent[2642]: 2025-12-12T18:20:20.906034Z INFO Daemon Daemon Downloading artifacts profile blob Dec 12 18:20:20.930403 login[2647]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Dec 12 18:20:20.934942 systemd-logind[2512]: New session 1 of user core. Dec 12 18:20:20.943689 systemd[1]: Started session-1.scope - Session 1 of User core. 
Dec 12 18:20:20.990930 waagent[2642]: 2025-12-12T18:20:20.990880Z INFO Daemon Downloaded certificate {'thumbprint': 'CD1A32612E290C272D92DD135293310684C8BDA9', 'hasPrivateKey': True} Dec 12 18:20:20.994907 waagent[2642]: 2025-12-12T18:20:20.994858Z INFO Daemon Fetch goal state completed Dec 12 18:20:21.002666 waagent[2642]: 2025-12-12T18:20:21.002599Z INFO Daemon Daemon Starting provisioning Dec 12 18:20:21.004200 waagent[2642]: 2025-12-12T18:20:21.002752Z INFO Daemon Daemon Handle ovf-env.xml. Dec 12 18:20:21.004200 waagent[2642]: 2025-12-12T18:20:21.003215Z INFO Daemon Daemon Set hostname [ci-4515.1.0-a-53d1559fda] Dec 12 18:20:21.005829 waagent[2642]: 2025-12-12T18:20:21.005793Z INFO Daemon Daemon Publish hostname [ci-4515.1.0-a-53d1559fda] Dec 12 18:20:21.006441 waagent[2642]: 2025-12-12T18:20:21.006412Z INFO Daemon Daemon Examine /proc/net/route for primary interface Dec 12 18:20:21.006805 waagent[2642]: 2025-12-12T18:20:21.006782Z INFO Daemon Daemon Primary interface is [eth0] Dec 12 18:20:21.014371 systemd-networkd[2161]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 12 18:20:21.014379 systemd-networkd[2161]: eth0: Reconfiguring with /usr/lib/systemd/network/zz-default.network. Dec 12 18:20:21.014440 systemd-networkd[2161]: eth0: DHCP lease lost Dec 12 18:20:21.027779 waagent[2642]: 2025-12-12T18:20:21.027736Z INFO Daemon Daemon Create user account if not exists Dec 12 18:20:21.030197 waagent[2642]: 2025-12-12T18:20:21.028833Z INFO Daemon Daemon User core already exists, skip useradd Dec 12 18:20:21.030197 waagent[2642]: 2025-12-12T18:20:21.029105Z INFO Daemon Daemon Configure sudoer Dec 12 18:20:21.035185 waagent[2642]: 2025-12-12T18:20:21.035144Z INFO Daemon Daemon Configure sshd Dec 12 18:20:21.036585 systemd-networkd[2161]: eth0: DHCPv4 address 10.200.8.12/24, gateway 10.200.8.1 acquired from 168.63.129.16 Dec 12 18:20:21.039401 waagent[2642]: 2025-12-12T18:20:21.039360Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Dec 12 18:20:21.042795 waagent[2642]: 2025-12-12T18:20:21.039513Z INFO Daemon Daemon Deploy ssh public key. Dec 12 18:20:22.163253 waagent[2642]: 2025-12-12T18:20:22.163206Z INFO Daemon Daemon Provisioning complete Dec 12 18:20:22.176274 waagent[2642]: 2025-12-12T18:20:22.176240Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Dec 12 18:20:22.177546 waagent[2642]: 2025-12-12T18:20:22.176448Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. 
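With provisioning finished, the daemon hands control to the goal-state agent (the waagent[2728] process that appears below). Two quick checks that help when correlating journal entries like these with the agent's own log, shown only as a sketch:

# Installed daemon version and the goal-state agent it runs
waagent --version
# The agent also writes a detailed log outside the journal
tail -n 50 /var/log/waagent.log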
Dec 12 18:20:22.177546 waagent[2642]: 2025-12-12T18:20:22.176797Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.12.0.4 is the most current agent Dec 12 18:20:22.282780 waagent[2728]: 2025-12-12T18:20:22.282700Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.12.0.4) Dec 12 18:20:22.283086 waagent[2728]: 2025-12-12T18:20:22.282813Z INFO ExtHandler ExtHandler OS: flatcar 4515.1.0 Dec 12 18:20:22.283086 waagent[2728]: 2025-12-12T18:20:22.282854Z INFO ExtHandler ExtHandler Python: 3.11.13 Dec 12 18:20:22.283086 waagent[2728]: 2025-12-12T18:20:22.282894Z INFO ExtHandler ExtHandler CPU Arch: x86_64 Dec 12 18:20:22.318677 waagent[2728]: 2025-12-12T18:20:22.318615Z INFO ExtHandler ExtHandler Distro: flatcar-4515.1.0; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.13; Arch: x86_64; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.22.0; Dec 12 18:20:22.318814 waagent[2728]: 2025-12-12T18:20:22.318788Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Dec 12 18:20:22.318869 waagent[2728]: 2025-12-12T18:20:22.318845Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Dec 12 18:20:22.325018 waagent[2728]: 2025-12-12T18:20:22.324960Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Dec 12 18:20:22.332490 waagent[2728]: 2025-12-12T18:20:22.332455Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.177 Dec 12 18:20:22.332875 waagent[2728]: 2025-12-12T18:20:22.332842Z INFO ExtHandler Dec 12 18:20:22.332920 waagent[2728]: 2025-12-12T18:20:22.332901Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 4d84a6d0-949e-4f82-9fa6-ceb5e6c7467d eTag: 9291350326754019322 source: Fabric] Dec 12 18:20:22.333137 waagent[2728]: 2025-12-12T18:20:22.333109Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Dec 12 18:20:22.333500 waagent[2728]: 2025-12-12T18:20:22.333468Z INFO ExtHandler Dec 12 18:20:22.333555 waagent[2728]: 2025-12-12T18:20:22.333514Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Dec 12 18:20:22.337437 waagent[2728]: 2025-12-12T18:20:22.337405Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Dec 12 18:20:22.398436 waagent[2728]: 2025-12-12T18:20:22.398378Z INFO ExtHandler Downloaded certificate {'thumbprint': 'CD1A32612E290C272D92DD135293310684C8BDA9', 'hasPrivateKey': True} Dec 12 18:20:22.398824 waagent[2728]: 2025-12-12T18:20:22.398792Z INFO ExtHandler Fetch goal state completed Dec 12 18:20:22.417485 waagent[2728]: 2025-12-12T18:20:22.417396Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.4.3 30 Sep 2025 (Library: OpenSSL 3.4.3 30 Sep 2025) Dec 12 18:20:22.421803 waagent[2728]: 2025-12-12T18:20:22.421757Z INFO ExtHandler ExtHandler WALinuxAgent-2.12.0.4 running as process 2728 Dec 12 18:20:22.421913 waagent[2728]: 2025-12-12T18:20:22.421888Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Dec 12 18:20:22.422158 waagent[2728]: 2025-12-12T18:20:22.422133Z INFO ExtHandler ExtHandler ******** AutoUpdate.UpdateToLatestVersion is set to False, not processing the operation ******** Dec 12 18:20:22.423220 waagent[2728]: 2025-12-12T18:20:22.423189Z INFO ExtHandler ExtHandler [CGI] Cgroup monitoring is not supported on ['flatcar', '4515.1.0', '', 'Flatcar Container Linux by Kinvolk'] Dec 12 18:20:22.423510 waagent[2728]: 2025-12-12T18:20:22.423485Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case distro: ['flatcar', '4515.1.0', '', 'Flatcar Container Linux by Kinvolk'] went from supported to unsupported Dec 12 18:20:22.423655 waagent[2728]: 2025-12-12T18:20:22.423628Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False Dec 12 18:20:22.424071 waagent[2728]: 2025-12-12T18:20:22.424042Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Dec 12 18:20:22.442063 waagent[2728]: 2025-12-12T18:20:22.442035Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Dec 12 18:20:22.442208 waagent[2728]: 2025-12-12T18:20:22.442187Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Dec 12 18:20:22.447973 waagent[2728]: 2025-12-12T18:20:22.447595Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Dec 12 18:20:22.453187 systemd[1]: Reload requested from client PID 2743 ('systemctl') (unit waagent.service)... Dec 12 18:20:22.453201 systemd[1]: Reloading... Dec 12 18:20:22.548540 zram_generator::config[2788]: No configuration found. Dec 12 18:20:22.731577 systemd[1]: Reloading finished in 278 ms. Dec 12 18:20:22.747548 waagent[2728]: 2025-12-12T18:20:22.746778Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Dec 12 18:20:22.747548 waagent[2728]: 2025-12-12T18:20:22.746928Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Dec 12 18:20:23.476094 waagent[2728]: 2025-12-12T18:20:23.476022Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Dec 12 18:20:23.476405 waagent[2728]: 2025-12-12T18:20:23.476372Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. 
All three conditions must be met: 1. configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True] Dec 12 18:20:23.477177 waagent[2728]: 2025-12-12T18:20:23.477054Z INFO ExtHandler ExtHandler Starting env monitor service. Dec 12 18:20:23.477470 waagent[2728]: 2025-12-12T18:20:23.477433Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Dec 12 18:20:23.477668 waagent[2728]: 2025-12-12T18:20:23.477629Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Dec 12 18:20:23.477829 waagent[2728]: 2025-12-12T18:20:23.477808Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Dec 12 18:20:23.477859 waagent[2728]: 2025-12-12T18:20:23.477841Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Dec 12 18:20:23.477907 waagent[2728]: 2025-12-12T18:20:23.477889Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Dec 12 18:20:23.477986 waagent[2728]: 2025-12-12T18:20:23.477963Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Dec 12 18:20:23.478149 waagent[2728]: 2025-12-12T18:20:23.478129Z INFO EnvHandler ExtHandler Configure routes Dec 12 18:20:23.478201 waagent[2728]: 2025-12-12T18:20:23.478179Z INFO EnvHandler ExtHandler Gateway:None Dec 12 18:20:23.478249 waagent[2728]: 2025-12-12T18:20:23.478202Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Dec 12 18:20:23.478491 waagent[2728]: 2025-12-12T18:20:23.478469Z INFO EnvHandler ExtHandler Routes:None Dec 12 18:20:23.478752 waagent[2728]: 2025-12-12T18:20:23.478706Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Dec 12 18:20:23.478933 waagent[2728]: 2025-12-12T18:20:23.478906Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Dec 12 18:20:23.479183 waagent[2728]: 2025-12-12T18:20:23.479148Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Dec 12 18:20:23.479275 waagent[2728]: 2025-12-12T18:20:23.479194Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Dec 12 18:20:23.482359 waagent[2728]: 2025-12-12T18:20:23.482329Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Dec 12 18:20:23.482359 waagent[2728]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Dec 12 18:20:23.482359 waagent[2728]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0 Dec 12 18:20:23.482359 waagent[2728]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Dec 12 18:20:23.482359 waagent[2728]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Dec 12 18:20:23.482359 waagent[2728]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Dec 12 18:20:23.482359 waagent[2728]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Dec 12 18:20:23.493960 waagent[2728]: 2025-12-12T18:20:23.492880Z INFO ExtHandler ExtHandler Dec 12 18:20:23.493960 waagent[2728]: 2025-12-12T18:20:23.492927Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 35caa9d7-88e3-4a92-986b-c237c02babd1 correlation eaca89bf-952d-49c1-940e-7f2cef8786ef created: 2025-12-12T18:19:29.520782Z] Dec 12 18:20:23.493960 waagent[2728]: 2025-12-12T18:20:23.493140Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. 
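The routing table above is the raw /proc/net/route dump, with little-endian hex addresses (0108C80A is 10.200.8.1, the DHCP gateway acquired earlier). The human-readable equivalent, as a sketch:

# Same information as the agent's /proc/net/route dump, decoded
ip route show dev eth0
# Spot-check one hex address by reversing the byte order:
# 0108C80A -> 0A.C8.08.01 -> 10.200.8.1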
Dec 12 18:20:23.493960 waagent[2728]: 2025-12-12T18:20:23.493482Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 0 ms] Dec 12 18:20:23.535022 waagent[2728]: 2025-12-12T18:20:23.534983Z WARNING ExtHandler ExtHandler Failed to get firewall packets: 'iptables -w -t security -L OUTPUT --zero OUTPUT -nxv' failed: 2 (iptables v1.8.11 (nf_tables): Illegal option `--numeric' with this command Dec 12 18:20:23.535022 waagent[2728]: Try `iptables -h' or 'iptables --help' for more information.) Dec 12 18:20:23.535455 waagent[2728]: 2025-12-12T18:20:23.535420Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.12.0.4 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 03C0E643-B571-4D0E-9C7D-9F06237D9799;DroppedPackets: -1;UpdateGSErrors: 0;AutoUpdate: 0;UpdateMode: SelfUpdate;] Dec 12 18:20:23.564210 waagent[2728]: 2025-12-12T18:20:23.563855Z INFO MonitorHandler ExtHandler Network interfaces: Dec 12 18:20:23.564210 waagent[2728]: Executing ['ip', '-a', '-o', 'link']: Dec 12 18:20:23.564210 waagent[2728]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Dec 12 18:20:23.564210 waagent[2728]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:b7:ab:4d brd ff:ff:ff:ff:ff:ff\ alias Network Device\ altname enx000d3ab7ab4d Dec 12 18:20:23.564210 waagent[2728]: 3: enP30832s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:b7:ab:4d brd ff:ff:ff:ff:ff:ff\ altname enP30832p0s0 Dec 12 18:20:23.564210 waagent[2728]: Executing ['ip', '-4', '-a', '-o', 'address']: Dec 12 18:20:23.564210 waagent[2728]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Dec 12 18:20:23.564210 waagent[2728]: 2: eth0 inet 10.200.8.12/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever Dec 12 18:20:23.564210 waagent[2728]: Executing ['ip', '-6', '-a', '-o', 'address']: Dec 12 18:20:23.564210 waagent[2728]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Dec 12 18:20:23.564210 waagent[2728]: 2: eth0 inet6 fe80::20d:3aff:feb7:ab4d/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Dec 12 18:20:23.596104 waagent[2728]: 2025-12-12T18:20:23.596056Z INFO EnvHandler ExtHandler Created firewall rules for the Azure Fabric: Dec 12 18:20:23.596104 waagent[2728]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Dec 12 18:20:23.596104 waagent[2728]: pkts bytes target prot opt in out source destination Dec 12 18:20:23.596104 waagent[2728]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Dec 12 18:20:23.596104 waagent[2728]: pkts bytes target prot opt in out source destination Dec 12 18:20:23.596104 waagent[2728]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Dec 12 18:20:23.596104 waagent[2728]: pkts bytes target prot opt in out source destination Dec 12 18:20:23.596104 waagent[2728]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Dec 12 18:20:23.596104 waagent[2728]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Dec 12 18:20:23.596104 waagent[2728]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Dec 12 18:20:23.598843 waagent[2728]: 2025-12-12T18:20:23.598796Z INFO EnvHandler ExtHandler Current Firewall rules: Dec 12 18:20:23.598843 waagent[2728]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Dec 12 18:20:23.598843 waagent[2728]: pkts bytes 
target prot opt in out source destination Dec 12 18:20:23.598843 waagent[2728]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Dec 12 18:20:23.598843 waagent[2728]: pkts bytes target prot opt in out source destination Dec 12 18:20:23.598843 waagent[2728]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Dec 12 18:20:23.598843 waagent[2728]: pkts bytes target prot opt in out source destination Dec 12 18:20:23.598843 waagent[2728]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Dec 12 18:20:23.598843 waagent[2728]: 2 304 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Dec 12 18:20:23.598843 waagent[2728]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Dec 12 18:20:29.925788 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 12 18:20:29.927558 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 18:20:30.415742 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 18:20:30.424725 (kubelet)[2883]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 12 18:20:30.462977 kubelet[2883]: E1212 18:20:30.462945 2883 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 12 18:20:30.466378 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 12 18:20:30.466484 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 12 18:20:30.467024 systemd[1]: kubelet.service: Consumed 138ms CPU time, 109M memory peak. Dec 12 18:20:40.675910 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Dec 12 18:20:40.677365 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 18:20:41.208568 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 18:20:41.217724 (kubelet)[2898]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 12 18:20:41.249979 kubelet[2898]: E1212 18:20:41.249928 2898 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 12 18:20:41.251936 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 12 18:20:41.252079 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 12 18:20:41.252430 systemd[1]: kubelet.service: Consumed 126ms CPU time, 110.2M memory peak. Dec 12 18:20:41.415638 chronyd[2490]: Selected source PHC0 Dec 12 18:20:45.617156 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Dec 12 18:20:45.618440 systemd[1]: Started sshd@0-10.200.8.12:22-10.200.16.10:33450.service - OpenSSH per-connection server daemon (10.200.16.10:33450). 
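The 'Failed to get firewall packets' warning above comes from packing a list (-L), a counter reset (--zero) and -n into a single iptables invocation, which this nf_tables build rejects. The counters the agent wants can be read and reset with two separate calls; a sketch against the security-table rules it just created:

# List the agent's OUTPUT rules with exact packet/byte counters
iptables -w -t security -L OUTPUT -nxv
# Reset those counters as a separate operation
iptables -w -t security -Z OUTPUT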
Dec 12 18:20:46.324796 sshd[2906]: Accepted publickey for core from 10.200.16.10 port 33450 ssh2: RSA SHA256:MDSGev8JoWWrhohyx7j99tYqdskhx1insLzl+tdEp00 Dec 12 18:20:46.325942 sshd-session[2906]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:20:46.330503 systemd-logind[2512]: New session 3 of user core. Dec 12 18:20:46.339693 systemd[1]: Started session-3.scope - Session 3 of User core. Dec 12 18:20:46.748416 systemd[1]: Started sshd@1-10.200.8.12:22-10.200.16.10:33460.service - OpenSSH per-connection server daemon (10.200.16.10:33460). Dec 12 18:20:47.291802 sshd[2912]: Accepted publickey for core from 10.200.16.10 port 33460 ssh2: RSA SHA256:MDSGev8JoWWrhohyx7j99tYqdskhx1insLzl+tdEp00 Dec 12 18:20:47.292929 sshd-session[2912]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:20:47.297491 systemd-logind[2512]: New session 4 of user core. Dec 12 18:20:47.303702 systemd[1]: Started session-4.scope - Session 4 of User core. Dec 12 18:20:47.594624 sshd[2915]: Connection closed by 10.200.16.10 port 33460 Dec 12 18:20:47.595148 sshd-session[2912]: pam_unix(sshd:session): session closed for user core Dec 12 18:20:47.598176 systemd[1]: sshd@1-10.200.8.12:22-10.200.16.10:33460.service: Deactivated successfully. Dec 12 18:20:47.599722 systemd[1]: session-4.scope: Deactivated successfully. Dec 12 18:20:47.600957 systemd-logind[2512]: Session 4 logged out. Waiting for processes to exit. Dec 12 18:20:47.602111 systemd-logind[2512]: Removed session 4. Dec 12 18:20:47.709217 systemd[1]: Started sshd@2-10.200.8.12:22-10.200.16.10:33470.service - OpenSSH per-connection server daemon (10.200.16.10:33470). Dec 12 18:20:48.242376 sshd[2921]: Accepted publickey for core from 10.200.16.10 port 33470 ssh2: RSA SHA256:MDSGev8JoWWrhohyx7j99tYqdskhx1insLzl+tdEp00 Dec 12 18:20:48.243484 sshd-session[2921]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:20:48.247870 systemd-logind[2512]: New session 5 of user core. Dec 12 18:20:48.253695 systemd[1]: Started session-5.scope - Session 5 of User core. Dec 12 18:20:48.541251 sshd[2924]: Connection closed by 10.200.16.10 port 33470 Dec 12 18:20:48.542013 sshd-session[2921]: pam_unix(sshd:session): session closed for user core Dec 12 18:20:48.544792 systemd[1]: sshd@2-10.200.8.12:22-10.200.16.10:33470.service: Deactivated successfully. Dec 12 18:20:48.546399 systemd[1]: session-5.scope: Deactivated successfully. Dec 12 18:20:48.547683 systemd-logind[2512]: Session 5 logged out. Waiting for processes to exit. Dec 12 18:20:48.548981 systemd-logind[2512]: Removed session 5. Dec 12 18:20:48.660254 systemd[1]: Started sshd@3-10.200.8.12:22-10.200.16.10:33476.service - OpenSSH per-connection server daemon (10.200.16.10:33476). Dec 12 18:20:49.192382 sshd[2930]: Accepted publickey for core from 10.200.16.10 port 33476 ssh2: RSA SHA256:MDSGev8JoWWrhohyx7j99tYqdskhx1insLzl+tdEp00 Dec 12 18:20:49.193491 sshd-session[2930]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:20:49.198225 systemd-logind[2512]: New session 6 of user core. Dec 12 18:20:49.205691 systemd[1]: Started session-6.scope - Session 6 of User core. Dec 12 18:20:49.494399 sshd[2933]: Connection closed by 10.200.16.10 port 33476 Dec 12 18:20:49.495156 sshd-session[2930]: pam_unix(sshd:session): session closed for user core Dec 12 18:20:49.498318 systemd[1]: sshd@3-10.200.8.12:22-10.200.16.10:33476.service: Deactivated successfully. 
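Each accepted login above reports the client key by its SHA256 fingerprint. To confirm which entry in the authorized_keys file written earlier ('Updated "/home/core/.ssh/authorized_keys"') it corresponds to, the fingerprints can be listed locally; illustrative only:

# Print the fingerprint of every key in the file; compare with the
# "Accepted publickey ... SHA256:..." values logged by sshd
ssh-keygen -lf /home/core/.ssh/authorized_keys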
Dec 12 18:20:49.499869 systemd[1]: session-6.scope: Deactivated successfully. Dec 12 18:20:49.500631 systemd-logind[2512]: Session 6 logged out. Waiting for processes to exit. Dec 12 18:20:49.501729 systemd-logind[2512]: Removed session 6. Dec 12 18:20:49.609197 systemd[1]: Started sshd@4-10.200.8.12:22-10.200.16.10:33486.service - OpenSSH per-connection server daemon (10.200.16.10:33486). Dec 12 18:20:50.152504 sshd[2939]: Accepted publickey for core from 10.200.16.10 port 33486 ssh2: RSA SHA256:MDSGev8JoWWrhohyx7j99tYqdskhx1insLzl+tdEp00 Dec 12 18:20:50.153655 sshd-session[2939]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:20:50.158144 systemd-logind[2512]: New session 7 of user core. Dec 12 18:20:50.164685 systemd[1]: Started session-7.scope - Session 7 of User core. Dec 12 18:20:50.488676 sudo[2943]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Dec 12 18:20:50.488904 sudo[2943]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 12 18:20:50.515355 sudo[2943]: pam_unix(sudo:session): session closed for user root Dec 12 18:20:50.615503 sshd[2942]: Connection closed by 10.200.16.10 port 33486 Dec 12 18:20:50.616161 sshd-session[2939]: pam_unix(sshd:session): session closed for user core Dec 12 18:20:50.619419 systemd[1]: sshd@4-10.200.8.12:22-10.200.16.10:33486.service: Deactivated successfully. Dec 12 18:20:50.621141 systemd[1]: session-7.scope: Deactivated successfully. Dec 12 18:20:50.623096 systemd-logind[2512]: Session 7 logged out. Waiting for processes to exit. Dec 12 18:20:50.624072 systemd-logind[2512]: Removed session 7. Dec 12 18:20:50.730480 systemd[1]: Started sshd@5-10.200.8.12:22-10.200.16.10:44478.service - OpenSSH per-connection server daemon (10.200.16.10:44478). Dec 12 18:20:51.267128 sshd[2949]: Accepted publickey for core from 10.200.16.10 port 44478 ssh2: RSA SHA256:MDSGev8JoWWrhohyx7j99tYqdskhx1insLzl+tdEp00 Dec 12 18:20:51.268297 sshd-session[2949]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:20:51.269334 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Dec 12 18:20:51.270826 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 18:20:51.274591 systemd-logind[2512]: New session 8 of user core. Dec 12 18:20:51.285026 systemd[1]: Started session-8.scope - Session 8 of User core. Dec 12 18:20:51.471906 sudo[2957]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Dec 12 18:20:51.472134 sudo[2957]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 12 18:20:51.680511 sudo[2957]: pam_unix(sudo:session): session closed for user root Dec 12 18:20:51.688124 sudo[2956]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Dec 12 18:20:51.688388 sudo[2956]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 12 18:20:51.701833 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 12 18:20:51.726703 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
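The sudo entries in this stretch switch SELinux to enforcing (setenforce 1) before the audit rules are rebuilt. If the SELinux userspace tools are present, the resulting mode can be confirmed directly; a sketch:

# Current enforcement mode (Enforcing after the 'setenforce 1' above)
getenforce
# Fuller status, including the loaded policy, where available
sestatus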
Dec 12 18:20:51.736026 (kubelet)[2977]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 12 18:20:51.736000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Dec 12 18:20:51.738028 kernel: kauditd_printk_skb: 160 callbacks suppressed Dec 12 18:20:51.738081 kernel: audit: type=1305 audit(1765563651.736:259): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Dec 12 18:20:51.741496 augenrules[2985]: No rules Dec 12 18:20:51.742126 systemd[1]: audit-rules.service: Deactivated successfully. Dec 12 18:20:51.742381 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 12 18:20:51.751770 kernel: audit: type=1300 audit(1765563651.736:259): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fffa3a536e0 a2=420 a3=0 items=0 ppid=2960 pid=2985 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:51.736000 audit[2985]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fffa3a536e0 a2=420 a3=0 items=0 ppid=2960 pid=2985 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:51.743357 sudo[2956]: pam_unix(sudo:session): session closed for user root Dec 12 18:20:51.736000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 12 18:20:51.760595 kernel: audit: type=1327 audit(1765563651.736:259): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 12 18:20:51.760664 kernel: audit: type=1130 audit(1765563651.742:260): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:20:51.760682 kernel: audit: type=1131 audit(1765563651.742:261): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:20:51.742000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:20:51.742000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:20:51.770816 kernel: audit: type=1106 audit(1765563651.742:262): pid=2956 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 12 18:20:51.742000 audit[2956]: USER_END pid=2956 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Dec 12 18:20:51.742000 audit[2956]: CRED_DISP pid=2956 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 12 18:20:51.775728 kernel: audit: type=1104 audit(1765563651.742:263): pid=2956 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 12 18:20:51.777498 kubelet[2977]: E1212 18:20:51.777464 2977 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 12 18:20:51.778997 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 12 18:20:51.779119 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 12 18:20:51.779495 systemd[1]: kubelet.service: Consumed 135ms CPU time, 107.3M memory peak. Dec 12 18:20:51.778000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 12 18:20:51.783542 kernel: audit: type=1131 audit(1765563651.778:264): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 12 18:20:51.850659 sshd[2955]: Connection closed by 10.200.16.10 port 44478 Dec 12 18:20:51.851016 sshd-session[2949]: pam_unix(sshd:session): session closed for user core Dec 12 18:20:51.851000 audit[2949]: USER_END pid=2949 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 18:20:51.853693 systemd[1]: sshd@5-10.200.8.12:22-10.200.16.10:44478.service: Deactivated successfully. Dec 12 18:20:51.856171 systemd[1]: session-8.scope: Deactivated successfully. Dec 12 18:20:51.858997 systemd-logind[2512]: Session 8 logged out. Waiting for processes to exit. Dec 12 18:20:51.851000 audit[2949]: CRED_DISP pid=2949 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 18:20:51.860088 systemd-logind[2512]: Removed session 8. 
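The kubelet exit above ("failed to load Kubelet config file /var/lib/kubelet/config.yaml ... no such file or directory") is the usual failure mode on a node that has not been joined yet: that file is normally written later by kubeadm init or kubeadm join, which is an assumption about how this node is being provisioned rather than something the log states. A short sketch for confirming the state:

    # the file kubelet is complaining about
    ls -l /var/lib/kubelet/config.yaml
    # the unit and drop-ins that pass --config to kubelet
    systemctl cat kubelet.service
    # the most recent kubelet attempts
    journalctl -u kubelet -n 20 --no-pager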
Dec 12 18:20:51.863538 kernel: audit: type=1106 audit(1765563651.851:265): pid=2949 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 18:20:51.863579 kernel: audit: type=1104 audit(1765563651.851:266): pid=2949 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 18:20:51.852000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.200.8.12:22-10.200.16.10:44478 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:20:51.968281 systemd[1]: Started sshd@6-10.200.8.12:22-10.200.16.10:44490.service - OpenSSH per-connection server daemon (10.200.16.10:44490). Dec 12 18:20:51.968000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.200.8.12:22-10.200.16.10:44490 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:20:52.501000 audit[3001]: USER_ACCT pid=3001 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 18:20:52.502684 sshd[3001]: Accepted publickey for core from 10.200.16.10 port 44490 ssh2: RSA SHA256:MDSGev8JoWWrhohyx7j99tYqdskhx1insLzl+tdEp00 Dec 12 18:20:52.502000 audit[3001]: CRED_ACQ pid=3001 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 18:20:52.502000 audit[3001]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffde86500d0 a2=3 a3=0 items=0 ppid=1 pid=3001 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:52.502000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 18:20:52.503811 sshd-session[3001]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:20:52.508606 systemd-logind[2512]: New session 9 of user core. Dec 12 18:20:52.517698 systemd[1]: Started session-9.scope - Session 9 of User core. 
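Session 9 above is fully traceable through the audit records (USER_ACCT, CRED_ACQ, USER_START) that bracket the sshd-session PAM calls. A hedged sketch for pulling that trail back out of the audit log with ausearch, assuming auditd's userspace tools are installed:

    # interpreted view of recent PAM accounting / session events
    ausearch -i -m USER_ACCT,USER_START,USER_END -ts recent
    # or everything recorded for one audit session id (ses=9 in the records above)
    ausearch -i --session 9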
Dec 12 18:20:52.519000 audit[3001]: USER_START pid=3001 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 18:20:52.520000 audit[3004]: CRED_ACQ pid=3004 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 18:20:52.706000 audit[3005]: USER_ACCT pid=3005 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 12 18:20:52.707016 sudo[3005]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 12 18:20:52.706000 audit[3005]: CRED_REFR pid=3005 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 12 18:20:52.707251 sudo[3005]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 12 18:20:52.708000 audit[3005]: USER_START pid=3005 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 12 18:20:54.595985 systemd[1]: Starting docker.service - Docker Application Container Engine... Dec 12 18:20:54.612747 (dockerd)[3023]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Dec 12 18:20:55.859725 dockerd[3023]: time="2025-12-12T18:20:55.859180715Z" level=info msg="Starting up" Dec 12 18:20:55.860926 dockerd[3023]: time="2025-12-12T18:20:55.860892524Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Dec 12 18:20:55.871369 dockerd[3023]: time="2025-12-12T18:20:55.871330677Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Dec 12 18:20:56.046944 dockerd[3023]: time="2025-12-12T18:20:56.046893126Z" level=info msg="Loading containers: start." 
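dockerd is now starting up, creating its containerd client on /var/run/docker/libcontainerd/docker-containerd.sock and loading containers. A quick health check once the unit is up, assuming the docker CLI is installed alongside the daemon:

    systemctl is-active docker.service
    # storage driver and daemon version as reported by the running daemon
    docker info --format '{{.Driver}} {{.ServerVersion}}'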
Dec 12 18:20:56.088614 kernel: Initializing XFRM netlink socket Dec 12 18:20:56.126000 audit[3069]: NETFILTER_CFG table=nat:5 family=2 entries=2 op=nft_register_chain pid=3069 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:20:56.126000 audit[3069]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffef0655e80 a2=0 a3=0 items=0 ppid=3023 pid=3069 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:56.126000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Dec 12 18:20:56.128000 audit[3071]: NETFILTER_CFG table=filter:6 family=2 entries=2 op=nft_register_chain pid=3071 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:20:56.128000 audit[3071]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffe193ea5d0 a2=0 a3=0 items=0 ppid=3023 pid=3071 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:56.128000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Dec 12 18:20:56.130000 audit[3073]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_chain pid=3073 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:20:56.130000 audit[3073]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff83e2f630 a2=0 a3=0 items=0 ppid=3023 pid=3073 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:56.130000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Dec 12 18:20:56.132000 audit[3075]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=3075 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:20:56.132000 audit[3075]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe61df97a0 a2=0 a3=0 items=0 ppid=3023 pid=3075 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:56.132000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Dec 12 18:20:56.133000 audit[3077]: NETFILTER_CFG table=filter:9 family=2 entries=1 op=nft_register_chain pid=3077 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:20:56.133000 audit[3077]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffcaf0b5f90 a2=0 a3=0 items=0 ppid=3023 pid=3077 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:56.133000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Dec 12 18:20:56.135000 audit[3079]: NETFILTER_CFG table=filter:10 family=2 entries=1 op=nft_register_chain pid=3079 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:20:56.135000 audit[3079]: SYSCALL arch=c000003e 
syscall=46 success=yes exit=112 a0=3 a1=7fffd890b2c0 a2=0 a3=0 items=0 ppid=3023 pid=3079 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:56.135000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 12 18:20:56.137000 audit[3081]: NETFILTER_CFG table=filter:11 family=2 entries=1 op=nft_register_chain pid=3081 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:20:56.137000 audit[3081]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffed9cfa9d0 a2=0 a3=0 items=0 ppid=3023 pid=3081 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:56.137000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 12 18:20:56.140000 audit[3083]: NETFILTER_CFG table=nat:12 family=2 entries=2 op=nft_register_chain pid=3083 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:20:56.140000 audit[3083]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7ffe680da5e0 a2=0 a3=0 items=0 ppid=3023 pid=3083 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:56.140000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Dec 12 18:20:56.172000 audit[3086]: NETFILTER_CFG table=nat:13 family=2 entries=2 op=nft_register_chain pid=3086 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:20:56.172000 audit[3086]: SYSCALL arch=c000003e syscall=46 success=yes exit=472 a0=3 a1=7fff9ebe9dc0 a2=0 a3=0 items=0 ppid=3023 pid=3086 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:56.172000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Dec 12 18:20:56.174000 audit[3088]: NETFILTER_CFG table=filter:14 family=2 entries=2 op=nft_register_chain pid=3088 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:20:56.174000 audit[3088]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffd598b8fb0 a2=0 a3=0 items=0 ppid=3023 pid=3088 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:56.174000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Dec 12 18:20:56.176000 audit[3090]: NETFILTER_CFG table=filter:15 family=2 entries=1 op=nft_register_rule pid=3090 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:20:56.176000 audit[3090]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 
a1=7ffe98f8dcf0 a2=0 a3=0 items=0 ppid=3023 pid=3090 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:56.176000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Dec 12 18:20:56.178000 audit[3092]: NETFILTER_CFG table=filter:16 family=2 entries=1 op=nft_register_rule pid=3092 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:20:56.178000 audit[3092]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7fffe185ee20 a2=0 a3=0 items=0 ppid=3023 pid=3092 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:56.178000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 12 18:20:56.179000 audit[3094]: NETFILTER_CFG table=filter:17 family=2 entries=1 op=nft_register_rule pid=3094 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:20:56.179000 audit[3094]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7fff3df27050 a2=0 a3=0 items=0 ppid=3023 pid=3094 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:56.179000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Dec 12 18:20:56.265000 audit[3124]: NETFILTER_CFG table=nat:18 family=10 entries=2 op=nft_register_chain pid=3124 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:20:56.265000 audit[3124]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffdd58c7b40 a2=0 a3=0 items=0 ppid=3023 pid=3124 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:56.265000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Dec 12 18:20:56.267000 audit[3126]: NETFILTER_CFG table=filter:19 family=10 entries=2 op=nft_register_chain pid=3126 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:20:56.267000 audit[3126]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffc7187feb0 a2=0 a3=0 items=0 ppid=3023 pid=3126 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:56.267000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Dec 12 18:20:56.268000 audit[3128]: NETFILTER_CFG table=filter:20 family=10 entries=1 op=nft_register_chain pid=3128 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:20:56.268000 audit[3128]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe3550e9c0 a2=0 a3=0 items=0 ppid=3023 pid=3128 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:56.268000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Dec 12 18:20:56.270000 audit[3130]: NETFILTER_CFG table=filter:21 family=10 entries=1 op=nft_register_chain pid=3130 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:20:56.270000 audit[3130]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc7b715ba0 a2=0 a3=0 items=0 ppid=3023 pid=3130 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:56.270000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Dec 12 18:20:56.272000 audit[3132]: NETFILTER_CFG table=filter:22 family=10 entries=1 op=nft_register_chain pid=3132 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:20:56.272000 audit[3132]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffc5fffc350 a2=0 a3=0 items=0 ppid=3023 pid=3132 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:56.272000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Dec 12 18:20:56.273000 audit[3134]: NETFILTER_CFG table=filter:23 family=10 entries=1 op=nft_register_chain pid=3134 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:20:56.273000 audit[3134]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7fff8d560800 a2=0 a3=0 items=0 ppid=3023 pid=3134 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:56.273000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 12 18:20:56.275000 audit[3136]: NETFILTER_CFG table=filter:24 family=10 entries=1 op=nft_register_chain pid=3136 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:20:56.275000 audit[3136]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffc200bb2c0 a2=0 a3=0 items=0 ppid=3023 pid=3136 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:56.275000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 12 18:20:56.277000 audit[3138]: NETFILTER_CFG table=nat:25 family=10 entries=2 op=nft_register_chain pid=3138 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:20:56.277000 audit[3138]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7fff43d9d5a0 a2=0 a3=0 items=0 ppid=3023 pid=3138 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:56.277000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Dec 12 18:20:56.279000 audit[3140]: NETFILTER_CFG table=nat:26 family=10 entries=2 op=nft_register_chain pid=3140 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:20:56.279000 audit[3140]: SYSCALL arch=c000003e syscall=46 success=yes exit=484 a0=3 a1=7ffc1f8c9050 a2=0 a3=0 items=0 ppid=3023 pid=3140 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:56.279000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003A3A312F313238 Dec 12 18:20:56.281000 audit[3142]: NETFILTER_CFG table=filter:27 family=10 entries=2 op=nft_register_chain pid=3142 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:20:56.281000 audit[3142]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffd0e632eb0 a2=0 a3=0 items=0 ppid=3023 pid=3142 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:56.281000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Dec 12 18:20:56.282000 audit[3144]: NETFILTER_CFG table=filter:28 family=10 entries=1 op=nft_register_rule pid=3144 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:20:56.282000 audit[3144]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7ffc709d60e0 a2=0 a3=0 items=0 ppid=3023 pid=3144 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:56.282000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Dec 12 18:20:56.284000 audit[3146]: NETFILTER_CFG table=filter:29 family=10 entries=1 op=nft_register_rule pid=3146 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:20:56.284000 audit[3146]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7ffe87f77a10 a2=0 a3=0 items=0 ppid=3023 pid=3146 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:56.284000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 12 18:20:56.286000 audit[3148]: NETFILTER_CFG table=filter:30 family=10 entries=1 op=nft_register_rule pid=3148 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:20:56.286000 audit[3148]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7ffdf4e704e0 a2=0 a3=0 items=0 ppid=3023 pid=3148 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:56.286000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Dec 12 18:20:56.290000 audit[3153]: NETFILTER_CFG table=filter:31 family=2 entries=1 op=nft_register_chain pid=3153 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:20:56.290000 audit[3153]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fff98096a40 a2=0 a3=0 items=0 ppid=3023 pid=3153 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:56.290000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Dec 12 18:20:56.292000 audit[3155]: NETFILTER_CFG table=filter:32 family=2 entries=1 op=nft_register_rule pid=3155 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:20:56.292000 audit[3155]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7fff277d1a20 a2=0 a3=0 items=0 ppid=3023 pid=3155 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:56.292000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Dec 12 18:20:56.294000 audit[3157]: NETFILTER_CFG table=filter:33 family=2 entries=1 op=nft_register_rule pid=3157 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:20:56.294000 audit[3157]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffd2c7f5890 a2=0 a3=0 items=0 ppid=3023 pid=3157 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:56.294000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Dec 12 18:20:56.296000 audit[3159]: NETFILTER_CFG table=filter:34 family=10 entries=1 op=nft_register_chain pid=3159 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:20:56.296000 audit[3159]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffdf19b95a0 a2=0 a3=0 items=0 ppid=3023 pid=3159 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:56.296000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Dec 12 18:20:56.297000 audit[3161]: NETFILTER_CFG table=filter:35 family=10 entries=1 op=nft_register_rule pid=3161 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:20:56.297000 audit[3161]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffe000df960 a2=0 a3=0 items=0 ppid=3023 pid=3161 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:56.297000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Dec 12 18:20:56.299000 audit[3163]: NETFILTER_CFG table=filter:36 family=10 entries=1 op=nft_register_rule pid=3163 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:20:56.299000 audit[3163]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffd9fbf7d10 a2=0 a3=0 items=0 ppid=3023 pid=3163 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:56.299000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Dec 12 18:20:56.339000 audit[3168]: NETFILTER_CFG table=nat:37 family=2 entries=2 op=nft_register_chain pid=3168 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:20:56.339000 audit[3168]: SYSCALL arch=c000003e syscall=46 success=yes exit=520 a0=3 a1=7ffc4433ab60 a2=0 a3=0 items=0 ppid=3023 pid=3168 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:56.339000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Dec 12 18:20:56.342000 audit[3170]: NETFILTER_CFG table=nat:38 family=2 entries=1 op=nft_register_rule pid=3170 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:20:56.342000 audit[3170]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7ffd4214bca0 a2=0 a3=0 items=0 ppid=3023 pid=3170 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:56.342000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Dec 12 18:20:56.349000 audit[3178]: NETFILTER_CFG table=filter:39 family=2 entries=1 op=nft_register_rule pid=3178 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:20:56.349000 audit[3178]: SYSCALL arch=c000003e syscall=46 success=yes exit=300 a0=3 a1=7ffc43aa81c0 a2=0 a3=0 items=0 ppid=3023 pid=3178 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:56.349000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D464F5257415244002D6900646F636B657230002D6A00414343455054 Dec 12 18:20:56.354000 audit[3183]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_rule pid=3183 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:20:56.354000 audit[3183]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7fffc7b47600 a2=0 a3=0 items=0 ppid=3023 pid=3183 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:56.354000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45520000002D6900646F636B657230002D6F00646F636B657230002D6A0044524F50 Dec 12 18:20:56.356000 audit[3185]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_rule pid=3185 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 
18:20:56.356000 audit[3185]: SYSCALL arch=c000003e syscall=46 success=yes exit=512 a0=3 a1=7ffccd035d10 a2=0 a3=0 items=0 ppid=3023 pid=3185 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:56.356000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D4354002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Dec 12 18:20:56.357000 audit[3187]: NETFILTER_CFG table=filter:42 family=2 entries=1 op=nft_register_rule pid=3187 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:20:56.357000 audit[3187]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffc7e9103a0 a2=0 a3=0 items=0 ppid=3023 pid=3187 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:56.357000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D425249444745002D6F00646F636B657230002D6A00444F434B4552 Dec 12 18:20:56.359000 audit[3189]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_rule pid=3189 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:20:56.359000 audit[3189]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7ffcccde5d50 a2=0 a3=0 items=0 ppid=3023 pid=3189 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:56.359000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 12 18:20:56.361000 audit[3191]: NETFILTER_CFG table=filter:44 family=2 entries=1 op=nft_register_rule pid=3191 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:20:56.361000 audit[3191]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7fff25860310 a2=0 a3=0 items=0 ppid=3023 pid=3191 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:56.361000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Dec 12 18:20:56.362423 systemd-networkd[2161]: docker0: Link UP Dec 12 18:20:56.376383 dockerd[3023]: time="2025-12-12T18:20:56.376349831Z" level=info msg="Loading containers: done." 
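Every NETFILTER_CFG record in the run above carries the exact iptables/ip6tables invocation hex-encoded in its PROCTITLE field, with NUL bytes separating the argv entries. A small sketch for decoding one of them and for inspecting the chains Docker has just created; the hex string is copied from the first nat-table record above, and the rest assumes only coreutils, xxd and iptables:

    # decode a PROCTITLE payload: plain hex in, argv entries separated by NULs
    echo 2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 \
      | xxd -r -p | tr '\0' ' '; echo
    # -> /usr/bin/iptables --wait -t nat -N DOCKER
    # list the chains and rules that resulted
    iptables -t nat -nvL DOCKER
    iptables -nvL DOCKER-USER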
Dec 12 18:20:56.433506 dockerd[3023]: time="2025-12-12T18:20:56.433471839Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 12 18:20:56.433641 dockerd[3023]: time="2025-12-12T18:20:56.433571563Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Dec 12 18:20:56.433669 dockerd[3023]: time="2025-12-12T18:20:56.433644892Z" level=info msg="Initializing buildkit" Dec 12 18:20:56.478620 dockerd[3023]: time="2025-12-12T18:20:56.478588512Z" level=info msg="Completed buildkit initialization" Dec 12 18:20:56.485445 dockerd[3023]: time="2025-12-12T18:20:56.485410871Z" level=info msg="Daemon has completed initialization" Dec 12 18:20:56.485609 dockerd[3023]: time="2025-12-12T18:20:56.485564439Z" level=info msg="API listen on /run/docker.sock" Dec 12 18:20:56.485742 systemd[1]: Started docker.service - Docker Application Container Engine. Dec 12 18:20:56.485000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:20:57.621372 containerd[2529]: time="2025-12-12T18:20:57.621331061Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\"" Dec 12 18:20:58.484709 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount210404052.mount: Deactivated successfully. Dec 12 18:20:58.738436 kernel: hv_balloon: Max. dynamic memory size: 8192 MB Dec 12 18:20:59.510769 containerd[2529]: time="2025-12-12T18:20:59.510721037Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:20:59.513230 containerd[2529]: time="2025-12-12T18:20:59.513071752Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.7: active requests=0, bytes read=29139145" Dec 12 18:20:59.515746 containerd[2529]: time="2025-12-12T18:20:59.515720217Z" level=info msg="ImageCreate event name:\"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:20:59.519300 containerd[2529]: time="2025-12-12T18:20:59.519269707Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:20:59.519951 containerd[2529]: time="2025-12-12T18:20:59.519927384Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.7\" with image id \"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\", size \"30111311\" in 1.898556669s" Dec 12 18:20:59.520037 containerd[2529]: time="2025-12-12T18:20:59.520024539Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\" returns image reference \"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\"" Dec 12 18:20:59.520825 containerd[2529]: time="2025-12-12T18:20:59.520792806Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\"" Dec 12 18:21:00.873164 containerd[2529]: time="2025-12-12T18:21:00.873116377Z" 
level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:21:00.875420 containerd[2529]: time="2025-12-12T18:21:00.875384609Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.7: active requests=0, bytes read=26011378" Dec 12 18:21:00.877960 containerd[2529]: time="2025-12-12T18:21:00.877903995Z" level=info msg="ImageCreate event name:\"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:21:00.882725 containerd[2529]: time="2025-12-12T18:21:00.881921874Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:21:00.882725 containerd[2529]: time="2025-12-12T18:21:00.882597089Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.7\" with image id \"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\", size \"27673815\" in 1.361776289s" Dec 12 18:21:00.882725 containerd[2529]: time="2025-12-12T18:21:00.882632328Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\" returns image reference \"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\"" Dec 12 18:21:00.883142 containerd[2529]: time="2025-12-12T18:21:00.883125916Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\"" Dec 12 18:21:01.925737 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Dec 12 18:21:01.928907 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 18:21:02.449897 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 18:21:02.448000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:21:02.451846 kernel: kauditd_printk_skb: 133 callbacks suppressed Dec 12 18:21:02.451885 kernel: audit: type=1130 audit(1765563662.448:318): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 18:21:02.460756 (kubelet)[3306]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 12 18:21:02.473297 containerd[2529]: time="2025-12-12T18:21:02.473259015Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:21:02.476915 containerd[2529]: time="2025-12-12T18:21:02.475961186Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.7: active requests=0, bytes read=20149965" Dec 12 18:21:02.479544 containerd[2529]: time="2025-12-12T18:21:02.479500491Z" level=info msg="ImageCreate event name:\"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:21:02.484693 containerd[2529]: time="2025-12-12T18:21:02.484666606Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:21:02.486044 containerd[2529]: time="2025-12-12T18:21:02.486010006Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.7\" with image id \"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\", size \"21815154\" in 1.602810181s" Dec 12 18:21:02.486111 containerd[2529]: time="2025-12-12T18:21:02.486050187Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\" returns image reference \"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\"" Dec 12 18:21:02.486771 containerd[2529]: time="2025-12-12T18:21:02.486741463Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\"" Dec 12 18:21:02.504893 kubelet[3306]: E1212 18:21:02.504863 3306 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 12 18:21:02.506662 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 12 18:21:02.506799 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 12 18:21:02.505000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 12 18:21:02.507148 systemd[1]: kubelet.service: Consumed 145ms CPU time, 110M memory peak. Dec 12 18:21:02.511536 kernel: audit: type=1131 audit(1765563662.505:319): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 12 18:21:03.137612 update_engine[2514]: I20251212 18:21:03.137558 2514 update_attempter.cc:509] Updating boot flags... Dec 12 18:21:03.533828 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount874923248.mount: Deactivated successfully. 
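The PullImage lines come from containerd's CRI plugin, so the control-plane images land in containerd's k8s.io namespace rather than in Docker's image store; kubelet itself is still exiting on the missing config file, so the pulls are presumably driven by the install script run in session 9 (an assumption, not something the log states). A sketch for listing what has arrived so far:

    # images in containerd's k8s.io namespace
    ctr --namespace k8s.io images ls | grep registry.k8s.io
    # the same view through the CRI, if crictl is configured for this socket
    crictl images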
Dec 12 18:21:03.905898 containerd[2529]: time="2025-12-12T18:21:03.905786047Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:21:03.908440 containerd[2529]: time="2025-12-12T18:21:03.908400574Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.7: active requests=0, bytes read=0" Dec 12 18:21:03.911180 containerd[2529]: time="2025-12-12T18:21:03.911127665Z" level=info msg="ImageCreate event name:\"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:21:03.914961 containerd[2529]: time="2025-12-12T18:21:03.914465007Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:21:03.914961 containerd[2529]: time="2025-12-12T18:21:03.914842108Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.7\" with image id \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\", repo tag \"registry.k8s.io/kube-proxy:v1.33.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\", size \"31929115\" in 1.428072499s" Dec 12 18:21:03.914961 containerd[2529]: time="2025-12-12T18:21:03.914868221Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\" returns image reference \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\"" Dec 12 18:21:03.915445 containerd[2529]: time="2025-12-12T18:21:03.915425695Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Dec 12 18:21:04.655412 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2411643658.mount: Deactivated successfully. 
Dec 12 18:21:05.551719 containerd[2529]: time="2025-12-12T18:21:05.551668606Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:21:05.553967 containerd[2529]: time="2025-12-12T18:21:05.553926324Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20128467" Dec 12 18:21:05.556431 containerd[2529]: time="2025-12-12T18:21:05.556376977Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:21:05.559943 containerd[2529]: time="2025-12-12T18:21:05.559901325Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:21:05.560806 containerd[2529]: time="2025-12-12T18:21:05.560637640Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.645184647s" Dec 12 18:21:05.560806 containerd[2529]: time="2025-12-12T18:21:05.560669754Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Dec 12 18:21:05.561129 containerd[2529]: time="2025-12-12T18:21:05.561098844Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Dec 12 18:21:06.042293 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount598294010.mount: Deactivated successfully. 
Dec 12 18:21:06.065597 containerd[2529]: time="2025-12-12T18:21:06.065559479Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 12 18:21:06.068692 containerd[2529]: time="2025-12-12T18:21:06.068530637Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=881" Dec 12 18:21:06.076758 containerd[2529]: time="2025-12-12T18:21:06.076735311Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 12 18:21:06.080240 containerd[2529]: time="2025-12-12T18:21:06.080214429Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 12 18:21:06.080760 containerd[2529]: time="2025-12-12T18:21:06.080634079Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 519.4954ms" Dec 12 18:21:06.080760 containerd[2529]: time="2025-12-12T18:21:06.080662507Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Dec 12 18:21:06.081305 containerd[2529]: time="2025-12-12T18:21:06.081283331Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Dec 12 18:21:06.695307 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4234038663.mount: Deactivated successfully. 
Dec 12 18:21:08.382602 containerd[2529]: time="2025-12-12T18:21:08.382554615Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:21:08.385056 containerd[2529]: time="2025-12-12T18:21:08.384900710Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=46127678" Dec 12 18:21:08.387839 containerd[2529]: time="2025-12-12T18:21:08.387813886Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:21:08.391806 containerd[2529]: time="2025-12-12T18:21:08.391775943Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:21:08.392716 containerd[2529]: time="2025-12-12T18:21:08.392586393Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 2.311277037s" Dec 12 18:21:08.392716 containerd[2529]: time="2025-12-12T18:21:08.392617186Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Dec 12 18:21:12.391577 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 18:21:12.391838 systemd[1]: kubelet.service: Consumed 145ms CPU time, 110M memory peak. Dec 12 18:21:12.391000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:21:12.399859 kernel: audit: type=1130 audit(1765563672.391:320): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:21:12.399934 kernel: audit: type=1131 audit(1765563672.391:321): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:21:12.391000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:21:12.398313 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 18:21:12.421239 systemd[1]: Reload requested from client PID 3497 ('systemctl') (unit session-9.scope)... Dec 12 18:21:12.421254 systemd[1]: Reloading... Dec 12 18:21:12.523868 zram_generator::config[3546]: No configuration found. Dec 12 18:21:12.727970 systemd[1]: Reloading finished in 306 ms. Dec 12 18:21:12.878578 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Dec 12 18:21:12.878676 systemd[1]: kubelet.service: Failed with result 'signal'. Dec 12 18:21:12.879046 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 18:21:12.879115 systemd[1]: kubelet.service: Consumed 91ms CPU time, 92.7M memory peak. 
Dec 12 18:21:12.878000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 12 18:21:12.886481 kernel: audit: type=1130 audit(1765563672.878:322): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 12 18:21:12.886588 kernel: audit: type=1334 audit(1765563672.883:323): prog-id=87 op=LOAD Dec 12 18:21:12.883000 audit: BPF prog-id=87 op=LOAD Dec 12 18:21:12.882784 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 18:21:12.888504 kernel: audit: type=1334 audit(1765563672.883:324): prog-id=69 op=UNLOAD Dec 12 18:21:12.883000 audit: BPF prog-id=69 op=UNLOAD Dec 12 18:21:12.884000 audit: BPF prog-id=88 op=LOAD Dec 12 18:21:12.890573 kernel: audit: type=1334 audit(1765563672.884:325): prog-id=88 op=LOAD Dec 12 18:21:12.884000 audit: BPF prog-id=89 op=LOAD Dec 12 18:21:12.893539 kernel: audit: type=1334 audit(1765563672.884:326): prog-id=89 op=LOAD Dec 12 18:21:12.893592 kernel: audit: type=1334 audit(1765563672.884:327): prog-id=85 op=UNLOAD Dec 12 18:21:12.884000 audit: BPF prog-id=85 op=UNLOAD Dec 12 18:21:12.897166 kernel: audit: type=1334 audit(1765563672.884:328): prog-id=86 op=UNLOAD Dec 12 18:21:12.897231 kernel: audit: type=1334 audit(1765563672.885:329): prog-id=90 op=LOAD Dec 12 18:21:12.884000 audit: BPF prog-id=86 op=UNLOAD Dec 12 18:21:12.885000 audit: BPF prog-id=90 op=LOAD Dec 12 18:21:12.885000 audit: BPF prog-id=70 op=UNLOAD Dec 12 18:21:12.885000 audit: BPF prog-id=91 op=LOAD Dec 12 18:21:12.885000 audit: BPF prog-id=92 op=LOAD Dec 12 18:21:12.885000 audit: BPF prog-id=71 op=UNLOAD Dec 12 18:21:12.885000 audit: BPF prog-id=72 op=UNLOAD Dec 12 18:21:12.887000 audit: BPF prog-id=93 op=LOAD Dec 12 18:21:12.887000 audit: BPF prog-id=73 op=UNLOAD Dec 12 18:21:12.889000 audit: BPF prog-id=94 op=LOAD Dec 12 18:21:12.889000 audit: BPF prog-id=95 op=LOAD Dec 12 18:21:12.889000 audit: BPF prog-id=74 op=UNLOAD Dec 12 18:21:12.889000 audit: BPF prog-id=75 op=UNLOAD Dec 12 18:21:12.889000 audit: BPF prog-id=96 op=LOAD Dec 12 18:21:12.889000 audit: BPF prog-id=68 op=UNLOAD Dec 12 18:21:12.892000 audit: BPF prog-id=97 op=LOAD Dec 12 18:21:12.892000 audit: BPF prog-id=76 op=UNLOAD Dec 12 18:21:12.892000 audit: BPF prog-id=98 op=LOAD Dec 12 18:21:12.892000 audit: BPF prog-id=99 op=LOAD Dec 12 18:21:12.892000 audit: BPF prog-id=77 op=UNLOAD Dec 12 18:21:12.892000 audit: BPF prog-id=78 op=UNLOAD Dec 12 18:21:12.894000 audit: BPF prog-id=100 op=LOAD Dec 12 18:21:12.894000 audit: BPF prog-id=67 op=UNLOAD Dec 12 18:21:12.896000 audit: BPF prog-id=101 op=LOAD Dec 12 18:21:12.896000 audit: BPF prog-id=82 op=UNLOAD Dec 12 18:21:12.896000 audit: BPF prog-id=102 op=LOAD Dec 12 18:21:12.896000 audit: BPF prog-id=103 op=LOAD Dec 12 18:21:12.896000 audit: BPF prog-id=83 op=UNLOAD Dec 12 18:21:12.896000 audit: BPF prog-id=84 op=UNLOAD Dec 12 18:21:12.898000 audit: BPF prog-id=104 op=LOAD Dec 12 18:21:12.898000 audit: BPF prog-id=79 op=UNLOAD Dec 12 18:21:12.898000 audit: BPF prog-id=105 op=LOAD Dec 12 18:21:12.898000 audit: BPF prog-id=106 op=LOAD Dec 12 18:21:12.898000 audit: BPF prog-id=80 op=UNLOAD Dec 12 18:21:12.898000 audit: BPF prog-id=81 op=UNLOAD Dec 12 18:21:13.511809 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
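The burst of BPF prog-id LOAD/UNLOAD audit records here is consistent with systemd re-creating its per-unit BPF programs (device and IP filters) around the daemon reload and the kubelet restart, though the records themselves do not say which unit each program belongs to. A sketch for looking at what is loaded right now, assuming bpftool is available on the node:

    # currently loaded BPF programs; systemd's per-unit filters show up here
    bpftool prog show | head -n 20
    # BPF programs attached below the cgroup root, grouped by cgroup
    bpftool cgroup tree /sys/fs/cgroup | head -n 20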
Dec 12 18:21:13.513000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:21:13.519047 (kubelet)[3613]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 12 18:21:13.558895 kubelet[3613]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 12 18:21:13.558895 kubelet[3613]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 12 18:21:13.558895 kubelet[3613]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 12 18:21:13.559195 kubelet[3613]: I1212 18:21:13.558952 3613 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 12 18:21:13.727704 kubelet[3613]: I1212 18:21:13.727665 3613 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Dec 12 18:21:13.727704 kubelet[3613]: I1212 18:21:13.727694 3613 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 12 18:21:13.727970 kubelet[3613]: I1212 18:21:13.727956 3613 server.go:956] "Client rotation is on, will bootstrap in background" Dec 12 18:21:13.763098 kubelet[3613]: E1212 18:21:13.763058 3613 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.8.12:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.8.12:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Dec 12 18:21:13.764674 kubelet[3613]: I1212 18:21:13.764236 3613 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 12 18:21:13.776274 kubelet[3613]: I1212 18:21:13.774192 3613 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 12 18:21:13.778395 kubelet[3613]: I1212 18:21:13.778376 3613 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 12 18:21:13.778601 kubelet[3613]: I1212 18:21:13.778580 3613 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 12 18:21:13.778752 kubelet[3613]: I1212 18:21:13.778599 3613 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4515.1.0-a-53d1559fda","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 12 18:21:13.778752 kubelet[3613]: I1212 18:21:13.778752 3613 topology_manager.go:138] "Creating topology manager with none policy" Dec 12 18:21:13.778894 kubelet[3613]: I1212 18:21:13.778763 3613 container_manager_linux.go:303] "Creating device plugin manager" Dec 12 18:21:13.778894 kubelet[3613]: I1212 18:21:13.778874 3613 state_mem.go:36] "Initialized new in-memory state store" Dec 12 18:21:13.782627 kubelet[3613]: I1212 18:21:13.782554 3613 kubelet.go:480] "Attempting to sync node with API server" Dec 12 18:21:13.782627 kubelet[3613]: I1212 18:21:13.782579 3613 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 12 18:21:13.782627 kubelet[3613]: I1212 18:21:13.782603 3613 kubelet.go:386] "Adding apiserver pod source" Dec 12 18:21:13.782627 kubelet[3613]: I1212 18:21:13.782619 3613 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 12 18:21:13.794275 kubelet[3613]: E1212 18:21:13.793903 3613 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.8.12:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4515.1.0-a-53d1559fda&limit=500&resourceVersion=0\": dial tcp 10.200.8.12:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 12 18:21:13.795389 kubelet[3613]: E1212 18:21:13.795346 3613 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.8.12:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.8.12:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" 
type="*v1.Service" Dec 12 18:21:13.795546 kubelet[3613]: I1212 18:21:13.795535 3613 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Dec 12 18:21:13.796125 kubelet[3613]: I1212 18:21:13.796111 3613 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Dec 12 18:21:13.796838 kubelet[3613]: W1212 18:21:13.796823 3613 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Dec 12 18:21:13.799080 kubelet[3613]: I1212 18:21:13.799061 3613 watchdog_linux.go:99] "Systemd watchdog is not enabled" Dec 12 18:21:13.799132 kubelet[3613]: I1212 18:21:13.799119 3613 server.go:1289] "Started kubelet" Dec 12 18:21:13.801847 kubelet[3613]: I1212 18:21:13.801828 3613 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 12 18:21:13.806545 kubelet[3613]: E1212 18:21:13.803982 3613 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.8.12:6443/api/v1/namespaces/default/events\": dial tcp 10.200.8.12:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4515.1.0-a-53d1559fda.18808ace9520b283 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4515.1.0-a-53d1559fda,UID:ci-4515.1.0-a-53d1559fda,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4515.1.0-a-53d1559fda,},FirstTimestamp:2025-12-12 18:21:13.799078531 +0000 UTC m=+0.276275264,LastTimestamp:2025-12-12 18:21:13.799078531 +0000 UTC m=+0.276275264,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4515.1.0-a-53d1559fda,}" Dec 12 18:21:13.806545 kubelet[3613]: I1212 18:21:13.805781 3613 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Dec 12 18:21:13.807161 kubelet[3613]: I1212 18:21:13.807148 3613 server.go:317] "Adding debug handlers to kubelet server" Dec 12 18:21:13.807000 audit[3626]: NETFILTER_CFG table=mangle:45 family=2 entries=2 op=nft_register_chain pid=3626 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:21:13.807000 audit[3626]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffc78326000 a2=0 a3=0 items=0 ppid=3613 pid=3626 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:13.807000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Dec 12 18:21:13.808000 audit[3627]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_chain pid=3627 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:21:13.808000 audit[3627]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffbeb49570 a2=0 a3=0 items=0 ppid=3613 pid=3627 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:13.808000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Dec 12 18:21:13.810340 kubelet[3613]: I1212 
18:21:13.810326 3613 volume_manager.go:297] "Starting Kubelet Volume Manager" Dec 12 18:21:13.810651 kubelet[3613]: E1212 18:21:13.810626 3613 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4515.1.0-a-53d1559fda\" not found" Dec 12 18:21:13.811000 audit[3631]: NETFILTER_CFG table=filter:47 family=2 entries=2 op=nft_register_chain pid=3631 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:21:13.811000 audit[3631]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7fff981f4060 a2=0 a3=0 items=0 ppid=3613 pid=3631 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:13.811000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 12 18:21:13.812103 kubelet[3613]: I1212 18:21:13.811955 3613 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 12 18:21:13.812180 kubelet[3613]: I1212 18:21:13.812162 3613 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 12 18:21:13.812342 kubelet[3613]: I1212 18:21:13.812326 3613 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 12 18:21:13.813887 kubelet[3613]: I1212 18:21:13.813745 3613 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Dec 12 18:21:13.813887 kubelet[3613]: I1212 18:21:13.813795 3613 reconciler.go:26] "Reconciler: start to sync state" Dec 12 18:21:13.815147 kubelet[3613]: E1212 18:21:13.814372 3613 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.12:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4515.1.0-a-53d1559fda?timeout=10s\": dial tcp 10.200.8.12:6443: connect: connection refused" interval="200ms" Dec 12 18:21:13.815147 kubelet[3613]: E1212 18:21:13.814936 3613 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.8.12:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.8.12:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 12 18:21:13.815747 kubelet[3613]: I1212 18:21:13.815725 3613 factory.go:223] Registration of the systemd container factory successfully Dec 12 18:21:13.815822 kubelet[3613]: I1212 18:21:13.815806 3613 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 12 18:21:13.815000 audit[3633]: NETFILTER_CFG table=filter:48 family=2 entries=2 op=nft_register_chain pid=3633 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:21:13.815000 audit[3633]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffe231f6a00 a2=0 a3=0 items=0 ppid=3613 pid=3633 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:13.815000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 12 18:21:13.817303 kubelet[3613]: E1212 18:21:13.817280 3613 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 12 18:21:13.817836 kubelet[3613]: I1212 18:21:13.817817 3613 factory.go:223] Registration of the containerd container factory successfully Dec 12 18:21:13.840031 kubelet[3613]: I1212 18:21:13.840012 3613 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 12 18:21:13.840031 kubelet[3613]: I1212 18:21:13.840025 3613 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 12 18:21:13.840141 kubelet[3613]: I1212 18:21:13.840041 3613 state_mem.go:36] "Initialized new in-memory state store" Dec 12 18:21:13.886000 audit[3639]: NETFILTER_CFG table=filter:49 family=2 entries=1 op=nft_register_rule pid=3639 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:21:13.886000 audit[3639]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7ffd9e312990 a2=0 a3=0 items=0 ppid=3613 pid=3639 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:13.886000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Dec 12 18:21:13.887000 audit[3640]: NETFILTER_CFG table=mangle:50 family=10 entries=2 op=nft_register_chain pid=3640 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:21:13.887000 audit[3640]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffd7d944a10 a2=0 a3=0 items=0 ppid=3613 pid=3640 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:13.887000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Dec 12 18:21:13.888000 audit[3642]: NETFILTER_CFG table=mangle:51 family=10 entries=1 op=nft_register_chain pid=3642 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:21:13.888000 audit[3642]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd10a70a90 a2=0 a3=0 items=0 ppid=3613 pid=3642 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:13.888000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Dec 12 18:21:13.889000 audit[3643]: NETFILTER_CFG table=mangle:52 family=2 entries=1 op=nft_register_chain pid=3643 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:21:13.889000 audit[3643]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe13c71750 a2=0 a3=0 items=0 ppid=3613 pid=3643 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:13.889000 audit: 
PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Dec 12 18:21:13.890000 audit[3644]: NETFILTER_CFG table=nat:53 family=2 entries=1 op=nft_register_chain pid=3644 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:21:13.890000 audit[3644]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe3a736130 a2=0 a3=0 items=0 ppid=3613 pid=3644 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:13.890000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Dec 12 18:21:13.890000 audit[3645]: NETFILTER_CFG table=nat:54 family=10 entries=1 op=nft_register_chain pid=3645 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:21:13.890000 audit[3645]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff2cf4f140 a2=0 a3=0 items=0 ppid=3613 pid=3645 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:13.890000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Dec 12 18:21:13.891000 audit[3648]: NETFILTER_CFG table=filter:55 family=10 entries=1 op=nft_register_chain pid=3648 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:21:13.891000 audit[3648]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffded082d50 a2=0 a3=0 items=0 ppid=3613 pid=3648 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:13.891000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Dec 12 18:21:13.891000 audit[3647]: NETFILTER_CFG table=filter:56 family=2 entries=1 op=nft_register_chain pid=3647 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:21:13.891000 audit[3647]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffce1b00ac0 a2=0 a3=0 items=0 ppid=3613 pid=3647 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:13.891000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Dec 12 18:21:13.917696 kubelet[3613]: I1212 18:21:13.886991 3613 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Dec 12 18:21:13.917696 kubelet[3613]: I1212 18:21:13.888117 3613 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Dec 12 18:21:13.917696 kubelet[3613]: I1212 18:21:13.888132 3613 status_manager.go:230] "Starting to sync pod status with apiserver" Dec 12 18:21:13.917696 kubelet[3613]: I1212 18:21:13.888151 3613 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
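
The NETFILTER_CFG/SYSCALL/PROCTITLE triplets above are the kubelet creating its KUBE-IPTABLES-HINT, KUBE-FIREWALL and KUBE-KUBELET-CANARY chains; the PROCTITLE field hex-encodes the full command line with NUL-separated arguments. A small sketch for decoding one (the function name is illustrative):

def decode_proctitle(hex_blob: str) -> str:
    """Decode an audit PROCTITLE value into the command line it records."""
    return bytes.fromhex(hex_blob).replace(b"\x00", b" ").decode()

# The KUBE-IPTABLES-HINT record above decodes to the iptables call kubelet issued:
# decode_proctitle("69707461626C6573002D770035002D5700313030303030002D4E00"
#                  "4B5542452D49505441424C45532D48494E54002D74006D616E676C65")
# -> 'iptables -w 5 -W 100000 -N KUBE-IPTABLES-HINT -t mangle'
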
Dec 12 18:21:13.917696 kubelet[3613]: I1212 18:21:13.888158 3613 kubelet.go:2436] "Starting kubelet main sync loop" Dec 12 18:21:13.917696 kubelet[3613]: E1212 18:21:13.888194 3613 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 12 18:21:13.917696 kubelet[3613]: E1212 18:21:13.889818 3613 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.8.12:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.8.12:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 12 18:21:13.917696 kubelet[3613]: E1212 18:21:13.911745 3613 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4515.1.0-a-53d1559fda\" not found" Dec 12 18:21:13.918507 kubelet[3613]: I1212 18:21:13.918486 3613 policy_none.go:49] "None policy: Start" Dec 12 18:21:13.918507 kubelet[3613]: I1212 18:21:13.918505 3613 memory_manager.go:186] "Starting memorymanager" policy="None" Dec 12 18:21:13.918591 kubelet[3613]: I1212 18:21:13.918531 3613 state_mem.go:35] "Initializing new in-memory state store" Dec 12 18:21:13.984825 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Dec 12 18:21:13.988990 kubelet[3613]: E1212 18:21:13.988958 3613 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 12 18:21:13.992425 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Dec 12 18:21:13.995918 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Dec 12 18:21:14.001601 kubelet[3613]: E1212 18:21:14.001192 3613 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Dec 12 18:21:14.001793 kubelet[3613]: I1212 18:21:14.001782 3613 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 12 18:21:14.002334 kubelet[3613]: I1212 18:21:14.001844 3613 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 12 18:21:14.003505 kubelet[3613]: I1212 18:21:14.003054 3613 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 12 18:21:14.004320 kubelet[3613]: E1212 18:21:14.004303 3613 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Dec 12 18:21:14.004376 kubelet[3613]: E1212 18:21:14.004343 3613 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4515.1.0-a-53d1559fda\" not found" Dec 12 18:21:14.015716 kubelet[3613]: E1212 18:21:14.015612 3613 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.12:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4515.1.0-a-53d1559fda?timeout=10s\": dial tcp 10.200.8.12:6443: connect: connection refused" interval="400ms" Dec 12 18:21:14.104823 kubelet[3613]: I1212 18:21:14.104782 3613 kubelet_node_status.go:75] "Attempting to register node" node="ci-4515.1.0-a-53d1559fda" Dec 12 18:21:14.105180 kubelet[3613]: E1212 18:21:14.105157 3613 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.8.12:6443/api/v1/nodes\": dial tcp 10.200.8.12:6443: connect: connection refused" node="ci-4515.1.0-a-53d1559fda" Dec 12 18:21:14.229812 systemd[1]: Created slice kubepods-burstable-podc4b1af7469176f3c56abfbc44f370e0b.slice - libcontainer container kubepods-burstable-podc4b1af7469176f3c56abfbc44f370e0b.slice. Dec 12 18:21:14.236175 kubelet[3613]: E1212 18:21:14.236144 3613 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4515.1.0-a-53d1559fda\" not found" node="ci-4515.1.0-a-53d1559fda" Dec 12 18:21:14.241236 systemd[1]: Created slice kubepods-burstable-pod8b8390b14c3381141afd35a53580d3d1.slice - libcontainer container kubepods-burstable-pod8b8390b14c3381141afd35a53580d3d1.slice. Dec 12 18:21:14.243145 kubelet[3613]: E1212 18:21:14.243123 3613 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4515.1.0-a-53d1559fda\" not found" node="ci-4515.1.0-a-53d1559fda" Dec 12 18:21:14.245805 systemd[1]: Created slice kubepods-burstable-pod936029b026b25834fc0060775fe7ae67.slice - libcontainer container kubepods-burstable-pod936029b026b25834fc0060775fe7ae67.slice. 
Dec 12 18:21:14.247257 kubelet[3613]: E1212 18:21:14.247229 3613 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4515.1.0-a-53d1559fda\" not found" node="ci-4515.1.0-a-53d1559fda" Dec 12 18:21:14.306962 kubelet[3613]: I1212 18:21:14.306675 3613 kubelet_node_status.go:75] "Attempting to register node" node="ci-4515.1.0-a-53d1559fda" Dec 12 18:21:14.307054 kubelet[3613]: E1212 18:21:14.307004 3613 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.8.12:6443/api/v1/nodes\": dial tcp 10.200.8.12:6443: connect: connection refused" node="ci-4515.1.0-a-53d1559fda" Dec 12 18:21:14.314872 kubelet[3613]: I1212 18:21:14.314455 3613 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c4b1af7469176f3c56abfbc44f370e0b-ca-certs\") pod \"kube-apiserver-ci-4515.1.0-a-53d1559fda\" (UID: \"c4b1af7469176f3c56abfbc44f370e0b\") " pod="kube-system/kube-apiserver-ci-4515.1.0-a-53d1559fda" Dec 12 18:21:14.314872 kubelet[3613]: I1212 18:21:14.314486 3613 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c4b1af7469176f3c56abfbc44f370e0b-k8s-certs\") pod \"kube-apiserver-ci-4515.1.0-a-53d1559fda\" (UID: \"c4b1af7469176f3c56abfbc44f370e0b\") " pod="kube-system/kube-apiserver-ci-4515.1.0-a-53d1559fda" Dec 12 18:21:14.314872 kubelet[3613]: I1212 18:21:14.314508 3613 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c4b1af7469176f3c56abfbc44f370e0b-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4515.1.0-a-53d1559fda\" (UID: \"c4b1af7469176f3c56abfbc44f370e0b\") " pod="kube-system/kube-apiserver-ci-4515.1.0-a-53d1559fda" Dec 12 18:21:14.415252 kubelet[3613]: I1212 18:21:14.415185 3613 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8b8390b14c3381141afd35a53580d3d1-flexvolume-dir\") pod \"kube-controller-manager-ci-4515.1.0-a-53d1559fda\" (UID: \"8b8390b14c3381141afd35a53580d3d1\") " pod="kube-system/kube-controller-manager-ci-4515.1.0-a-53d1559fda" Dec 12 18:21:14.415252 kubelet[3613]: I1212 18:21:14.415226 3613 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8b8390b14c3381141afd35a53580d3d1-k8s-certs\") pod \"kube-controller-manager-ci-4515.1.0-a-53d1559fda\" (UID: \"8b8390b14c3381141afd35a53580d3d1\") " pod="kube-system/kube-controller-manager-ci-4515.1.0-a-53d1559fda" Dec 12 18:21:14.415252 kubelet[3613]: I1212 18:21:14.415245 3613 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/936029b026b25834fc0060775fe7ae67-kubeconfig\") pod \"kube-scheduler-ci-4515.1.0-a-53d1559fda\" (UID: \"936029b026b25834fc0060775fe7ae67\") " pod="kube-system/kube-scheduler-ci-4515.1.0-a-53d1559fda" Dec 12 18:21:14.415617 kubelet[3613]: I1212 18:21:14.415297 3613 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8b8390b14c3381141afd35a53580d3d1-ca-certs\") pod \"kube-controller-manager-ci-4515.1.0-a-53d1559fda\" (UID: 
\"8b8390b14c3381141afd35a53580d3d1\") " pod="kube-system/kube-controller-manager-ci-4515.1.0-a-53d1559fda" Dec 12 18:21:14.415617 kubelet[3613]: I1212 18:21:14.415312 3613 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8b8390b14c3381141afd35a53580d3d1-kubeconfig\") pod \"kube-controller-manager-ci-4515.1.0-a-53d1559fda\" (UID: \"8b8390b14c3381141afd35a53580d3d1\") " pod="kube-system/kube-controller-manager-ci-4515.1.0-a-53d1559fda" Dec 12 18:21:14.415617 kubelet[3613]: I1212 18:21:14.415329 3613 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8b8390b14c3381141afd35a53580d3d1-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4515.1.0-a-53d1559fda\" (UID: \"8b8390b14c3381141afd35a53580d3d1\") " pod="kube-system/kube-controller-manager-ci-4515.1.0-a-53d1559fda" Dec 12 18:21:14.416577 kubelet[3613]: E1212 18:21:14.416544 3613 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.12:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4515.1.0-a-53d1559fda?timeout=10s\": dial tcp 10.200.8.12:6443: connect: connection refused" interval="800ms" Dec 12 18:21:14.538098 containerd[2529]: time="2025-12-12T18:21:14.538056929Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4515.1.0-a-53d1559fda,Uid:c4b1af7469176f3c56abfbc44f370e0b,Namespace:kube-system,Attempt:0,}" Dec 12 18:21:14.544563 containerd[2529]: time="2025-12-12T18:21:14.544502379Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4515.1.0-a-53d1559fda,Uid:8b8390b14c3381141afd35a53580d3d1,Namespace:kube-system,Attempt:0,}" Dec 12 18:21:14.548176 containerd[2529]: time="2025-12-12T18:21:14.548151902Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4515.1.0-a-53d1559fda,Uid:936029b026b25834fc0060775fe7ae67,Namespace:kube-system,Attempt:0,}" Dec 12 18:21:14.620441 kubelet[3613]: E1212 18:21:14.620357 3613 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.8.12:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.8.12:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 12 18:21:14.709127 kubelet[3613]: I1212 18:21:14.709097 3613 kubelet_node_status.go:75] "Attempting to register node" node="ci-4515.1.0-a-53d1559fda" Dec 12 18:21:14.709436 kubelet[3613]: E1212 18:21:14.709405 3613 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.8.12:6443/api/v1/nodes\": dial tcp 10.200.8.12:6443: connect: connection refused" node="ci-4515.1.0-a-53d1559fda" Dec 12 18:21:14.867658 containerd[2529]: time="2025-12-12T18:21:14.867566822Z" level=info msg="connecting to shim 48cf59e570868638a73f2d079a7631195df946dac3a8c4ad28ad7cd33e560152" address="unix:///run/containerd/s/34fbec2f5461089404b10dfdfd6eca356a49bf6e88f9376126a91e9b279e277b" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:21:14.870692 containerd[2529]: time="2025-12-12T18:21:14.869579842Z" level=info msg="connecting to shim dd9a0598bdd14b9596ded03ca297cf45030119c1897d8b98885d76aefe934404" address="unix:///run/containerd/s/2c1e1205ebdaf46df0feb8b4e47718f536812a17b9e3169d856680f36589d527" namespace=k8s.io 
protocol=ttrpc version=3 Dec 12 18:21:14.880190 containerd[2529]: time="2025-12-12T18:21:14.880152986Z" level=info msg="connecting to shim dd89bc3cb4c2612ea6927eb5b5f837f85eba464578b1d965e3e62766cc3b88f9" address="unix:///run/containerd/s/0ef796138deb4d57f27e522154499acc6457d0540272f566afb47721f6cac311" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:21:14.901973 systemd[1]: Started cri-containerd-48cf59e570868638a73f2d079a7631195df946dac3a8c4ad28ad7cd33e560152.scope - libcontainer container 48cf59e570868638a73f2d079a7631195df946dac3a8c4ad28ad7cd33e560152. Dec 12 18:21:14.917684 systemd[1]: Started cri-containerd-dd9a0598bdd14b9596ded03ca297cf45030119c1897d8b98885d76aefe934404.scope - libcontainer container dd9a0598bdd14b9596ded03ca297cf45030119c1897d8b98885d76aefe934404. Dec 12 18:21:14.921129 systemd[1]: Started cri-containerd-dd89bc3cb4c2612ea6927eb5b5f837f85eba464578b1d965e3e62766cc3b88f9.scope - libcontainer container dd89bc3cb4c2612ea6927eb5b5f837f85eba464578b1d965e3e62766cc3b88f9. Dec 12 18:21:14.922000 audit: BPF prog-id=107 op=LOAD Dec 12 18:21:14.922000 audit: BPF prog-id=108 op=LOAD Dec 12 18:21:14.922000 audit[3684]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=3657 pid=3684 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:14.922000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3438636635396535373038363836333861373366326430373961373633 Dec 12 18:21:14.923000 audit: BPF prog-id=108 op=UNLOAD Dec 12 18:21:14.923000 audit[3684]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3657 pid=3684 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:14.923000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3438636635396535373038363836333861373366326430373961373633 Dec 12 18:21:14.923000 audit: BPF prog-id=109 op=LOAD Dec 12 18:21:14.923000 audit[3684]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3657 pid=3684 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:14.923000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3438636635396535373038363836333861373366326430373961373633 Dec 12 18:21:14.923000 audit: BPF prog-id=110 op=LOAD Dec 12 18:21:14.923000 audit[3684]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=3657 pid=3684 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:14.923000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3438636635396535373038363836333861373366326430373961373633 Dec 12 18:21:14.923000 audit: BPF prog-id=110 op=UNLOAD Dec 12 18:21:14.923000 audit[3684]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3657 pid=3684 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:14.923000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3438636635396535373038363836333861373366326430373961373633 Dec 12 18:21:14.923000 audit: BPF prog-id=109 op=UNLOAD Dec 12 18:21:14.923000 audit[3684]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3657 pid=3684 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:14.923000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3438636635396535373038363836333861373366326430373961373633 Dec 12 18:21:14.923000 audit: BPF prog-id=111 op=LOAD Dec 12 18:21:14.923000 audit[3684]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=3657 pid=3684 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:14.923000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3438636635396535373038363836333861373366326430373961373633 Dec 12 18:21:14.931624 kubelet[3613]: E1212 18:21:14.931488 3613 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.8.12:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.8.12:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 12 18:21:14.936000 audit: BPF prog-id=112 op=LOAD Dec 12 18:21:14.937000 audit: BPF prog-id=113 op=LOAD Dec 12 18:21:14.937000 audit[3713]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=3676 pid=3713 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:14.937000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6464396130353938626464313462393539366465643033636132393763 Dec 12 18:21:14.937000 audit: BPF prog-id=113 op=UNLOAD Dec 12 18:21:14.937000 audit[3713]: SYSCALL arch=c000003e syscall=3 
success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3676 pid=3713 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:14.937000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6464396130353938626464313462393539366465643033636132393763 Dec 12 18:21:14.937000 audit: BPF prog-id=114 op=LOAD Dec 12 18:21:14.937000 audit[3713]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=3676 pid=3713 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:14.937000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6464396130353938626464313462393539366465643033636132393763 Dec 12 18:21:14.937000 audit: BPF prog-id=115 op=LOAD Dec 12 18:21:14.937000 audit[3713]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=3676 pid=3713 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:14.937000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6464396130353938626464313462393539366465643033636132393763 Dec 12 18:21:14.937000 audit: BPF prog-id=115 op=UNLOAD Dec 12 18:21:14.937000 audit[3713]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3676 pid=3713 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:14.937000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6464396130353938626464313462393539366465643033636132393763 Dec 12 18:21:14.937000 audit: BPF prog-id=114 op=UNLOAD Dec 12 18:21:14.937000 audit[3713]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3676 pid=3713 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:14.937000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6464396130353938626464313462393539366465643033636132393763 Dec 12 18:21:14.938000 audit: BPF prog-id=116 op=LOAD Dec 12 18:21:14.938000 audit: BPF prog-id=117 op=LOAD Dec 12 18:21:14.938000 audit[3727]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8238 a2=98 a3=0 items=0 ppid=3702 pid=3727 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:14.938000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6464383962633363623463323631326561363932376562356235663833 Dec 12 18:21:14.938000 audit: BPF prog-id=117 op=UNLOAD Dec 12 18:21:14.938000 audit[3727]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3702 pid=3727 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:14.938000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6464383962633363623463323631326561363932376562356235663833 Dec 12 18:21:14.938000 audit: BPF prog-id=118 op=LOAD Dec 12 18:21:14.938000 audit[3727]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=3702 pid=3727 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:14.938000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6464383962633363623463323631326561363932376562356235663833 Dec 12 18:21:14.938000 audit: BPF prog-id=119 op=LOAD Dec 12 18:21:14.938000 audit[3727]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=3702 pid=3727 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:14.938000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6464383962633363623463323631326561363932376562356235663833 Dec 12 18:21:14.938000 audit: BPF prog-id=119 op=UNLOAD Dec 12 18:21:14.938000 audit[3727]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3702 pid=3727 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:14.938000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6464383962633363623463323631326561363932376562356235663833 Dec 12 18:21:14.938000 audit: BPF prog-id=118 op=UNLOAD Dec 12 18:21:14.938000 audit[3727]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3702 pid=3727 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 
18:21:14.938000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6464383962633363623463323631326561363932376562356235663833 Dec 12 18:21:14.938000 audit: BPF prog-id=120 op=LOAD Dec 12 18:21:14.938000 audit[3713]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=3676 pid=3713 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:14.939000 audit: BPF prog-id=121 op=LOAD Dec 12 18:21:14.939000 audit[3727]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 ppid=3702 pid=3727 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:14.939000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6464383962633363623463323631326561363932376562356235663833 Dec 12 18:21:14.938000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6464396130353938626464313462393539366465643033636132393763 Dec 12 18:21:14.979236 containerd[2529]: time="2025-12-12T18:21:14.978962688Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4515.1.0-a-53d1559fda,Uid:c4b1af7469176f3c56abfbc44f370e0b,Namespace:kube-system,Attempt:0,} returns sandbox id \"48cf59e570868638a73f2d079a7631195df946dac3a8c4ad28ad7cd33e560152\"" Dec 12 18:21:14.993167 containerd[2529]: time="2025-12-12T18:21:14.993138567Z" level=info msg="CreateContainer within sandbox \"48cf59e570868638a73f2d079a7631195df946dac3a8c4ad28ad7cd33e560152\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 12 18:21:14.999987 containerd[2529]: time="2025-12-12T18:21:14.999962977Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4515.1.0-a-53d1559fda,Uid:8b8390b14c3381141afd35a53580d3d1,Namespace:kube-system,Attempt:0,} returns sandbox id \"dd9a0598bdd14b9596ded03ca297cf45030119c1897d8b98885d76aefe934404\"" Dec 12 18:21:15.007854 containerd[2529]: time="2025-12-12T18:21:15.007589739Z" level=info msg="CreateContainer within sandbox \"dd9a0598bdd14b9596ded03ca297cf45030119c1897d8b98885d76aefe934404\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 12 18:21:15.007854 containerd[2529]: time="2025-12-12T18:21:15.007830870Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4515.1.0-a-53d1559fda,Uid:936029b026b25834fc0060775fe7ae67,Namespace:kube-system,Attempt:0,} returns sandbox id \"dd89bc3cb4c2612ea6927eb5b5f837f85eba464578b1d965e3e62766cc3b88f9\"" Dec 12 18:21:15.015212 containerd[2529]: time="2025-12-12T18:21:15.015179848Z" level=info msg="CreateContainer within sandbox \"dd89bc3cb4c2612ea6927eb5b5f837f85eba464578b1d965e3e62766cc3b88f9\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 12 18:21:15.018539 containerd[2529]: 
time="2025-12-12T18:21:15.018126323Z" level=info msg="Container 2c29bb40da0769b7e4522b27cc222e3e3d1525e96a1bbc01158c81b0a978a417: CDI devices from CRI Config.CDIDevices: []" Dec 12 18:21:15.048779 containerd[2529]: time="2025-12-12T18:21:15.048753120Z" level=info msg="Container f701e1cc2a346411c41a3856966da6225401284aa7ebcd1df44022d3b15039c9: CDI devices from CRI Config.CDIDevices: []" Dec 12 18:21:15.057492 containerd[2529]: time="2025-12-12T18:21:15.057466531Z" level=info msg="CreateContainer within sandbox \"48cf59e570868638a73f2d079a7631195df946dac3a8c4ad28ad7cd33e560152\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"2c29bb40da0769b7e4522b27cc222e3e3d1525e96a1bbc01158c81b0a978a417\"" Dec 12 18:21:15.058538 containerd[2529]: time="2025-12-12T18:21:15.058496063Z" level=info msg="Container e24fc202b0017d1b35c2e1ca58665b5532f8a4e43a1a1bf162e162809ca7d9d7: CDI devices from CRI Config.CDIDevices: []" Dec 12 18:21:15.059130 containerd[2529]: time="2025-12-12T18:21:15.059105952Z" level=info msg="StartContainer for \"2c29bb40da0769b7e4522b27cc222e3e3d1525e96a1bbc01158c81b0a978a417\"" Dec 12 18:21:15.062701 containerd[2529]: time="2025-12-12T18:21:15.062628983Z" level=info msg="connecting to shim 2c29bb40da0769b7e4522b27cc222e3e3d1525e96a1bbc01158c81b0a978a417" address="unix:///run/containerd/s/34fbec2f5461089404b10dfdfd6eca356a49bf6e88f9376126a91e9b279e277b" protocol=ttrpc version=3 Dec 12 18:21:15.077405 containerd[2529]: time="2025-12-12T18:21:15.077376077Z" level=info msg="CreateContainer within sandbox \"dd9a0598bdd14b9596ded03ca297cf45030119c1897d8b98885d76aefe934404\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"f701e1cc2a346411c41a3856966da6225401284aa7ebcd1df44022d3b15039c9\"" Dec 12 18:21:15.077924 containerd[2529]: time="2025-12-12T18:21:15.077882163Z" level=info msg="StartContainer for \"f701e1cc2a346411c41a3856966da6225401284aa7ebcd1df44022d3b15039c9\"" Dec 12 18:21:15.078845 containerd[2529]: time="2025-12-12T18:21:15.078808213Z" level=info msg="connecting to shim f701e1cc2a346411c41a3856966da6225401284aa7ebcd1df44022d3b15039c9" address="unix:///run/containerd/s/2c1e1205ebdaf46df0feb8b4e47718f536812a17b9e3169d856680f36589d527" protocol=ttrpc version=3 Dec 12 18:21:15.080734 systemd[1]: Started cri-containerd-2c29bb40da0769b7e4522b27cc222e3e3d1525e96a1bbc01158c81b0a978a417.scope - libcontainer container 2c29bb40da0769b7e4522b27cc222e3e3d1525e96a1bbc01158c81b0a978a417. 
Dec 12 18:21:15.087836 kubelet[3613]: E1212 18:21:15.087782 3613 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.8.12:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4515.1.0-a-53d1559fda&limit=500&resourceVersion=0\": dial tcp 10.200.8.12:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 12 18:21:15.098129 containerd[2529]: time="2025-12-12T18:21:15.098102239Z" level=info msg="CreateContainer within sandbox \"dd89bc3cb4c2612ea6927eb5b5f837f85eba464578b1d965e3e62766cc3b88f9\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"e24fc202b0017d1b35c2e1ca58665b5532f8a4e43a1a1bf162e162809ca7d9d7\"" Dec 12 18:21:15.098480 containerd[2529]: time="2025-12-12T18:21:15.098465540Z" level=info msg="StartContainer for \"e24fc202b0017d1b35c2e1ca58665b5532f8a4e43a1a1bf162e162809ca7d9d7\"" Dec 12 18:21:15.099259 containerd[2529]: time="2025-12-12T18:21:15.099241053Z" level=info msg="connecting to shim e24fc202b0017d1b35c2e1ca58665b5532f8a4e43a1a1bf162e162809ca7d9d7" address="unix:///run/containerd/s/0ef796138deb4d57f27e522154499acc6457d0540272f566afb47721f6cac311" protocol=ttrpc version=3 Dec 12 18:21:15.099798 systemd[1]: Started cri-containerd-f701e1cc2a346411c41a3856966da6225401284aa7ebcd1df44022d3b15039c9.scope - libcontainer container f701e1cc2a346411c41a3856966da6225401284aa7ebcd1df44022d3b15039c9. Dec 12 18:21:15.102000 audit: BPF prog-id=122 op=LOAD Dec 12 18:21:15.104000 audit: BPF prog-id=123 op=LOAD Dec 12 18:21:15.104000 audit[3790]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8238 a2=98 a3=0 items=0 ppid=3657 pid=3790 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:15.104000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3263323962623430646130373639623765343532326232376363323232 Dec 12 18:21:15.104000 audit: BPF prog-id=123 op=UNLOAD Dec 12 18:21:15.104000 audit[3790]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3657 pid=3790 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:15.104000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3263323962623430646130373639623765343532326232376363323232 Dec 12 18:21:15.104000 audit: BPF prog-id=124 op=LOAD Dec 12 18:21:15.104000 audit[3790]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=3657 pid=3790 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:15.104000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3263323962623430646130373639623765343532326232376363323232 Dec 12 18:21:15.104000 audit: BPF prog-id=125 op=LOAD Dec 12 18:21:15.104000 audit[3790]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=3657 pid=3790 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:15.104000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3263323962623430646130373639623765343532326232376363323232 Dec 12 18:21:15.104000 audit: BPF prog-id=125 op=UNLOAD Dec 12 18:21:15.104000 audit[3790]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3657 pid=3790 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:15.104000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3263323962623430646130373639623765343532326232376363323232 Dec 12 18:21:15.104000 audit: BPF prog-id=124 op=UNLOAD Dec 12 18:21:15.104000 audit[3790]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3657 pid=3790 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:15.104000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3263323962623430646130373639623765343532326232376363323232 Dec 12 18:21:15.105000 audit: BPF prog-id=126 op=LOAD Dec 12 18:21:15.105000 audit[3790]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 ppid=3657 pid=3790 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:15.105000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3263323962623430646130373639623765343532326232376363323232 Dec 12 18:21:15.128670 systemd[1]: Started cri-containerd-e24fc202b0017d1b35c2e1ca58665b5532f8a4e43a1a1bf162e162809ca7d9d7.scope - libcontainer container e24fc202b0017d1b35c2e1ca58665b5532f8a4e43a1a1bf162e162809ca7d9d7. 
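
The runc audit records above can be tied back to a specific container: their PROCTITLE values embed runc's --log path under /run/containerd/io.containerd.runtime.v2.task/k8s.io/<container id>, and because the records are length-capped only an id prefix survives (e.g. 2c29bb40... for the kube-apiserver container started above). A small sketch of that attribution, decoding the hex argv directly (the helper name is illustrative):

def runc_container_id(proctitle_hex: str) -> str:
    """Container id (possibly a truncated prefix) from a runc PROCTITLE record."""
    argv = bytes.fromhex(proctitle_hex).split(b"\x00")
    log_path = argv[argv.index(b"--log") + 1].decode()
    # Keep the path component right after k8s.io/; in these records the tail is
    # cut off, so this may be only a prefix of the full 64-hex container id.
    return log_path.split("/k8s.io/", 1)[-1].split("/", 1)[0]
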
Dec 12 18:21:15.132000 audit: BPF prog-id=127 op=LOAD Dec 12 18:21:15.133000 audit: BPF prog-id=128 op=LOAD Dec 12 18:21:15.133000 audit[3801]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000138238 a2=98 a3=0 items=0 ppid=3676 pid=3801 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:15.133000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6637303165316363326133343634313163343161333835363936366461 Dec 12 18:21:15.133000 audit: BPF prog-id=128 op=UNLOAD Dec 12 18:21:15.133000 audit[3801]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3676 pid=3801 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:15.133000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6637303165316363326133343634313163343161333835363936366461 Dec 12 18:21:15.134000 audit: BPF prog-id=129 op=LOAD Dec 12 18:21:15.134000 audit[3801]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000138488 a2=98 a3=0 items=0 ppid=3676 pid=3801 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:15.134000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6637303165316363326133343634313163343161333835363936366461 Dec 12 18:21:15.134000 audit: BPF prog-id=130 op=LOAD Dec 12 18:21:15.134000 audit[3801]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000138218 a2=98 a3=0 items=0 ppid=3676 pid=3801 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:15.134000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6637303165316363326133343634313163343161333835363936366461 Dec 12 18:21:15.134000 audit: BPF prog-id=130 op=UNLOAD Dec 12 18:21:15.134000 audit[3801]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3676 pid=3801 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:15.134000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6637303165316363326133343634313163343161333835363936366461 Dec 12 18:21:15.134000 audit: BPF prog-id=129 op=UNLOAD Dec 12 18:21:15.134000 audit[3801]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3676 pid=3801 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:15.134000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6637303165316363326133343634313163343161333835363936366461 Dec 12 18:21:15.134000 audit: BPF prog-id=131 op=LOAD Dec 12 18:21:15.134000 audit[3801]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001386e8 a2=98 a3=0 items=0 ppid=3676 pid=3801 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:15.134000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6637303165316363326133343634313163343161333835363936366461 Dec 12 18:21:15.154000 audit: BPF prog-id=132 op=LOAD Dec 12 18:21:15.155000 audit: BPF prog-id=133 op=LOAD Dec 12 18:21:15.155000 audit[3821]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0238 a2=98 a3=0 items=0 ppid=3702 pid=3821 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:15.155000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6532346663323032623030313764316233356332653163613538363635 Dec 12 18:21:15.155000 audit: BPF prog-id=133 op=UNLOAD Dec 12 18:21:15.155000 audit[3821]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3702 pid=3821 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:15.155000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6532346663323032623030313764316233356332653163613538363635 Dec 12 18:21:15.155000 audit: BPF prog-id=134 op=LOAD Dec 12 18:21:15.155000 audit[3821]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0488 a2=98 a3=0 items=0 ppid=3702 pid=3821 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:15.155000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6532346663323032623030313764316233356332653163613538363635 Dec 12 18:21:15.155000 audit: BPF prog-id=135 op=LOAD Dec 12 18:21:15.155000 audit[3821]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001b0218 a2=98 a3=0 items=0 ppid=3702 pid=3821 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:15.155000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6532346663323032623030313764316233356332653163613538363635 Dec 12 18:21:15.155000 audit: BPF prog-id=135 op=UNLOAD Dec 12 18:21:15.155000 audit[3821]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3702 pid=3821 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:15.155000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6532346663323032623030313764316233356332653163613538363635 Dec 12 18:21:15.155000 audit: BPF prog-id=134 op=UNLOAD Dec 12 18:21:15.155000 audit[3821]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3702 pid=3821 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:15.155000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6532346663323032623030313764316233356332653163613538363635 Dec 12 18:21:15.155000 audit: BPF prog-id=136 op=LOAD Dec 12 18:21:15.155000 audit[3821]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b06e8 a2=98 a3=0 items=0 ppid=3702 pid=3821 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:15.155000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6532346663323032623030313764316233356332653163613538363635 Dec 12 18:21:15.171684 containerd[2529]: time="2025-12-12T18:21:15.171652220Z" level=info msg="StartContainer for \"2c29bb40da0769b7e4522b27cc222e3e3d1525e96a1bbc01158c81b0a978a417\" returns successfully" Dec 12 18:21:15.183652 kubelet[3613]: E1212 18:21:15.183442 3613 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.8.12:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.8.12:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 12 18:21:15.217872 kubelet[3613]: E1212 18:21:15.217747 3613 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.12:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4515.1.0-a-53d1559fda?timeout=10s\": dial tcp 10.200.8.12:6443: connect: connection refused" interval="1.6s" Dec 12 18:21:15.219911 containerd[2529]: time="2025-12-12T18:21:15.219878924Z" level=info msg="StartContainer 
for \"f701e1cc2a346411c41a3856966da6225401284aa7ebcd1df44022d3b15039c9\" returns successfully" Dec 12 18:21:15.220987 containerd[2529]: time="2025-12-12T18:21:15.220753664Z" level=info msg="StartContainer for \"e24fc202b0017d1b35c2e1ca58665b5532f8a4e43a1a1bf162e162809ca7d9d7\" returns successfully" Dec 12 18:21:15.522534 kubelet[3613]: I1212 18:21:15.519538 3613 kubelet_node_status.go:75] "Attempting to register node" node="ci-4515.1.0-a-53d1559fda" Dec 12 18:21:15.901165 kubelet[3613]: E1212 18:21:15.900888 3613 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4515.1.0-a-53d1559fda\" not found" node="ci-4515.1.0-a-53d1559fda" Dec 12 18:21:15.905325 kubelet[3613]: E1212 18:21:15.905294 3613 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4515.1.0-a-53d1559fda\" not found" node="ci-4515.1.0-a-53d1559fda" Dec 12 18:21:15.907089 kubelet[3613]: E1212 18:21:15.907063 3613 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4515.1.0-a-53d1559fda\" not found" node="ci-4515.1.0-a-53d1559fda" Dec 12 18:21:16.909192 kubelet[3613]: E1212 18:21:16.909128 3613 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4515.1.0-a-53d1559fda\" not found" node="ci-4515.1.0-a-53d1559fda" Dec 12 18:21:16.910061 kubelet[3613]: E1212 18:21:16.909765 3613 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4515.1.0-a-53d1559fda\" not found" node="ci-4515.1.0-a-53d1559fda" Dec 12 18:21:17.048309 kubelet[3613]: E1212 18:21:17.048258 3613 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4515.1.0-a-53d1559fda\" not found" node="ci-4515.1.0-a-53d1559fda" Dec 12 18:21:17.113320 kubelet[3613]: I1212 18:21:17.113267 3613 kubelet_node_status.go:78] "Successfully registered node" node="ci-4515.1.0-a-53d1559fda" Dec 12 18:21:17.113320 kubelet[3613]: E1212 18:21:17.113296 3613 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4515.1.0-a-53d1559fda\": node \"ci-4515.1.0-a-53d1559fda\" not found" Dec 12 18:21:17.126739 kubelet[3613]: E1212 18:21:17.126709 3613 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4515.1.0-a-53d1559fda\" not found" Dec 12 18:21:17.227746 kubelet[3613]: E1212 18:21:17.227716 3613 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4515.1.0-a-53d1559fda\" not found" Dec 12 18:21:17.311627 kubelet[3613]: I1212 18:21:17.311597 3613 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4515.1.0-a-53d1559fda" Dec 12 18:21:17.316608 kubelet[3613]: E1212 18:21:17.316575 3613 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4515.1.0-a-53d1559fda\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4515.1.0-a-53d1559fda" Dec 12 18:21:17.316608 kubelet[3613]: I1212 18:21:17.316597 3613 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4515.1.0-a-53d1559fda" Dec 12 18:21:17.318016 kubelet[3613]: E1212 18:21:17.317991 3613 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4515.1.0-a-53d1559fda\" is forbidden: no PriorityClass with 
name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4515.1.0-a-53d1559fda" Dec 12 18:21:17.318016 kubelet[3613]: I1212 18:21:17.318012 3613 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4515.1.0-a-53d1559fda" Dec 12 18:21:17.319266 kubelet[3613]: E1212 18:21:17.319239 3613 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4515.1.0-a-53d1559fda\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4515.1.0-a-53d1559fda" Dec 12 18:21:17.730274 kubelet[3613]: I1212 18:21:17.730247 3613 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4515.1.0-a-53d1559fda" Dec 12 18:21:17.732166 kubelet[3613]: E1212 18:21:17.732136 3613 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4515.1.0-a-53d1559fda\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4515.1.0-a-53d1559fda" Dec 12 18:21:17.795465 kubelet[3613]: I1212 18:21:17.795435 3613 apiserver.go:52] "Watching apiserver" Dec 12 18:21:17.814817 kubelet[3613]: I1212 18:21:17.814793 3613 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Dec 12 18:21:18.977189 systemd[1]: Reload requested from client PID 3892 ('systemctl') (unit session-9.scope)... Dec 12 18:21:18.977368 systemd[1]: Reloading... Dec 12 18:21:19.097596 zram_generator::config[3948]: No configuration found. Dec 12 18:21:19.295442 systemd[1]: Reloading finished in 317 ms. Dec 12 18:21:19.320986 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 18:21:19.337450 systemd[1]: kubelet.service: Deactivated successfully. Dec 12 18:21:19.337726 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 18:21:19.346074 kernel: kauditd_printk_skb: 202 callbacks suppressed Dec 12 18:21:19.346129 kernel: audit: type=1131 audit(1765563679.337:424): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:21:19.337000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:21:19.337787 systemd[1]: kubelet.service: Consumed 571ms CPU time, 129.6M memory peak. Dec 12 18:21:19.341768 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
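The "Failed creating a mirror pod" errors above are admission rejections: at this point in the bootstrap the system-node-critical PriorityClass does not exist yet, so the mirror pods for the static control-plane pods are refused until it is created. A minimal sketch, assuming the official kubernetes Python client and a kubeconfig that can reach the apiserver at https://10.200.8.12:6443, of how one could check for that PriorityClass:

# Check whether the PriorityClass required by the static pods exists yet.
# Assumes the kubernetes Python client and a working kubeconfig.
from kubernetes import client, config
from kubernetes.client.rest import ApiException

def has_priority_class(name: str = "system-node-critical") -> bool:
    config.load_kube_config()
    try:
        client.SchedulingV1Api().read_priority_class(name)
        return True
    except ApiException as exc:
        if exc.status == 404:
            return False
        raise

if __name__ == "__main__":
    print(has_priority_class())
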
Dec 12 18:21:19.344000 audit: BPF prog-id=137 op=LOAD Dec 12 18:21:19.344000 audit: BPF prog-id=101 op=UNLOAD Dec 12 18:21:19.344000 audit: BPF prog-id=138 op=LOAD Dec 12 18:21:19.346583 kernel: audit: type=1334 audit(1765563679.344:425): prog-id=137 op=LOAD Dec 12 18:21:19.346606 kernel: audit: type=1334 audit(1765563679.344:426): prog-id=101 op=UNLOAD Dec 12 18:21:19.346625 kernel: audit: type=1334 audit(1765563679.344:427): prog-id=138 op=LOAD Dec 12 18:21:19.344000 audit: BPF prog-id=139 op=LOAD Dec 12 18:21:19.344000 audit: BPF prog-id=102 op=UNLOAD Dec 12 18:21:19.344000 audit: BPF prog-id=103 op=UNLOAD Dec 12 18:21:19.345000 audit: BPF prog-id=140 op=LOAD Dec 12 18:21:19.345000 audit: BPF prog-id=93 op=UNLOAD Dec 12 18:21:19.346000 audit: BPF prog-id=141 op=LOAD Dec 12 18:21:19.346000 audit: BPF prog-id=142 op=LOAD Dec 12 18:21:19.346000 audit: BPF prog-id=94 op=UNLOAD Dec 12 18:21:19.346000 audit: BPF prog-id=95 op=UNLOAD Dec 12 18:21:19.348466 kernel: audit: type=1334 audit(1765563679.344:428): prog-id=139 op=LOAD Dec 12 18:21:19.348531 kernel: audit: type=1334 audit(1765563679.344:429): prog-id=102 op=UNLOAD Dec 12 18:21:19.348553 kernel: audit: type=1334 audit(1765563679.344:430): prog-id=103 op=UNLOAD Dec 12 18:21:19.348570 kernel: audit: type=1334 audit(1765563679.345:431): prog-id=140 op=LOAD Dec 12 18:21:19.348589 kernel: audit: type=1334 audit(1765563679.345:432): prog-id=93 op=UNLOAD Dec 12 18:21:19.348605 kernel: audit: type=1334 audit(1765563679.346:433): prog-id=141 op=LOAD Dec 12 18:21:19.346000 audit: BPF prog-id=143 op=LOAD Dec 12 18:21:19.346000 audit: BPF prog-id=90 op=UNLOAD Dec 12 18:21:19.346000 audit: BPF prog-id=144 op=LOAD Dec 12 18:21:19.346000 audit: BPF prog-id=145 op=LOAD Dec 12 18:21:19.346000 audit: BPF prog-id=91 op=UNLOAD Dec 12 18:21:19.346000 audit: BPF prog-id=92 op=UNLOAD Dec 12 18:21:19.346000 audit: BPF prog-id=146 op=LOAD Dec 12 18:21:19.346000 audit: BPF prog-id=97 op=UNLOAD Dec 12 18:21:19.346000 audit: BPF prog-id=147 op=LOAD Dec 12 18:21:19.346000 audit: BPF prog-id=148 op=LOAD Dec 12 18:21:19.346000 audit: BPF prog-id=98 op=UNLOAD Dec 12 18:21:19.348000 audit: BPF prog-id=99 op=UNLOAD Dec 12 18:21:19.348000 audit: BPF prog-id=149 op=LOAD Dec 12 18:21:19.348000 audit: BPF prog-id=150 op=LOAD Dec 12 18:21:19.348000 audit: BPF prog-id=88 op=UNLOAD Dec 12 18:21:19.348000 audit: BPF prog-id=89 op=UNLOAD Dec 12 18:21:19.349000 audit: BPF prog-id=151 op=LOAD Dec 12 18:21:19.349000 audit: BPF prog-id=100 op=UNLOAD Dec 12 18:21:19.351000 audit: BPF prog-id=152 op=LOAD Dec 12 18:21:19.351000 audit: BPF prog-id=87 op=UNLOAD Dec 12 18:21:19.351000 audit: BPF prog-id=153 op=LOAD Dec 12 18:21:19.351000 audit: BPF prog-id=104 op=UNLOAD Dec 12 18:21:19.351000 audit: BPF prog-id=154 op=LOAD Dec 12 18:21:19.351000 audit: BPF prog-id=155 op=LOAD Dec 12 18:21:19.351000 audit: BPF prog-id=105 op=UNLOAD Dec 12 18:21:19.351000 audit: BPF prog-id=106 op=UNLOAD Dec 12 18:21:19.351000 audit: BPF prog-id=156 op=LOAD Dec 12 18:21:19.353000 audit: BPF prog-id=96 op=UNLOAD Dec 12 18:21:20.140897 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 18:21:20.140000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 18:21:20.149780 (kubelet)[4009]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 12 18:21:20.192406 kubelet[4009]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 12 18:21:20.192406 kubelet[4009]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 12 18:21:20.192406 kubelet[4009]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 12 18:21:20.192790 kubelet[4009]: I1212 18:21:20.192448 4009 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 12 18:21:20.200291 kubelet[4009]: I1212 18:21:20.200228 4009 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Dec 12 18:21:20.200291 kubelet[4009]: I1212 18:21:20.200250 4009 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 12 18:21:20.200626 kubelet[4009]: I1212 18:21:20.200562 4009 server.go:956] "Client rotation is on, will bootstrap in background" Dec 12 18:21:20.205405 kubelet[4009]: I1212 18:21:20.205347 4009 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Dec 12 18:21:20.207470 kubelet[4009]: I1212 18:21:20.207168 4009 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 12 18:21:20.211533 kubelet[4009]: I1212 18:21:20.211502 4009 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 12 18:21:20.215102 kubelet[4009]: I1212 18:21:20.215049 4009 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 12 18:21:20.215270 kubelet[4009]: I1212 18:21:20.215213 4009 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 12 18:21:20.215389 kubelet[4009]: I1212 18:21:20.215233 4009 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4515.1.0-a-53d1559fda","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 12 18:21:20.215481 kubelet[4009]: I1212 18:21:20.215396 4009 topology_manager.go:138] "Creating topology manager with none policy" Dec 12 18:21:20.215481 kubelet[4009]: I1212 18:21:20.215407 4009 container_manager_linux.go:303] "Creating device plugin manager" Dec 12 18:21:20.215481 kubelet[4009]: I1212 18:21:20.215451 4009 state_mem.go:36] "Initialized new in-memory state store" Dec 12 18:21:20.215831 kubelet[4009]: I1212 18:21:20.215732 4009 kubelet.go:480] "Attempting to sync node with API server" Dec 12 18:21:20.215831 kubelet[4009]: I1212 18:21:20.215752 4009 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 12 18:21:20.216171 kubelet[4009]: I1212 18:21:20.216135 4009 kubelet.go:386] "Adding apiserver pod source" Dec 12 18:21:20.216171 kubelet[4009]: I1212 18:21:20.216173 4009 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 12 18:21:20.268322 kubelet[4009]: I1212 18:21:20.268236 4009 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Dec 12 18:21:20.269563 kubelet[4009]: I1212 18:21:20.268931 4009 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Dec 12 18:21:20.274400 kubelet[4009]: I1212 18:21:20.274375 4009 watchdog_linux.go:99] "Systemd watchdog is not enabled" Dec 12 18:21:20.274467 kubelet[4009]: I1212 18:21:20.274434 4009 server.go:1289] "Started kubelet" Dec 12 18:21:20.275869 kubelet[4009]: I1212 18:21:20.275848 4009 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 12 18:21:20.277814 kubelet[4009]: I1212 
18:21:20.277772 4009 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Dec 12 18:21:20.278474 kubelet[4009]: I1212 18:21:20.278458 4009 server.go:317] "Adding debug handlers to kubelet server" Dec 12 18:21:20.279168 kubelet[4009]: I1212 18:21:20.279130 4009 volume_manager.go:297] "Starting Kubelet Volume Manager" Dec 12 18:21:20.279545 kubelet[4009]: I1212 18:21:20.279460 4009 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Dec 12 18:21:20.282627 kubelet[4009]: I1212 18:21:20.282605 4009 reconciler.go:26] "Reconciler: start to sync state" Dec 12 18:21:20.284643 kubelet[4009]: I1212 18:21:20.283989 4009 factory.go:223] Registration of the systemd container factory successfully Dec 12 18:21:20.284643 kubelet[4009]: I1212 18:21:20.284072 4009 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 12 18:21:20.285082 kubelet[4009]: I1212 18:21:20.285039 4009 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 12 18:21:20.285222 kubelet[4009]: I1212 18:21:20.285206 4009 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 12 18:21:20.285357 kubelet[4009]: I1212 18:21:20.285347 4009 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 12 18:21:20.287714 kubelet[4009]: I1212 18:21:20.287257 4009 factory.go:223] Registration of the containerd container factory successfully Dec 12 18:21:20.307855 kubelet[4009]: I1212 18:21:20.307498 4009 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Dec 12 18:21:20.310349 kubelet[4009]: I1212 18:21:20.309815 4009 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Dec 12 18:21:20.310349 kubelet[4009]: I1212 18:21:20.309857 4009 status_manager.go:230] "Starting to sync pod status with apiserver" Dec 12 18:21:20.310349 kubelet[4009]: I1212 18:21:20.309875 4009 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
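The NodeConfig dump above expresses the hard eviction thresholds as one absolute quantity (memory.available below 100Mi) plus percentages of filesystem capacity. A small illustrative calculation, using hypothetical capacities rather than anything reported by this node, of what those percentages translate to:

# Convert the hard-eviction thresholds from the NodeConfig dump above into
# absolute values. The capacities below are hypothetical examples.
GIB = 1024 ** 3
THRESHOLDS = {
    "memory.available": ("quantity", 100 * 1024 ** 2),  # 100Mi
    "nodefs.available": ("percentage", 0.10),
    "nodefs.inodesFree": ("percentage", 0.05),
    "imagefs.available": ("percentage", 0.15),
    "imagefs.inodesFree": ("percentage", 0.05),
}
CAPACITIES = {  # hypothetical node capacities
    "memory.available": 16 * GIB,    # bytes of RAM
    "nodefs.available": 100 * GIB,   # bytes on the node filesystem
    "nodefs.inodesFree": 6_553_600,  # inodes
    "imagefs.available": 100 * GIB,  # bytes on the image filesystem
    "imagefs.inodesFree": 6_553_600, # inodes
}

for signal, (kind, value) in THRESHOLDS.items():
    limit = value if kind == "quantity" else value * CAPACITIES[signal]
    print(f"{signal}: evict when below {limit:,.0f}")
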
Dec 12 18:21:20.310349 kubelet[4009]: I1212 18:21:20.309882 4009 kubelet.go:2436] "Starting kubelet main sync loop" Dec 12 18:21:20.310349 kubelet[4009]: E1212 18:21:20.309970 4009 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 12 18:21:20.351532 kubelet[4009]: I1212 18:21:20.351496 4009 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 12 18:21:20.351532 kubelet[4009]: I1212 18:21:20.351512 4009 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 12 18:21:20.351627 kubelet[4009]: I1212 18:21:20.351544 4009 state_mem.go:36] "Initialized new in-memory state store" Dec 12 18:21:20.351663 kubelet[4009]: I1212 18:21:20.351652 4009 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 12 18:21:20.351695 kubelet[4009]: I1212 18:21:20.351663 4009 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 12 18:21:20.351695 kubelet[4009]: I1212 18:21:20.351678 4009 policy_none.go:49] "None policy: Start" Dec 12 18:21:20.351695 kubelet[4009]: I1212 18:21:20.351687 4009 memory_manager.go:186] "Starting memorymanager" policy="None" Dec 12 18:21:20.351844 kubelet[4009]: I1212 18:21:20.351696 4009 state_mem.go:35] "Initializing new in-memory state store" Dec 12 18:21:20.351844 kubelet[4009]: I1212 18:21:20.351817 4009 state_mem.go:75] "Updated machine memory state" Dec 12 18:21:20.356266 kubelet[4009]: E1212 18:21:20.356249 4009 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Dec 12 18:21:20.356765 kubelet[4009]: I1212 18:21:20.356713 4009 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 12 18:21:20.356765 kubelet[4009]: I1212 18:21:20.356734 4009 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 12 18:21:20.357304 kubelet[4009]: I1212 18:21:20.357255 4009 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 12 18:21:20.358642 kubelet[4009]: E1212 18:21:20.358626 4009 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Dec 12 18:21:20.411671 kubelet[4009]: I1212 18:21:20.411560 4009 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4515.1.0-a-53d1559fda" Dec 12 18:21:20.413198 kubelet[4009]: I1212 18:21:20.413059 4009 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4515.1.0-a-53d1559fda" Dec 12 18:21:20.413631 kubelet[4009]: I1212 18:21:20.413619 4009 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4515.1.0-a-53d1559fda" Dec 12 18:21:20.421499 kubelet[4009]: I1212 18:21:20.421477 4009 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Dec 12 18:21:20.427123 kubelet[4009]: I1212 18:21:20.427023 4009 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Dec 12 18:21:20.427615 kubelet[4009]: I1212 18:21:20.427589 4009 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Dec 12 18:21:20.466416 kubelet[4009]: I1212 18:21:20.466396 4009 kubelet_node_status.go:75] "Attempting to register node" node="ci-4515.1.0-a-53d1559fda" Dec 12 18:21:20.479562 kubelet[4009]: I1212 18:21:20.479291 4009 kubelet_node_status.go:124] "Node was previously registered" node="ci-4515.1.0-a-53d1559fda" Dec 12 18:21:20.481175 kubelet[4009]: I1212 18:21:20.480948 4009 kubelet_node_status.go:78] "Successfully registered node" node="ci-4515.1.0-a-53d1559fda" Dec 12 18:21:20.483818 kubelet[4009]: I1212 18:21:20.483462 4009 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/936029b026b25834fc0060775fe7ae67-kubeconfig\") pod \"kube-scheduler-ci-4515.1.0-a-53d1559fda\" (UID: \"936029b026b25834fc0060775fe7ae67\") " pod="kube-system/kube-scheduler-ci-4515.1.0-a-53d1559fda" Dec 12 18:21:20.483818 kubelet[4009]: I1212 18:21:20.483497 4009 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8b8390b14c3381141afd35a53580d3d1-flexvolume-dir\") pod \"kube-controller-manager-ci-4515.1.0-a-53d1559fda\" (UID: \"8b8390b14c3381141afd35a53580d3d1\") " pod="kube-system/kube-controller-manager-ci-4515.1.0-a-53d1559fda" Dec 12 18:21:20.483818 kubelet[4009]: I1212 18:21:20.483533 4009 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8b8390b14c3381141afd35a53580d3d1-k8s-certs\") pod \"kube-controller-manager-ci-4515.1.0-a-53d1559fda\" (UID: \"8b8390b14c3381141afd35a53580d3d1\") " pod="kube-system/kube-controller-manager-ci-4515.1.0-a-53d1559fda" Dec 12 18:21:20.483818 kubelet[4009]: I1212 18:21:20.483553 4009 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8b8390b14c3381141afd35a53580d3d1-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4515.1.0-a-53d1559fda\" (UID: \"8b8390b14c3381141afd35a53580d3d1\") " pod="kube-system/kube-controller-manager-ci-4515.1.0-a-53d1559fda" Dec 12 18:21:20.483818 kubelet[4009]: 
I1212 18:21:20.483573 4009 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c4b1af7469176f3c56abfbc44f370e0b-ca-certs\") pod \"kube-apiserver-ci-4515.1.0-a-53d1559fda\" (UID: \"c4b1af7469176f3c56abfbc44f370e0b\") " pod="kube-system/kube-apiserver-ci-4515.1.0-a-53d1559fda" Dec 12 18:21:20.483996 kubelet[4009]: I1212 18:21:20.483596 4009 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c4b1af7469176f3c56abfbc44f370e0b-k8s-certs\") pod \"kube-apiserver-ci-4515.1.0-a-53d1559fda\" (UID: \"c4b1af7469176f3c56abfbc44f370e0b\") " pod="kube-system/kube-apiserver-ci-4515.1.0-a-53d1559fda" Dec 12 18:21:20.483996 kubelet[4009]: I1212 18:21:20.483616 4009 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c4b1af7469176f3c56abfbc44f370e0b-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4515.1.0-a-53d1559fda\" (UID: \"c4b1af7469176f3c56abfbc44f370e0b\") " pod="kube-system/kube-apiserver-ci-4515.1.0-a-53d1559fda" Dec 12 18:21:20.483996 kubelet[4009]: I1212 18:21:20.483632 4009 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8b8390b14c3381141afd35a53580d3d1-ca-certs\") pod \"kube-controller-manager-ci-4515.1.0-a-53d1559fda\" (UID: \"8b8390b14c3381141afd35a53580d3d1\") " pod="kube-system/kube-controller-manager-ci-4515.1.0-a-53d1559fda" Dec 12 18:21:20.483996 kubelet[4009]: I1212 18:21:20.483649 4009 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8b8390b14c3381141afd35a53580d3d1-kubeconfig\") pod \"kube-controller-manager-ci-4515.1.0-a-53d1559fda\" (UID: \"8b8390b14c3381141afd35a53580d3d1\") " pod="kube-system/kube-controller-manager-ci-4515.1.0-a-53d1559fda" Dec 12 18:21:21.220225 kubelet[4009]: I1212 18:21:21.219987 4009 apiserver.go:52] "Watching apiserver" Dec 12 18:21:21.279840 kubelet[4009]: I1212 18:21:21.279806 4009 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Dec 12 18:21:21.307921 kubelet[4009]: I1212 18:21:21.307864 4009 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4515.1.0-a-53d1559fda" podStartSLOduration=1.307830365 podStartE2EDuration="1.307830365s" podCreationTimestamp="2025-12-12 18:21:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 18:21:21.307684989 +0000 UTC m=+1.153568637" watchObservedRunningTime="2025-12-12 18:21:21.307830365 +0000 UTC m=+1.153714004" Dec 12 18:21:21.316162 kubelet[4009]: I1212 18:21:21.316108 4009 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4515.1.0-a-53d1559fda" podStartSLOduration=1.316096365 podStartE2EDuration="1.316096365s" podCreationTimestamp="2025-12-12 18:21:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 18:21:21.315781127 +0000 UTC m=+1.161664777" watchObservedRunningTime="2025-12-12 18:21:21.316096365 +0000 UTC m=+1.161980009" Dec 12 18:21:21.345386 kubelet[4009]: I1212 
18:21:21.344987 4009 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4515.1.0-a-53d1559fda" Dec 12 18:21:21.345386 kubelet[4009]: I1212 18:21:21.345235 4009 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4515.1.0-a-53d1559fda" Dec 12 18:21:21.346864 kubelet[4009]: I1212 18:21:21.346838 4009 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4515.1.0-a-53d1559fda" Dec 12 18:21:21.358820 kubelet[4009]: I1212 18:21:21.358597 4009 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Dec 12 18:21:21.358820 kubelet[4009]: E1212 18:21:21.358643 4009 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4515.1.0-a-53d1559fda\" already exists" pod="kube-system/kube-apiserver-ci-4515.1.0-a-53d1559fda" Dec 12 18:21:21.359388 kubelet[4009]: I1212 18:21:21.359154 4009 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4515.1.0-a-53d1559fda" podStartSLOduration=1.359143761 podStartE2EDuration="1.359143761s" podCreationTimestamp="2025-12-12 18:21:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 18:21:21.325056189 +0000 UTC m=+1.170939839" watchObservedRunningTime="2025-12-12 18:21:21.359143761 +0000 UTC m=+1.205027412" Dec 12 18:21:21.359648 kubelet[4009]: I1212 18:21:21.359619 4009 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Dec 12 18:21:21.359753 kubelet[4009]: E1212 18:21:21.359743 4009 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4515.1.0-a-53d1559fda\" already exists" pod="kube-system/kube-scheduler-ci-4515.1.0-a-53d1559fda" Dec 12 18:21:21.359871 kubelet[4009]: I1212 18:21:21.359793 4009 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Dec 12 18:21:21.359905 kubelet[4009]: E1212 18:21:21.359890 4009 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4515.1.0-a-53d1559fda\" already exists" pod="kube-system/kube-controller-manager-ci-4515.1.0-a-53d1559fda" Dec 12 18:21:24.394926 kubelet[4009]: I1212 18:21:24.394890 4009 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 12 18:21:24.395503 containerd[2529]: time="2025-12-12T18:21:24.395470788Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Dec 12 18:21:24.395828 kubelet[4009]: I1212 18:21:24.395739 4009 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 12 18:21:25.402198 systemd[1]: Created slice kubepods-besteffort-pod630a9bdf_37dd_4584_a250_9a6924f25025.slice - libcontainer container kubepods-besteffort-pod630a9bdf_37dd_4584_a250_9a6924f25025.slice. 
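The "Created slice" entry just above shows the naming scheme the kubelet's systemd cgroup driver uses for per-pod slices: the QoS class plus the pod UID with its dashes replaced by underscores, since dashes in systemd slice names encode the slice hierarchy and cannot appear inside the UID. A tiny sketch reproducing the leaf slice name seen in the log:

# Reproduce the per-pod systemd slice unit name from the "Created slice"
# entry above (leaf unit only; parent slices such as kubepods.slice omitted).
def pod_slice_name(pod_uid: str, qos_class: str = "besteffort") -> str:
    # Dashes in slice names denote hierarchy, so the kubelet replaces the
    # dashes inside the pod UID with underscores.
    return f"kubepods-{qos_class}-pod{pod_uid.replace('-', '_')}.slice"

print(pod_slice_name("630a9bdf-37dd-4584-a250-9a6924f25025"))
# kubepods-besteffort-pod630a9bdf_37dd_4584_a250_9a6924f25025.slice
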
Dec 12 18:21:25.410546 kubelet[4009]: I1212 18:21:25.410476 4009 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/630a9bdf-37dd-4584-a250-9a6924f25025-kube-proxy\") pod \"kube-proxy-l5ql4\" (UID: \"630a9bdf-37dd-4584-a250-9a6924f25025\") " pod="kube-system/kube-proxy-l5ql4" Dec 12 18:21:25.410857 kubelet[4009]: I1212 18:21:25.410661 4009 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/630a9bdf-37dd-4584-a250-9a6924f25025-lib-modules\") pod \"kube-proxy-l5ql4\" (UID: \"630a9bdf-37dd-4584-a250-9a6924f25025\") " pod="kube-system/kube-proxy-l5ql4" Dec 12 18:21:25.410857 kubelet[4009]: I1212 18:21:25.410685 4009 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gckfs\" (UniqueName: \"kubernetes.io/projected/630a9bdf-37dd-4584-a250-9a6924f25025-kube-api-access-gckfs\") pod \"kube-proxy-l5ql4\" (UID: \"630a9bdf-37dd-4584-a250-9a6924f25025\") " pod="kube-system/kube-proxy-l5ql4" Dec 12 18:21:25.410857 kubelet[4009]: I1212 18:21:25.410711 4009 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/630a9bdf-37dd-4584-a250-9a6924f25025-xtables-lock\") pod \"kube-proxy-l5ql4\" (UID: \"630a9bdf-37dd-4584-a250-9a6924f25025\") " pod="kube-system/kube-proxy-l5ql4" Dec 12 18:21:25.713155 containerd[2529]: time="2025-12-12T18:21:25.713117424Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-l5ql4,Uid:630a9bdf-37dd-4584-a250-9a6924f25025,Namespace:kube-system,Attempt:0,}" Dec 12 18:21:25.750213 systemd[1]: Created slice kubepods-besteffort-pod66d208a9_e0cc_4d4c_a77b_82c94b7863dc.slice - libcontainer container kubepods-besteffort-pod66d208a9_e0cc_4d4c_a77b_82c94b7863dc.slice. Dec 12 18:21:25.765102 containerd[2529]: time="2025-12-12T18:21:25.765067875Z" level=info msg="connecting to shim c8e0199b4f80a29c4cfe21ff6a9824e6931736a1ead4b378287102659d748c31" address="unix:///run/containerd/s/958bedec82e59d919402faa2f5d2f44f4fcd251437089530c954f0ed9e722e11" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:21:25.788688 systemd[1]: Started cri-containerd-c8e0199b4f80a29c4cfe21ff6a9824e6931736a1ead4b378287102659d748c31.scope - libcontainer container c8e0199b4f80a29c4cfe21ff6a9824e6931736a1ead4b378287102659d748c31. 
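The reconciler entries above attach four volumes to kube-proxy-l5ql4: the kube-proxy ConfigMap, the lib-modules and xtables-lock host paths, and a projected service-account token. A hedged sketch, again assuming the kubernetes Python client and cluster access, that lists the same volumes straight from the pod spec:

# List the volumes of the kube-proxy pod whose attachment is logged above.
# Assumes the kubernetes Python client and a kubeconfig with cluster access.
from kubernetes import client, config

config.load_kube_config()
pod = client.CoreV1Api().read_namespaced_pod("kube-proxy-l5ql4", "kube-system")
for vol in pod.spec.volumes:
    # Each volume sets exactly one source field (config_map, host_path,
    # projected, ...); report whichever one is populated.
    sources = [key for key, val in vol.to_dict().items() if val and key != "name"]
    print(vol.name, sources)
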
Dec 12 18:21:25.794000 audit: BPF prog-id=157 op=LOAD Dec 12 18:21:25.797254 kernel: kauditd_printk_skb: 32 callbacks suppressed Dec 12 18:21:25.797310 kernel: audit: type=1334 audit(1765563685.794:466): prog-id=157 op=LOAD Dec 12 18:21:25.797000 audit: BPF prog-id=158 op=LOAD Dec 12 18:21:25.806554 kernel: audit: type=1334 audit(1765563685.797:467): prog-id=158 op=LOAD Dec 12 18:21:25.806629 kernel: audit: type=1300 audit(1765563685.797:467): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000178238 a2=98 a3=0 items=0 ppid=4066 pid=4078 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:25.797000 audit[4078]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000178238 a2=98 a3=0 items=0 ppid=4066 pid=4078 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:25.811577 kernel: audit: type=1327 audit(1765563685.797:467): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6338653031393962346638306132396334636665323166663661393832 Dec 12 18:21:25.797000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6338653031393962346638306132396334636665323166663661393832 Dec 12 18:21:25.812895 kubelet[4009]: I1212 18:21:25.812872 4009 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/66d208a9-e0cc-4d4c-a77b-82c94b7863dc-var-lib-calico\") pod \"tigera-operator-7dcd859c48-64n74\" (UID: \"66d208a9-e0cc-4d4c-a77b-82c94b7863dc\") " pod="tigera-operator/tigera-operator-7dcd859c48-64n74" Dec 12 18:21:25.813072 kubelet[4009]: I1212 18:21:25.812907 4009 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-plzgv\" (UniqueName: \"kubernetes.io/projected/66d208a9-e0cc-4d4c-a77b-82c94b7863dc-kube-api-access-plzgv\") pod \"tigera-operator-7dcd859c48-64n74\" (UID: \"66d208a9-e0cc-4d4c-a77b-82c94b7863dc\") " pod="tigera-operator/tigera-operator-7dcd859c48-64n74" Dec 12 18:21:25.797000 audit: BPF prog-id=158 op=UNLOAD Dec 12 18:21:25.815561 kernel: audit: type=1334 audit(1765563685.797:468): prog-id=158 op=UNLOAD Dec 12 18:21:25.797000 audit[4078]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4066 pid=4078 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:25.830358 kernel: audit: type=1300 audit(1765563685.797:468): arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4066 pid=4078 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:25.830417 kernel: audit: type=1327 audit(1765563685.797:468): 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6338653031393962346638306132396334636665323166663661393832 Dec 12 18:21:25.797000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6338653031393962346638306132396334636665323166663661393832 Dec 12 18:21:25.797000 audit: BPF prog-id=159 op=LOAD Dec 12 18:21:25.836608 kernel: audit: type=1334 audit(1765563685.797:469): prog-id=159 op=LOAD Dec 12 18:21:25.836667 kernel: audit: type=1300 audit(1765563685.797:469): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000178488 a2=98 a3=0 items=0 ppid=4066 pid=4078 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:25.797000 audit[4078]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000178488 a2=98 a3=0 items=0 ppid=4066 pid=4078 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:25.841686 kernel: audit: type=1327 audit(1765563685.797:469): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6338653031393962346638306132396334636665323166663661393832 Dec 12 18:21:25.797000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6338653031393962346638306132396334636665323166663661393832 Dec 12 18:21:25.841808 containerd[2529]: time="2025-12-12T18:21:25.839713791Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-l5ql4,Uid:630a9bdf-37dd-4584-a250-9a6924f25025,Namespace:kube-system,Attempt:0,} returns sandbox id \"c8e0199b4f80a29c4cfe21ff6a9824e6931736a1ead4b378287102659d748c31\"" Dec 12 18:21:25.797000 audit: BPF prog-id=160 op=LOAD Dec 12 18:21:25.797000 audit[4078]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000178218 a2=98 a3=0 items=0 ppid=4066 pid=4078 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:25.797000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6338653031393962346638306132396334636665323166663661393832 Dec 12 18:21:25.797000 audit: BPF prog-id=160 op=UNLOAD Dec 12 18:21:25.797000 audit[4078]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4066 pid=4078 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:25.797000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6338653031393962346638306132396334636665323166663661393832 Dec 12 18:21:25.797000 audit: BPF prog-id=159 op=UNLOAD Dec 12 18:21:25.797000 audit[4078]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4066 pid=4078 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:25.797000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6338653031393962346638306132396334636665323166663661393832 Dec 12 18:21:25.798000 audit: BPF prog-id=161 op=LOAD Dec 12 18:21:25.798000 audit[4078]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001786e8 a2=98 a3=0 items=0 ppid=4066 pid=4078 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:25.798000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6338653031393962346638306132396334636665323166663661393832 Dec 12 18:21:25.848771 containerd[2529]: time="2025-12-12T18:21:25.848742925Z" level=info msg="CreateContainer within sandbox \"c8e0199b4f80a29c4cfe21ff6a9824e6931736a1ead4b378287102659d748c31\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 12 18:21:25.869534 containerd[2529]: time="2025-12-12T18:21:25.866842348Z" level=info msg="Container f9b8bfbebc2beccb9ad0786a87c2d5942d48b3f4b48717e2b41aca30fd46f18a: CDI devices from CRI Config.CDIDevices: []" Dec 12 18:21:25.871212 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1401178221.mount: Deactivated successfully. Dec 12 18:21:25.886399 containerd[2529]: time="2025-12-12T18:21:25.886371403Z" level=info msg="CreateContainer within sandbox \"c8e0199b4f80a29c4cfe21ff6a9824e6931736a1ead4b378287102659d748c31\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"f9b8bfbebc2beccb9ad0786a87c2d5942d48b3f4b48717e2b41aca30fd46f18a\"" Dec 12 18:21:25.888428 containerd[2529]: time="2025-12-12T18:21:25.886960564Z" level=info msg="StartContainer for \"f9b8bfbebc2beccb9ad0786a87c2d5942d48b3f4b48717e2b41aca30fd46f18a\"" Dec 12 18:21:25.888428 containerd[2529]: time="2025-12-12T18:21:25.888240241Z" level=info msg="connecting to shim f9b8bfbebc2beccb9ad0786a87c2d5942d48b3f4b48717e2b41aca30fd46f18a" address="unix:///run/containerd/s/958bedec82e59d919402faa2f5d2f44f4fcd251437089530c954f0ed9e722e11" protocol=ttrpc version=3 Dec 12 18:21:25.906699 systemd[1]: Started cri-containerd-f9b8bfbebc2beccb9ad0786a87c2d5942d48b3f4b48717e2b41aca30fd46f18a.scope - libcontainer container f9b8bfbebc2beccb9ad0786a87c2d5942d48b3f4b48717e2b41aca30fd46f18a. 
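Each runc invocation above emits the same trio of audit records: type=1334 (BPF) for program load or unload, type=1300 (SYSCALL, here syscall 321, bpf, and syscall 3, close, on x86_64), and type=1327 (PROCTITLE). A small sketch that tallies the load/unload pairs from a saved journal excerpt; the node.log path is a hypothetical example:

import re
from collections import Counter

# Tally BPF program load/unload audit events ("audit: BPF prog-id=... op=...")
# from a saved log excerpt such as the records above.
BPF_RE = re.compile(r"audit: BPF prog-id=(\d+) op=(LOAD|UNLOAD)")

ops = Counter()
loaded = set()
with open("node.log", encoding="utf-8", errors="replace") as fh:
    for line in fh:
        for prog_id, op in BPF_RE.findall(line):
            ops[op] += 1
            (loaded.add if op == "LOAD" else loaded.discard)(prog_id)

print(ops)
print("programs still loaded:", sorted(loaded, key=int))
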
Dec 12 18:21:25.941000 audit: BPF prog-id=162 op=LOAD Dec 12 18:21:25.941000 audit[4102]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000138488 a2=98 a3=0 items=0 ppid=4066 pid=4102 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:25.941000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6639623862666265626332626563636239616430373836613837633264 Dec 12 18:21:25.941000 audit: BPF prog-id=163 op=LOAD Dec 12 18:21:25.941000 audit[4102]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000138218 a2=98 a3=0 items=0 ppid=4066 pid=4102 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:25.941000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6639623862666265626332626563636239616430373836613837633264 Dec 12 18:21:25.941000 audit: BPF prog-id=163 op=UNLOAD Dec 12 18:21:25.941000 audit[4102]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4066 pid=4102 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:25.941000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6639623862666265626332626563636239616430373836613837633264 Dec 12 18:21:25.941000 audit: BPF prog-id=162 op=UNLOAD Dec 12 18:21:25.941000 audit[4102]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4066 pid=4102 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:25.941000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6639623862666265626332626563636239616430373836613837633264 Dec 12 18:21:25.941000 audit: BPF prog-id=164 op=LOAD Dec 12 18:21:25.941000 audit[4102]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001386e8 a2=98 a3=0 items=0 ppid=4066 pid=4102 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:25.941000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6639623862666265626332626563636239616430373836613837633264 Dec 12 18:21:25.968195 containerd[2529]: time="2025-12-12T18:21:25.968065010Z" level=info msg="StartContainer for 
\"f9b8bfbebc2beccb9ad0786a87c2d5942d48b3f4b48717e2b41aca30fd46f18a\" returns successfully" Dec 12 18:21:26.053289 containerd[2529]: time="2025-12-12T18:21:26.053164202Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-64n74,Uid:66d208a9-e0cc-4d4c-a77b-82c94b7863dc,Namespace:tigera-operator,Attempt:0,}" Dec 12 18:21:26.093000 audit[4167]: NETFILTER_CFG table=mangle:57 family=2 entries=1 op=nft_register_chain pid=4167 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:21:26.093000 audit[4167]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe501ce1a0 a2=0 a3=7ffe501ce18c items=0 ppid=4115 pid=4167 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:26.093000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Dec 12 18:21:26.097000 audit[4168]: NETFILTER_CFG table=mangle:58 family=10 entries=1 op=nft_register_chain pid=4168 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:21:26.097000 audit[4168]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc3e79ccb0 a2=0 a3=7ffc3e79cc9c items=0 ppid=4115 pid=4168 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:26.097000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Dec 12 18:21:26.098000 audit[4172]: NETFILTER_CFG table=nat:59 family=10 entries=1 op=nft_register_chain pid=4172 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:21:26.098000 audit[4172]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff35b0b650 a2=0 a3=7fff35b0b63c items=0 ppid=4115 pid=4172 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:26.098000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Dec 12 18:21:26.102000 audit[4173]: NETFILTER_CFG table=filter:60 family=10 entries=1 op=nft_register_chain pid=4173 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:21:26.102000 audit[4173]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd7985d5b0 a2=0 a3=7ffd7985d59c items=0 ppid=4115 pid=4173 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:26.102000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Dec 12 18:21:26.105000 audit[4170]: NETFILTER_CFG table=nat:61 family=2 entries=1 op=nft_register_chain pid=4170 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:21:26.105000 audit[4170]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe57dd6080 a2=0 a3=7ffe57dd606c items=0 ppid=4115 pid=4170 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:26.105000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Dec 12 18:21:26.111000 audit[4181]: NETFILTER_CFG table=filter:62 family=2 entries=1 op=nft_register_chain pid=4181 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:21:26.111000 audit[4181]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd8475db60 a2=0 a3=7ffd8475db4c items=0 ppid=4115 pid=4181 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:26.111000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Dec 12 18:21:26.114870 containerd[2529]: time="2025-12-12T18:21:26.114819402Z" level=info msg="connecting to shim b1054b6e0de33db406652cc812a985519ea31b4bef67c849105666c275f32c80" address="unix:///run/containerd/s/b8e4dd1cddc05f4f75deadbca268e756feb75ce66df41b58bdb74d4bd60e2d58" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:21:26.136714 systemd[1]: Started cri-containerd-b1054b6e0de33db406652cc812a985519ea31b4bef67c849105666c275f32c80.scope - libcontainer container b1054b6e0de33db406652cc812a985519ea31b4bef67c849105666c275f32c80. Dec 12 18:21:26.144000 audit: BPF prog-id=165 op=LOAD Dec 12 18:21:26.144000 audit: BPF prog-id=166 op=LOAD Dec 12 18:21:26.144000 audit[4197]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178238 a2=98 a3=0 items=0 ppid=4186 pid=4197 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:26.144000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6231303534623665306465333364623430363635326363383132613938 Dec 12 18:21:26.144000 audit: BPF prog-id=166 op=UNLOAD Dec 12 18:21:26.144000 audit[4197]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4186 pid=4197 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:26.144000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6231303534623665306465333364623430363635326363383132613938 Dec 12 18:21:26.144000 audit: BPF prog-id=167 op=LOAD Dec 12 18:21:26.144000 audit[4197]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178488 a2=98 a3=0 items=0 ppid=4186 pid=4197 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:26.144000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6231303534623665306465333364623430363635326363383132613938 Dec 12 18:21:26.144000 audit: BPF 
prog-id=168 op=LOAD Dec 12 18:21:26.144000 audit[4197]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000178218 a2=98 a3=0 items=0 ppid=4186 pid=4197 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:26.144000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6231303534623665306465333364623430363635326363383132613938 Dec 12 18:21:26.144000 audit: BPF prog-id=168 op=UNLOAD Dec 12 18:21:26.144000 audit[4197]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4186 pid=4197 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:26.144000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6231303534623665306465333364623430363635326363383132613938 Dec 12 18:21:26.144000 audit: BPF prog-id=167 op=UNLOAD Dec 12 18:21:26.144000 audit[4197]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4186 pid=4197 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:26.144000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6231303534623665306465333364623430363635326363383132613938 Dec 12 18:21:26.144000 audit: BPF prog-id=169 op=LOAD Dec 12 18:21:26.144000 audit[4197]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001786e8 a2=98 a3=0 items=0 ppid=4186 pid=4197 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:26.144000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6231303534623665306465333364623430363635326363383132613938 Dec 12 18:21:26.184036 containerd[2529]: time="2025-12-12T18:21:26.183961106Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-64n74,Uid:66d208a9-e0cc-4d4c-a77b-82c94b7863dc,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"b1054b6e0de33db406652cc812a985519ea31b4bef67c849105666c275f32c80\"" Dec 12 18:21:26.185947 containerd[2529]: time="2025-12-12T18:21:26.185910321Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Dec 12 18:21:26.197000 audit[4223]: NETFILTER_CFG table=filter:63 family=2 entries=1 op=nft_register_chain pid=4223 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:21:26.197000 audit[4223]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffe3e1c68e0 a2=0 a3=7ffe3e1c68cc items=0 ppid=4115 pid=4223 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:26.197000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Dec 12 18:21:26.200000 audit[4225]: NETFILTER_CFG table=filter:64 family=2 entries=1 op=nft_register_rule pid=4225 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:21:26.200000 audit[4225]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7fff6c2c5e60 a2=0 a3=7fff6c2c5e4c items=0 ppid=4115 pid=4225 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:26.200000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Dec 12 18:21:26.204000 audit[4228]: NETFILTER_CFG table=filter:65 family=2 entries=1 op=nft_register_rule pid=4228 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:21:26.204000 audit[4228]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7fffd0cc9550 a2=0 a3=7fffd0cc953c items=0 ppid=4115 pid=4228 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:26.204000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Dec 12 18:21:26.205000 audit[4229]: NETFILTER_CFG table=filter:66 family=2 entries=1 op=nft_register_chain pid=4229 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:21:26.205000 audit[4229]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffec2449070 a2=0 a3=7ffec244905c items=0 ppid=4115 pid=4229 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:26.205000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Dec 12 18:21:26.207000 audit[4231]: NETFILTER_CFG table=filter:67 family=2 entries=1 op=nft_register_rule pid=4231 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:21:26.207000 audit[4231]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffec1ab0150 a2=0 a3=7ffec1ab013c items=0 ppid=4115 pid=4231 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:26.207000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Dec 12 18:21:26.208000 audit[4232]: NETFILTER_CFG table=filter:68 family=2 entries=1 op=nft_register_chain pid=4232 
subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:21:26.208000 audit[4232]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd5c69d7e0 a2=0 a3=7ffd5c69d7cc items=0 ppid=4115 pid=4232 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:26.208000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Dec 12 18:21:26.211000 audit[4234]: NETFILTER_CFG table=filter:69 family=2 entries=1 op=nft_register_rule pid=4234 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:21:26.211000 audit[4234]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffc674b1160 a2=0 a3=7ffc674b114c items=0 ppid=4115 pid=4234 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:26.211000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Dec 12 18:21:26.215000 audit[4237]: NETFILTER_CFG table=filter:70 family=2 entries=1 op=nft_register_rule pid=4237 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:21:26.215000 audit[4237]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7fff3dca08e0 a2=0 a3=7fff3dca08cc items=0 ppid=4115 pid=4237 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:26.215000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Dec 12 18:21:26.216000 audit[4238]: NETFILTER_CFG table=filter:71 family=2 entries=1 op=nft_register_chain pid=4238 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:21:26.216000 audit[4238]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe3993fad0 a2=0 a3=7ffe3993fabc items=0 ppid=4115 pid=4238 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:26.216000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Dec 12 18:21:26.218000 audit[4240]: NETFILTER_CFG table=filter:72 family=2 entries=1 op=nft_register_rule pid=4240 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:21:26.218000 audit[4240]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fff7c9cdb70 a2=0 a3=7fff7c9cdb5c items=0 ppid=4115 pid=4240 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:26.218000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Dec 12 18:21:26.220000 audit[4241]: NETFILTER_CFG table=filter:73 family=2 entries=1 op=nft_register_chain pid=4241 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:21:26.220000 audit[4241]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe27b1e660 a2=0 a3=7ffe27b1e64c items=0 ppid=4115 pid=4241 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:26.220000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Dec 12 18:21:26.222000 audit[4243]: NETFILTER_CFG table=filter:74 family=2 entries=1 op=nft_register_rule pid=4243 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:21:26.222000 audit[4243]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe2ee58f90 a2=0 a3=7ffe2ee58f7c items=0 ppid=4115 pid=4243 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:26.222000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Dec 12 18:21:26.225000 audit[4246]: NETFILTER_CFG table=filter:75 family=2 entries=1 op=nft_register_rule pid=4246 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:21:26.225000 audit[4246]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffd7cc626c0 a2=0 a3=7ffd7cc626ac items=0 ppid=4115 pid=4246 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:26.225000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Dec 12 18:21:26.229000 audit[4249]: NETFILTER_CFG table=filter:76 family=2 entries=1 op=nft_register_rule pid=4249 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:21:26.229000 audit[4249]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffcb31cf140 a2=0 a3=7ffcb31cf12c items=0 ppid=4115 pid=4249 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:26.229000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Dec 12 18:21:26.230000 audit[4250]: NETFILTER_CFG table=nat:77 family=2 entries=1 op=nft_register_chain pid=4250 subj=system_u:system_r:kernel_t:s0 comm="iptables" 
Dec 12 18:21:26.230000 audit[4250]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffc0bae68a0 a2=0 a3=7ffc0bae688c items=0 ppid=4115 pid=4250 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:26.230000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Dec 12 18:21:26.232000 audit[4252]: NETFILTER_CFG table=nat:78 family=2 entries=1 op=nft_register_rule pid=4252 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:21:26.232000 audit[4252]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffe3e234870 a2=0 a3=7ffe3e23485c items=0 ppid=4115 pid=4252 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:26.232000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 12 18:21:26.235000 audit[4255]: NETFILTER_CFG table=nat:79 family=2 entries=1 op=nft_register_rule pid=4255 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:21:26.235000 audit[4255]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffd314cf4c0 a2=0 a3=7ffd314cf4ac items=0 ppid=4115 pid=4255 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:26.235000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 12 18:21:26.236000 audit[4256]: NETFILTER_CFG table=nat:80 family=2 entries=1 op=nft_register_chain pid=4256 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:21:26.236000 audit[4256]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc32a71c00 a2=0 a3=7ffc32a71bec items=0 ppid=4115 pid=4256 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:26.236000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Dec 12 18:21:26.239000 audit[4258]: NETFILTER_CFG table=nat:81 family=2 entries=1 op=nft_register_rule pid=4258 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:21:26.239000 audit[4258]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7ffdfaf353e0 a2=0 a3=7ffdfaf353cc items=0 ppid=4115 pid=4258 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:26.239000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Dec 12 18:21:26.319000 
audit[4264]: NETFILTER_CFG table=filter:82 family=2 entries=8 op=nft_register_rule pid=4264 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:21:26.319000 audit[4264]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffd13483430 a2=0 a3=7ffd1348341c items=0 ppid=4115 pid=4264 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:26.319000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:21:26.345000 audit[4264]: NETFILTER_CFG table=nat:83 family=2 entries=14 op=nft_register_chain pid=4264 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:21:26.345000 audit[4264]: SYSCALL arch=c000003e syscall=46 success=yes exit=5508 a0=3 a1=7ffd13483430 a2=0 a3=7ffd1348341c items=0 ppid=4115 pid=4264 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:26.345000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:21:26.346000 audit[4269]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=4269 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:21:26.346000 audit[4269]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffd5b072c00 a2=0 a3=7ffd5b072bec items=0 ppid=4115 pid=4269 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:26.346000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Dec 12 18:21:26.349000 audit[4271]: NETFILTER_CFG table=filter:85 family=10 entries=2 op=nft_register_chain pid=4271 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:21:26.349000 audit[4271]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffd7ba1dca0 a2=0 a3=7ffd7ba1dc8c items=0 ppid=4115 pid=4271 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:26.349000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Dec 12 18:21:26.357000 audit[4274]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_rule pid=4274 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:21:26.357000 audit[4274]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7fff63c16d00 a2=0 a3=7fff63c16cec items=0 ppid=4115 pid=4274 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:26.357000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Dec 12 18:21:26.360000 audit[4275]: NETFILTER_CFG table=filter:87 family=10 entries=1 op=nft_register_chain pid=4275 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:21:26.360000 audit[4275]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe3aa07490 a2=0 a3=7ffe3aa0747c items=0 ppid=4115 pid=4275 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:26.360000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Dec 12 18:21:26.364000 audit[4277]: NETFILTER_CFG table=filter:88 family=10 entries=1 op=nft_register_rule pid=4277 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:21:26.364000 audit[4277]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffead931db0 a2=0 a3=7ffead931d9c items=0 ppid=4115 pid=4277 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:26.364000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Dec 12 18:21:26.368000 audit[4278]: NETFILTER_CFG table=filter:89 family=10 entries=1 op=nft_register_chain pid=4278 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:21:26.368000 audit[4278]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc84365b40 a2=0 a3=7ffc84365b2c items=0 ppid=4115 pid=4278 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:26.368000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Dec 12 18:21:26.373000 audit[4280]: NETFILTER_CFG table=filter:90 family=10 entries=1 op=nft_register_rule pid=4280 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:21:26.373000 audit[4280]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7fff6c13d460 a2=0 a3=7fff6c13d44c items=0 ppid=4115 pid=4280 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:26.373000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Dec 12 18:21:26.377000 audit[4283]: NETFILTER_CFG table=filter:91 family=10 entries=2 op=nft_register_chain pid=4283 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:21:26.377000 audit[4283]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7ffc8b5c0db0 a2=0 
a3=7ffc8b5c0d9c items=0 ppid=4115 pid=4283 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:26.377000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Dec 12 18:21:26.379424 kubelet[4009]: I1212 18:21:26.379354 4009 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-l5ql4" podStartSLOduration=1.379338071 podStartE2EDuration="1.379338071s" podCreationTimestamp="2025-12-12 18:21:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 18:21:26.378923872 +0000 UTC m=+6.224807523" watchObservedRunningTime="2025-12-12 18:21:26.379338071 +0000 UTC m=+6.225221721" Dec 12 18:21:26.380000 audit[4284]: NETFILTER_CFG table=filter:92 family=10 entries=1 op=nft_register_chain pid=4284 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:21:26.380000 audit[4284]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe6d901280 a2=0 a3=7ffe6d90126c items=0 ppid=4115 pid=4284 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:26.380000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Dec 12 18:21:26.382000 audit[4286]: NETFILTER_CFG table=filter:93 family=10 entries=1 op=nft_register_rule pid=4286 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:21:26.382000 audit[4286]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffdbca7e380 a2=0 a3=7ffdbca7e36c items=0 ppid=4115 pid=4286 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:26.382000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Dec 12 18:21:26.384000 audit[4287]: NETFILTER_CFG table=filter:94 family=10 entries=1 op=nft_register_chain pid=4287 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:21:26.384000 audit[4287]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc2c96e900 a2=0 a3=7ffc2c96e8ec items=0 ppid=4115 pid=4287 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:26.384000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Dec 12 18:21:26.387000 audit[4289]: NETFILTER_CFG table=filter:95 family=10 entries=1 op=nft_register_rule pid=4289 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:21:26.387000 audit[4289]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe95ccb760 a2=0 a3=7ffe95ccb74c 
items=0 ppid=4115 pid=4289 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:26.387000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Dec 12 18:21:26.390000 audit[4292]: NETFILTER_CFG table=filter:96 family=10 entries=1 op=nft_register_rule pid=4292 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:21:26.390000 audit[4292]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffc88464600 a2=0 a3=7ffc884645ec items=0 ppid=4115 pid=4292 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:26.390000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Dec 12 18:21:26.393000 audit[4295]: NETFILTER_CFG table=filter:97 family=10 entries=1 op=nft_register_rule pid=4295 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:21:26.393000 audit[4295]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff7f944c30 a2=0 a3=7fff7f944c1c items=0 ppid=4115 pid=4295 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:26.393000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Dec 12 18:21:26.394000 audit[4296]: NETFILTER_CFG table=nat:98 family=10 entries=1 op=nft_register_chain pid=4296 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:21:26.394000 audit[4296]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffd1404daa0 a2=0 a3=7ffd1404da8c items=0 ppid=4115 pid=4296 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:26.394000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Dec 12 18:21:26.396000 audit[4298]: NETFILTER_CFG table=nat:99 family=10 entries=1 op=nft_register_rule pid=4298 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:21:26.396000 audit[4298]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7fff1c40ad30 a2=0 a3=7fff1c40ad1c items=0 ppid=4115 pid=4298 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:26.396000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 12 18:21:26.399000 audit[4301]: NETFILTER_CFG table=nat:100 family=10 entries=1 op=nft_register_rule pid=4301 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:21:26.399000 audit[4301]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffce173c6b0 a2=0 a3=7ffce173c69c items=0 ppid=4115 pid=4301 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:26.399000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 12 18:21:26.401000 audit[4302]: NETFILTER_CFG table=nat:101 family=10 entries=1 op=nft_register_chain pid=4302 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:21:26.401000 audit[4302]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdf0f24540 a2=0 a3=7ffdf0f2452c items=0 ppid=4115 pid=4302 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:26.401000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Dec 12 18:21:26.403000 audit[4304]: NETFILTER_CFG table=nat:102 family=10 entries=2 op=nft_register_chain pid=4304 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:21:26.403000 audit[4304]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffd0eed7860 a2=0 a3=7ffd0eed784c items=0 ppid=4115 pid=4304 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:26.403000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Dec 12 18:21:26.404000 audit[4305]: NETFILTER_CFG table=filter:103 family=10 entries=1 op=nft_register_chain pid=4305 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:21:26.404000 audit[4305]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe29e6c130 a2=0 a3=7ffe29e6c11c items=0 ppid=4115 pid=4305 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:26.404000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Dec 12 18:21:26.406000 audit[4307]: NETFILTER_CFG table=filter:104 family=10 entries=1 op=nft_register_rule pid=4307 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:21:26.406000 audit[4307]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffcdc0e2040 a2=0 a3=7ffcdc0e202c items=0 ppid=4115 pid=4307 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:26.406000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 12 18:21:26.409000 audit[4310]: NETFILTER_CFG table=filter:105 family=10 entries=1 op=nft_register_rule pid=4310 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:21:26.409000 audit[4310]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffec0e3bf00 a2=0 a3=7ffec0e3beec items=0 ppid=4115 pid=4310 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:26.409000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 12 18:21:26.412000 audit[4312]: NETFILTER_CFG table=filter:106 family=10 entries=3 op=nft_register_rule pid=4312 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Dec 12 18:21:26.412000 audit[4312]: SYSCALL arch=c000003e syscall=46 success=yes exit=2088 a0=3 a1=7ffe88d652c0 a2=0 a3=7ffe88d652ac items=0 ppid=4115 pid=4312 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:26.412000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:21:26.412000 audit[4312]: NETFILTER_CFG table=nat:107 family=10 entries=7 op=nft_register_chain pid=4312 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Dec 12 18:21:26.412000 audit[4312]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7ffe88d652c0 a2=0 a3=7ffe88d652ac items=0 ppid=4115 pid=4312 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:26.412000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:21:27.732881 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2926434849.mount: Deactivated successfully. 
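The NETFILTER_CFG burst above (pids 4167 through 4312, all children of ppid 4115) is kube-proxy registering its base chains and jump rules (KUBE-PROXY-CANARY, KUBE-EXTERNAL-SERVICES, KUBE-NODEPORTS, KUBE-SERVICES, KUBE-FORWARD, KUBE-PROXY-FIREWALL, KUBE-POSTROUTING, KUBE-FIREWALL) in the mangle, filter and nat tables for both IPv4 (family=2) and IPv6 (family=10), via the nft-backed xtables tools and iptables-restore. A rough Python sketch for summarizing such records from a saved copy of this journal output; the path audit.log is an assumption:

    import re
    from collections import Counter

    # Summarize netfilter configuration audit events by address family, table and
    # operation. "audit.log" is an assumed path for a saved copy of these lines.
    record = re.compile(r"NETFILTER_CFG table=(\w+):\d+ family=(\d+) entries=(\d+) op=(\w+)")
    families = {"2": "IPv4", "10": "IPv6"}

    counts = Counter()
    with open("audit.log") as fh:
        for line in fh:
            for table, family, entries, op in record.findall(line):
                counts[(families.get(family, family), table, op)] += int(entries)

    for (family, table, op), total in sorted(counts.items()):
        print(f"{family:4} {table:7} {op:22} {total}")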
Dec 12 18:21:28.313533 containerd[2529]: time="2025-12-12T18:21:28.313490973Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:21:28.318022 containerd[2529]: time="2025-12-12T18:21:28.317987842Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=23558205" Dec 12 18:21:28.320784 containerd[2529]: time="2025-12-12T18:21:28.320743209Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:21:28.324822 containerd[2529]: time="2025-12-12T18:21:28.324364728Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:21:28.324822 containerd[2529]: time="2025-12-12T18:21:28.324734928Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 2.138794097s" Dec 12 18:21:28.324822 containerd[2529]: time="2025-12-12T18:21:28.324758351Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Dec 12 18:21:28.332549 containerd[2529]: time="2025-12-12T18:21:28.332508623Z" level=info msg="CreateContainer within sandbox \"b1054b6e0de33db406652cc812a985519ea31b4bef67c849105666c275f32c80\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Dec 12 18:21:28.355002 containerd[2529]: time="2025-12-12T18:21:28.350954878Z" level=info msg="Container d88c32443a9cc84174f5bd2b98ca61cc0def667e2c1de31818f527a27918a296: CDI devices from CRI Config.CDIDevices: []" Dec 12 18:21:28.370183 containerd[2529]: time="2025-12-12T18:21:28.370157961Z" level=info msg="CreateContainer within sandbox \"b1054b6e0de33db406652cc812a985519ea31b4bef67c849105666c275f32c80\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"d88c32443a9cc84174f5bd2b98ca61cc0def667e2c1de31818f527a27918a296\"" Dec 12 18:21:28.370828 containerd[2529]: time="2025-12-12T18:21:28.370808041Z" level=info msg="StartContainer for \"d88c32443a9cc84174f5bd2b98ca61cc0def667e2c1de31818f527a27918a296\"" Dec 12 18:21:28.371826 containerd[2529]: time="2025-12-12T18:21:28.371794966Z" level=info msg="connecting to shim d88c32443a9cc84174f5bd2b98ca61cc0def667e2c1de31818f527a27918a296" address="unix:///run/containerd/s/b8e4dd1cddc05f4f75deadbca268e756feb75ce66df41b58bdb74d4bd60e2d58" protocol=ttrpc version=3 Dec 12 18:21:28.392709 systemd[1]: Started cri-containerd-d88c32443a9cc84174f5bd2b98ca61cc0def667e2c1de31818f527a27918a296.scope - libcontainer container d88c32443a9cc84174f5bd2b98ca61cc0def667e2c1de31818f527a27918a296. 
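The containerd entries above document the tigera-operator image pull: quay.io/tigera/operator:v1.38.7 is resolved to its repo digest, reported as pulled in about 2.14 s (size "25057686" bytes), and then used for CreateContainer/StartContainer in sandbox b1054b6e0de33db406652cc812a985519ea31b4bef67c849105666c275f32c80. A small Python sketch for extracting pull durations from such "Pulled image" lines, assuming this journal text has been saved to containerd.log (an assumed filename):

    import re

    # Extract image pull durations from containerd "Pulled image ... in <duration>"
    # lines. "containerd.log" is an assumed path for a saved copy of this output.
    pulled = re.compile(r'Pulled image \\"([^\\]+)\\".* in ([0-9.]+)s')

    with open("containerd.log") as fh:
        for line in fh:
            for image, seconds in pulled.findall(line):
                print(f"{image}\t{float(seconds):.2f}s")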
Dec 12 18:21:28.400000 audit: BPF prog-id=170 op=LOAD Dec 12 18:21:28.400000 audit: BPF prog-id=171 op=LOAD Dec 12 18:21:28.400000 audit[4321]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178238 a2=98 a3=0 items=0 ppid=4186 pid=4321 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:28.400000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6438386333323434336139636338343137346635626432623938636136 Dec 12 18:21:28.400000 audit: BPF prog-id=171 op=UNLOAD Dec 12 18:21:28.400000 audit[4321]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4186 pid=4321 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:28.400000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6438386333323434336139636338343137346635626432623938636136 Dec 12 18:21:28.400000 audit: BPF prog-id=172 op=LOAD Dec 12 18:21:28.400000 audit[4321]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178488 a2=98 a3=0 items=0 ppid=4186 pid=4321 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:28.400000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6438386333323434336139636338343137346635626432623938636136 Dec 12 18:21:28.400000 audit: BPF prog-id=173 op=LOAD Dec 12 18:21:28.400000 audit[4321]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000178218 a2=98 a3=0 items=0 ppid=4186 pid=4321 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:28.400000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6438386333323434336139636338343137346635626432623938636136 Dec 12 18:21:28.400000 audit: BPF prog-id=173 op=UNLOAD Dec 12 18:21:28.400000 audit[4321]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4186 pid=4321 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:28.400000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6438386333323434336139636338343137346635626432623938636136 Dec 12 18:21:28.400000 audit: BPF prog-id=172 op=UNLOAD Dec 12 18:21:28.400000 audit[4321]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4186 pid=4321 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:28.400000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6438386333323434336139636338343137346635626432623938636136 Dec 12 18:21:28.400000 audit: BPF prog-id=174 op=LOAD Dec 12 18:21:28.400000 audit[4321]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001786e8 a2=98 a3=0 items=0 ppid=4186 pid=4321 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:28.400000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6438386333323434336139636338343137346635626432623938636136 Dec 12 18:21:28.427460 containerd[2529]: time="2025-12-12T18:21:28.427419960Z" level=info msg="StartContainer for \"d88c32443a9cc84174f5bd2b98ca61cc0def667e2c1de31818f527a27918a296\" returns successfully" Dec 12 18:21:31.431841 kubelet[4009]: I1212 18:21:31.431779 4009 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-64n74" podStartSLOduration=4.291263212 podStartE2EDuration="6.431751207s" podCreationTimestamp="2025-12-12 18:21:25 +0000 UTC" firstStartedPulling="2025-12-12 18:21:26.184974174 +0000 UTC m=+6.030857818" lastFinishedPulling="2025-12-12 18:21:28.325462164 +0000 UTC m=+8.171345813" observedRunningTime="2025-12-12 18:21:29.385899024 +0000 UTC m=+9.231782675" watchObservedRunningTime="2025-12-12 18:21:31.431751207 +0000 UTC m=+11.277634859" Dec 12 18:21:32.302778 sudo[3005]: pam_unix(sudo:session): session closed for user root Dec 12 18:21:32.311770 kernel: kauditd_printk_skb: 224 callbacks suppressed Dec 12 18:21:32.311876 kernel: audit: type=1106 audit(1765563692.301:546): pid=3005 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 12 18:21:32.301000 audit[3005]: USER_END pid=3005 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 12 18:21:32.301000 audit[3005]: CRED_DISP pid=3005 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 12 18:21:32.325542 kernel: audit: type=1104 audit(1765563692.301:547): pid=3005 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Dec 12 18:21:32.412499 sshd[3004]: Connection closed by 10.200.16.10 port 44490 Dec 12 18:21:32.413267 sshd-session[3001]: pam_unix(sshd:session): session closed for user core Dec 12 18:21:32.424624 kernel: audit: type=1106 audit(1765563692.412:548): pid=3001 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 18:21:32.412000 audit[3001]: USER_END pid=3001 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 18:21:32.419986 systemd[1]: sshd@6-10.200.8.12:22-10.200.16.10:44490.service: Deactivated successfully. Dec 12 18:21:32.413000 audit[3001]: CRED_DISP pid=3001 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 18:21:32.422730 systemd[1]: session-9.scope: Deactivated successfully. Dec 12 18:21:32.423002 systemd[1]: session-9.scope: Consumed 5.114s CPU time, 229M memory peak. Dec 12 18:21:32.429870 systemd-logind[2512]: Session 9 logged out. Waiting for processes to exit. Dec 12 18:21:32.430902 systemd-logind[2512]: Removed session 9. Dec 12 18:21:32.413000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.200.8.12:22-10.200.16.10:44490 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:21:32.436544 kernel: audit: type=1104 audit(1765563692.413:549): pid=3001 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 18:21:32.436593 kernel: audit: type=1131 audit(1765563692.413:550): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.200.8.12:22-10.200.16.10:44490 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 18:21:33.425000 audit[4403]: NETFILTER_CFG table=filter:108 family=2 entries=15 op=nft_register_rule pid=4403 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:21:33.431640 kernel: audit: type=1325 audit(1765563693.425:551): table=filter:108 family=2 entries=15 op=nft_register_rule pid=4403 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:21:33.425000 audit[4403]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffdd5c05bc0 a2=0 a3=7ffdd5c05bac items=0 ppid=4115 pid=4403 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:33.443617 kernel: audit: type=1300 audit(1765563693.425:551): arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffdd5c05bc0 a2=0 a3=7ffdd5c05bac items=0 ppid=4115 pid=4403 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:33.425000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:21:33.457568 kernel: audit: type=1327 audit(1765563693.425:551): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:21:33.431000 audit[4403]: NETFILTER_CFG table=nat:109 family=2 entries=12 op=nft_register_rule pid=4403 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:21:33.464548 kernel: audit: type=1325 audit(1765563693.431:552): table=nat:109 family=2 entries=12 op=nft_register_rule pid=4403 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:21:33.431000 audit[4403]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffdd5c05bc0 a2=0 a3=0 items=0 ppid=4115 pid=4403 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:33.475538 kernel: audit: type=1300 audit(1765563693.431:552): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffdd5c05bc0 a2=0 a3=0 items=0 ppid=4115 pid=4403 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:33.431000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:21:33.448000 audit[4405]: NETFILTER_CFG table=filter:110 family=2 entries=16 op=nft_register_rule pid=4405 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:21:33.448000 audit[4405]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffedb10fc40 a2=0 a3=7ffedb10fc2c items=0 ppid=4115 pid=4405 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:33.448000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:21:33.459000 audit[4405]: NETFILTER_CFG table=nat:111 family=2 entries=12 op=nft_register_rule pid=4405 
subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:21:33.459000 audit[4405]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffedb10fc40 a2=0 a3=0 items=0 ppid=4115 pid=4405 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:33.459000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:21:35.469000 audit[4407]: NETFILTER_CFG table=filter:112 family=2 entries=17 op=nft_register_rule pid=4407 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:21:35.469000 audit[4407]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7fffdcb0e180 a2=0 a3=7fffdcb0e16c items=0 ppid=4115 pid=4407 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:35.469000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:21:35.474000 audit[4407]: NETFILTER_CFG table=nat:113 family=2 entries=12 op=nft_register_rule pid=4407 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:21:35.474000 audit[4407]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fffdcb0e180 a2=0 a3=0 items=0 ppid=4115 pid=4407 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:35.474000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:21:35.483000 audit[4409]: NETFILTER_CFG table=filter:114 family=2 entries=18 op=nft_register_rule pid=4409 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:21:35.483000 audit[4409]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffe72540510 a2=0 a3=7ffe725404fc items=0 ppid=4115 pid=4409 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:35.483000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:21:35.488000 audit[4409]: NETFILTER_CFG table=nat:115 family=2 entries=12 op=nft_register_rule pid=4409 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:21:35.488000 audit[4409]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe72540510 a2=0 a3=0 items=0 ppid=4115 pid=4409 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:35.488000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:21:36.499000 audit[4411]: NETFILTER_CFG table=filter:116 family=2 entries=19 op=nft_register_rule pid=4411 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:21:36.499000 audit[4411]: SYSCALL arch=c000003e syscall=46 success=yes 
exit=7480 a0=3 a1=7ffdb4e8c2c0 a2=0 a3=7ffdb4e8c2ac items=0 ppid=4115 pid=4411 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:36.499000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:21:36.505000 audit[4411]: NETFILTER_CFG table=nat:117 family=2 entries=12 op=nft_register_rule pid=4411 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:21:36.505000 audit[4411]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffdb4e8c2c0 a2=0 a3=0 items=0 ppid=4115 pid=4411 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:36.505000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:21:37.254280 systemd[1]: Created slice kubepods-besteffort-podca55b33c_52c2_4807_9909_ab9d5bda037b.slice - libcontainer container kubepods-besteffort-podca55b33c_52c2_4807_9909_ab9d5bda037b.slice. Dec 12 18:21:37.285134 kubelet[4009]: I1212 18:21:37.285015 4009 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqgcw\" (UniqueName: \"kubernetes.io/projected/ca55b33c-52c2-4807-9909-ab9d5bda037b-kube-api-access-pqgcw\") pod \"calico-typha-6b5c8d6c49-skhvl\" (UID: \"ca55b33c-52c2-4807-9909-ab9d5bda037b\") " pod="calico-system/calico-typha-6b5c8d6c49-skhvl" Dec 12 18:21:37.285134 kubelet[4009]: I1212 18:21:37.285060 4009 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ca55b33c-52c2-4807-9909-ab9d5bda037b-tigera-ca-bundle\") pod \"calico-typha-6b5c8d6c49-skhvl\" (UID: \"ca55b33c-52c2-4807-9909-ab9d5bda037b\") " pod="calico-system/calico-typha-6b5c8d6c49-skhvl" Dec 12 18:21:37.285134 kubelet[4009]: I1212 18:21:37.285078 4009 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/ca55b33c-52c2-4807-9909-ab9d5bda037b-typha-certs\") pod \"calico-typha-6b5c8d6c49-skhvl\" (UID: \"ca55b33c-52c2-4807-9909-ab9d5bda037b\") " pod="calico-system/calico-typha-6b5c8d6c49-skhvl" Dec 12 18:21:37.519482 systemd[1]: Created slice kubepods-besteffort-podfa9bac96_7d43_4356_a47d_4f769cbefc78.slice - libcontainer container kubepods-besteffort-podfa9bac96_7d43_4356_a47d_4f769cbefc78.slice. 
Dec 12 18:21:37.534744 kernel: kauditd_printk_skb: 25 callbacks suppressed Dec 12 18:21:37.534844 kernel: audit: type=1325 audit(1765563697.529:561): table=filter:118 family=2 entries=21 op=nft_register_rule pid=4415 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:21:37.529000 audit[4415]: NETFILTER_CFG table=filter:118 family=2 entries=21 op=nft_register_rule pid=4415 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:21:37.529000 audit[4415]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffc28bee260 a2=0 a3=7ffc28bee24c items=0 ppid=4115 pid=4415 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:37.529000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:21:37.543155 kernel: audit: type=1300 audit(1765563697.529:561): arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffc28bee260 a2=0 a3=7ffc28bee24c items=0 ppid=4115 pid=4415 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:37.543294 kernel: audit: type=1327 audit(1765563697.529:561): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:21:37.535000 audit[4415]: NETFILTER_CFG table=nat:119 family=2 entries=12 op=nft_register_rule pid=4415 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:21:37.545651 kernel: audit: type=1325 audit(1765563697.535:562): table=nat:119 family=2 entries=12 op=nft_register_rule pid=4415 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:21:37.535000 audit[4415]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc28bee260 a2=0 a3=0 items=0 ppid=4115 pid=4415 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:37.549448 kernel: audit: type=1300 audit(1765563697.535:562): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc28bee260 a2=0 a3=0 items=0 ppid=4115 pid=4415 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:37.535000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:21:37.552904 kernel: audit: type=1327 audit(1765563697.535:562): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:21:37.559130 containerd[2529]: time="2025-12-12T18:21:37.559092465Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6b5c8d6c49-skhvl,Uid:ca55b33c-52c2-4807-9909-ab9d5bda037b,Namespace:calico-system,Attempt:0,}" Dec 12 18:21:37.586729 kubelet[4009]: I1212 18:21:37.586547 4009 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/fa9bac96-7d43-4356-a47d-4f769cbefc78-var-lib-calico\") pod \"calico-node-vnsfd\" (UID: 
\"fa9bac96-7d43-4356-a47d-4f769cbefc78\") " pod="calico-system/calico-node-vnsfd" Dec 12 18:21:37.586729 kubelet[4009]: I1212 18:21:37.586585 4009 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/fa9bac96-7d43-4356-a47d-4f769cbefc78-var-run-calico\") pod \"calico-node-vnsfd\" (UID: \"fa9bac96-7d43-4356-a47d-4f769cbefc78\") " pod="calico-system/calico-node-vnsfd" Dec 12 18:21:37.586729 kubelet[4009]: I1212 18:21:37.586604 4009 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/fa9bac96-7d43-4356-a47d-4f769cbefc78-cni-net-dir\") pod \"calico-node-vnsfd\" (UID: \"fa9bac96-7d43-4356-a47d-4f769cbefc78\") " pod="calico-system/calico-node-vnsfd" Dec 12 18:21:37.586729 kubelet[4009]: I1212 18:21:37.586617 4009 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/fa9bac96-7d43-4356-a47d-4f769cbefc78-node-certs\") pod \"calico-node-vnsfd\" (UID: \"fa9bac96-7d43-4356-a47d-4f769cbefc78\") " pod="calico-system/calico-node-vnsfd" Dec 12 18:21:37.586729 kubelet[4009]: I1212 18:21:37.586629 4009 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fa9bac96-7d43-4356-a47d-4f769cbefc78-xtables-lock\") pod \"calico-node-vnsfd\" (UID: \"fa9bac96-7d43-4356-a47d-4f769cbefc78\") " pod="calico-system/calico-node-vnsfd" Dec 12 18:21:37.586919 kubelet[4009]: I1212 18:21:37.586644 4009 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqc7r\" (UniqueName: \"kubernetes.io/projected/fa9bac96-7d43-4356-a47d-4f769cbefc78-kube-api-access-xqc7r\") pod \"calico-node-vnsfd\" (UID: \"fa9bac96-7d43-4356-a47d-4f769cbefc78\") " pod="calico-system/calico-node-vnsfd" Dec 12 18:21:37.586919 kubelet[4009]: I1212 18:21:37.586663 4009 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/fa9bac96-7d43-4356-a47d-4f769cbefc78-cni-log-dir\") pod \"calico-node-vnsfd\" (UID: \"fa9bac96-7d43-4356-a47d-4f769cbefc78\") " pod="calico-system/calico-node-vnsfd" Dec 12 18:21:37.586919 kubelet[4009]: I1212 18:21:37.586704 4009 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/fa9bac96-7d43-4356-a47d-4f769cbefc78-flexvol-driver-host\") pod \"calico-node-vnsfd\" (UID: \"fa9bac96-7d43-4356-a47d-4f769cbefc78\") " pod="calico-system/calico-node-vnsfd" Dec 12 18:21:37.586919 kubelet[4009]: I1212 18:21:37.586752 4009 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/fa9bac96-7d43-4356-a47d-4f769cbefc78-policysync\") pod \"calico-node-vnsfd\" (UID: \"fa9bac96-7d43-4356-a47d-4f769cbefc78\") " pod="calico-system/calico-node-vnsfd" Dec 12 18:21:37.586919 kubelet[4009]: I1212 18:21:37.586772 4009 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fa9bac96-7d43-4356-a47d-4f769cbefc78-tigera-ca-bundle\") pod \"calico-node-vnsfd\" (UID: \"fa9bac96-7d43-4356-a47d-4f769cbefc78\") " 
pod="calico-system/calico-node-vnsfd" Dec 12 18:21:37.587009 kubelet[4009]: I1212 18:21:37.586789 4009 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fa9bac96-7d43-4356-a47d-4f769cbefc78-lib-modules\") pod \"calico-node-vnsfd\" (UID: \"fa9bac96-7d43-4356-a47d-4f769cbefc78\") " pod="calico-system/calico-node-vnsfd" Dec 12 18:21:37.587009 kubelet[4009]: I1212 18:21:37.586827 4009 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/fa9bac96-7d43-4356-a47d-4f769cbefc78-cni-bin-dir\") pod \"calico-node-vnsfd\" (UID: \"fa9bac96-7d43-4356-a47d-4f769cbefc78\") " pod="calico-system/calico-node-vnsfd" Dec 12 18:21:37.613235 containerd[2529]: time="2025-12-12T18:21:37.613195239Z" level=info msg="connecting to shim c8492ec0e9086908e591e891edfe22431f9d028f6739cb0ae1dcc062fa43692f" address="unix:///run/containerd/s/5b05d0beb312a5e39b2c65910b46264998f6e6b57498b6f10db4a55f2de91f74" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:21:37.640715 systemd[1]: Started cri-containerd-c8492ec0e9086908e591e891edfe22431f9d028f6739cb0ae1dcc062fa43692f.scope - libcontainer container c8492ec0e9086908e591e891edfe22431f9d028f6739cb0ae1dcc062fa43692f. Dec 12 18:21:37.648000 audit: BPF prog-id=175 op=LOAD Dec 12 18:21:37.652567 kernel: audit: type=1334 audit(1765563697.648:563): prog-id=175 op=LOAD Dec 12 18:21:37.652639 kernel: audit: type=1334 audit(1765563697.649:564): prog-id=176 op=LOAD Dec 12 18:21:37.649000 audit: BPF prog-id=176 op=LOAD Dec 12 18:21:37.661643 kernel: audit: type=1300 audit(1765563697.649:564): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4425 pid=4437 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:37.649000 audit[4437]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4425 pid=4437 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:37.649000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6338343932656330653930383639303865353931653839316564666532 Dec 12 18:21:37.649000 audit: BPF prog-id=176 op=UNLOAD Dec 12 18:21:37.649000 audit[4437]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4425 pid=4437 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:37.649000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6338343932656330653930383639303865353931653839316564666532 Dec 12 18:21:37.649000 audit: BPF prog-id=177 op=LOAD Dec 12 18:21:37.649000 audit[4437]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4425 pid=4437 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:37.649000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6338343932656330653930383639303865353931653839316564666532 Dec 12 18:21:37.649000 audit: BPF prog-id=178 op=LOAD Dec 12 18:21:37.649000 audit[4437]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=4425 pid=4437 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:37.649000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6338343932656330653930383639303865353931653839316564666532 Dec 12 18:21:37.649000 audit: BPF prog-id=178 op=UNLOAD Dec 12 18:21:37.649000 audit[4437]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4425 pid=4437 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:37.649000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6338343932656330653930383639303865353931653839316564666532 Dec 12 18:21:37.649000 audit: BPF prog-id=177 op=UNLOAD Dec 12 18:21:37.649000 audit[4437]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4425 pid=4437 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:37.649000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6338343932656330653930383639303865353931653839316564666532 Dec 12 18:21:37.649000 audit: BPF prog-id=179 op=LOAD Dec 12 18:21:37.649000 audit[4437]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=4425 pid=4437 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:37.649000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6338343932656330653930383639303865353931653839316564666532 Dec 12 18:21:37.673540 kernel: audit: type=1327 audit(1765563697.649:564): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6338343932656330653930383639303865353931653839316564666532 Dec 12 18:21:37.685968 containerd[2529]: 
time="2025-12-12T18:21:37.685935750Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6b5c8d6c49-skhvl,Uid:ca55b33c-52c2-4807-9909-ab9d5bda037b,Namespace:calico-system,Attempt:0,} returns sandbox id \"c8492ec0e9086908e591e891edfe22431f9d028f6739cb0ae1dcc062fa43692f\"" Dec 12 18:21:37.687334 containerd[2529]: time="2025-12-12T18:21:37.687295892Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Dec 12 18:21:37.690288 kubelet[4009]: E1212 18:21:37.690261 4009 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:21:37.690288 kubelet[4009]: W1212 18:21:37.690282 4009 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:21:37.690422 kubelet[4009]: E1212 18:21:37.690304 4009 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:21:37.694560 kubelet[4009]: E1212 18:21:37.693692 4009 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:21:37.694560 kubelet[4009]: W1212 18:21:37.693709 4009 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:21:37.694560 kubelet[4009]: E1212 18:21:37.693724 4009 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:21:37.700278 kubelet[4009]: E1212 18:21:37.700259 4009 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:21:37.700278 kubelet[4009]: W1212 18:21:37.700273 4009 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:21:37.700367 kubelet[4009]: E1212 18:21:37.700288 4009 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:21:37.812729 kubelet[4009]: E1212 18:21:37.812628 4009 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bq297" podUID="10291b6f-a9ca-4c45-b211-06a17f4d693f" Dec 12 18:21:37.825470 containerd[2529]: time="2025-12-12T18:21:37.824910662Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-vnsfd,Uid:fa9bac96-7d43-4356-a47d-4f769cbefc78,Namespace:calico-system,Attempt:0,}" Dec 12 18:21:37.867981 containerd[2529]: time="2025-12-12T18:21:37.867941190Z" level=info msg="connecting to shim fdae9cc5df35d83ac0b9fc1ed253806f6c0ed36cf6700c5a096349d3b6dcf459" address="unix:///run/containerd/s/f7d48c5c8b13d63ca3c73fb871c797807b67859f054bc198c44edbdbc9fd3299" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:21:37.878214 kubelet[4009]: E1212 18:21:37.878112 4009 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:21:37.878214 kubelet[4009]: W1212 18:21:37.878133 4009 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:21:37.878214 kubelet[4009]: E1212 18:21:37.878154 4009 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:21:37.878780 kubelet[4009]: E1212 18:21:37.878283 4009 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:21:37.878780 kubelet[4009]: W1212 18:21:37.878289 4009 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:21:37.878780 kubelet[4009]: E1212 18:21:37.878310 4009 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:21:37.878780 kubelet[4009]: E1212 18:21:37.878498 4009 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:21:37.878780 kubelet[4009]: W1212 18:21:37.878506 4009 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:21:37.878780 kubelet[4009]: E1212 18:21:37.878543 4009 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:21:37.879349 kubelet[4009]: E1212 18:21:37.878860 4009 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:21:37.879349 kubelet[4009]: W1212 18:21:37.878869 4009 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:21:37.879349 kubelet[4009]: E1212 18:21:37.878892 4009 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:21:37.879349 kubelet[4009]: E1212 18:21:37.879055 4009 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:21:37.879349 kubelet[4009]: W1212 18:21:37.879061 4009 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:21:37.879349 kubelet[4009]: E1212 18:21:37.879071 4009 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:21:37.879349 kubelet[4009]: E1212 18:21:37.879294 4009 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:21:37.879349 kubelet[4009]: W1212 18:21:37.879303 4009 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:21:37.879349 kubelet[4009]: E1212 18:21:37.879311 4009 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:21:37.879930 kubelet[4009]: E1212 18:21:37.879450 4009 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:21:37.879930 kubelet[4009]: W1212 18:21:37.879456 4009 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:21:37.879930 kubelet[4009]: E1212 18:21:37.879464 4009 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:21:37.879930 kubelet[4009]: E1212 18:21:37.879598 4009 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:21:37.879930 kubelet[4009]: W1212 18:21:37.879603 4009 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:21:37.879930 kubelet[4009]: E1212 18:21:37.879610 4009 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:21:37.879930 kubelet[4009]: E1212 18:21:37.879784 4009 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:21:37.879930 kubelet[4009]: W1212 18:21:37.879791 4009 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:21:37.879930 kubelet[4009]: E1212 18:21:37.879800 4009 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:21:37.880152 kubelet[4009]: E1212 18:21:37.879929 4009 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:21:37.880152 kubelet[4009]: W1212 18:21:37.879946 4009 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:21:37.880152 kubelet[4009]: E1212 18:21:37.879953 4009 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:21:37.880152 kubelet[4009]: E1212 18:21:37.880054 4009 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:21:37.880152 kubelet[4009]: W1212 18:21:37.880059 4009 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:21:37.880152 kubelet[4009]: E1212 18:21:37.880065 4009 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:21:37.880301 kubelet[4009]: E1212 18:21:37.880162 4009 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:21:37.880301 kubelet[4009]: W1212 18:21:37.880167 4009 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:21:37.880301 kubelet[4009]: E1212 18:21:37.880173 4009 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:21:37.880301 kubelet[4009]: E1212 18:21:37.880272 4009 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:21:37.880301 kubelet[4009]: W1212 18:21:37.880277 4009 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:21:37.880301 kubelet[4009]: E1212 18:21:37.880283 4009 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:21:37.880452 kubelet[4009]: E1212 18:21:37.880386 4009 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:21:37.880452 kubelet[4009]: W1212 18:21:37.880391 4009 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:21:37.880452 kubelet[4009]: E1212 18:21:37.880398 4009 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:21:37.880576 kubelet[4009]: E1212 18:21:37.880481 4009 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:21:37.880576 kubelet[4009]: W1212 18:21:37.880486 4009 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:21:37.880576 kubelet[4009]: E1212 18:21:37.880491 4009 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:21:37.880649 kubelet[4009]: E1212 18:21:37.880584 4009 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:21:37.880649 kubelet[4009]: W1212 18:21:37.880589 4009 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:21:37.880649 kubelet[4009]: E1212 18:21:37.880595 4009 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:21:37.880728 kubelet[4009]: E1212 18:21:37.880695 4009 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:21:37.880728 kubelet[4009]: W1212 18:21:37.880700 4009 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:21:37.880728 kubelet[4009]: E1212 18:21:37.880706 4009 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:21:37.881765 kubelet[4009]: E1212 18:21:37.881751 4009 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:21:37.881846 kubelet[4009]: W1212 18:21:37.881835 4009 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:21:37.881938 kubelet[4009]: E1212 18:21:37.881899 4009 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:21:37.882108 kubelet[4009]: E1212 18:21:37.882101 4009 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:21:37.882167 kubelet[4009]: W1212 18:21:37.882159 4009 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:21:37.882246 kubelet[4009]: E1212 18:21:37.882204 4009 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:21:37.882409 kubelet[4009]: E1212 18:21:37.882402 4009 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:21:37.882569 kubelet[4009]: W1212 18:21:37.882457 4009 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:21:37.882569 kubelet[4009]: E1212 18:21:37.882467 4009 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:21:37.890014 kubelet[4009]: E1212 18:21:37.889979 4009 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:21:37.890014 kubelet[4009]: W1212 18:21:37.890002 4009 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:21:37.890014 kubelet[4009]: E1212 18:21:37.890016 4009 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:21:37.890178 kubelet[4009]: I1212 18:21:37.890041 4009 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/10291b6f-a9ca-4c45-b211-06a17f4d693f-registration-dir\") pod \"csi-node-driver-bq297\" (UID: \"10291b6f-a9ca-4c45-b211-06a17f4d693f\") " pod="calico-system/csi-node-driver-bq297" Dec 12 18:21:37.891682 kubelet[4009]: E1212 18:21:37.891644 4009 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:21:37.891682 kubelet[4009]: W1212 18:21:37.891671 4009 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:21:37.891682 kubelet[4009]: E1212 18:21:37.891685 4009 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:21:37.891816 kubelet[4009]: I1212 18:21:37.891706 4009 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/10291b6f-a9ca-4c45-b211-06a17f4d693f-kubelet-dir\") pod \"csi-node-driver-bq297\" (UID: \"10291b6f-a9ca-4c45-b211-06a17f4d693f\") " pod="calico-system/csi-node-driver-bq297" Dec 12 18:21:37.892278 kubelet[4009]: E1212 18:21:37.892243 4009 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:21:37.892278 kubelet[4009]: W1212 18:21:37.892261 4009 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:21:37.892697 kubelet[4009]: E1212 18:21:37.892423 4009 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:21:37.892697 kubelet[4009]: I1212 18:21:37.892449 4009 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/10291b6f-a9ca-4c45-b211-06a17f4d693f-socket-dir\") pod \"csi-node-driver-bq297\" (UID: \"10291b6f-a9ca-4c45-b211-06a17f4d693f\") " pod="calico-system/csi-node-driver-bq297" Dec 12 18:21:37.893227 kubelet[4009]: E1212 18:21:37.893216 4009 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:21:37.893424 kubelet[4009]: W1212 18:21:37.893333 4009 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:21:37.893424 kubelet[4009]: E1212 18:21:37.893348 4009 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:21:37.893754 kubelet[4009]: I1212 18:21:37.893500 4009 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rf7fn\" (UniqueName: \"kubernetes.io/projected/10291b6f-a9ca-4c45-b211-06a17f4d693f-kube-api-access-rf7fn\") pod \"csi-node-driver-bq297\" (UID: \"10291b6f-a9ca-4c45-b211-06a17f4d693f\") " pod="calico-system/csi-node-driver-bq297" Dec 12 18:21:37.894169 kubelet[4009]: E1212 18:21:37.894157 4009 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:21:37.894255 kubelet[4009]: W1212 18:21:37.894244 4009 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:21:37.894314 kubelet[4009]: E1212 18:21:37.894299 4009 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:21:37.894564 kubelet[4009]: E1212 18:21:37.894509 4009 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:21:37.894911 systemd[1]: Started cri-containerd-fdae9cc5df35d83ac0b9fc1ed253806f6c0ed36cf6700c5a096349d3b6dcf459.scope - libcontainer container fdae9cc5df35d83ac0b9fc1ed253806f6c0ed36cf6700c5a096349d3b6dcf459. Dec 12 18:21:37.895400 kubelet[4009]: W1212 18:21:37.895184 4009 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:21:37.895400 kubelet[4009]: E1212 18:21:37.895203 4009 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:21:37.896386 kubelet[4009]: E1212 18:21:37.896131 4009 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:21:37.896386 kubelet[4009]: W1212 18:21:37.896144 4009 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:21:37.896386 kubelet[4009]: E1212 18:21:37.896157 4009 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:21:37.896883 kubelet[4009]: E1212 18:21:37.896869 4009 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:21:37.897031 kubelet[4009]: W1212 18:21:37.896940 4009 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:21:37.897031 kubelet[4009]: E1212 18:21:37.896953 4009 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:21:37.897591 kubelet[4009]: E1212 18:21:37.897557 4009 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:21:37.897591 kubelet[4009]: W1212 18:21:37.897567 4009 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:21:37.897591 kubelet[4009]: E1212 18:21:37.897579 4009 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:21:37.897816 kubelet[4009]: I1212 18:21:37.897799 4009 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/10291b6f-a9ca-4c45-b211-06a17f4d693f-varrun\") pod \"csi-node-driver-bq297\" (UID: \"10291b6f-a9ca-4c45-b211-06a17f4d693f\") " pod="calico-system/csi-node-driver-bq297" Dec 12 18:21:37.897965 kubelet[4009]: E1212 18:21:37.897959 4009 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:21:37.898031 kubelet[4009]: W1212 18:21:37.898023 4009 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:21:37.898070 kubelet[4009]: E1212 18:21:37.898063 4009 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:21:37.898407 kubelet[4009]: E1212 18:21:37.898361 4009 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:21:37.898407 kubelet[4009]: W1212 18:21:37.898372 4009 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:21:37.898407 kubelet[4009]: E1212 18:21:37.898382 4009 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:21:37.898782 kubelet[4009]: E1212 18:21:37.898750 4009 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:21:37.898782 kubelet[4009]: W1212 18:21:37.898760 4009 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:21:37.898782 kubelet[4009]: E1212 18:21:37.898771 4009 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:21:37.899101 kubelet[4009]: E1212 18:21:37.899065 4009 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:21:37.899101 kubelet[4009]: W1212 18:21:37.899076 4009 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:21:37.899101 kubelet[4009]: E1212 18:21:37.899090 4009 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:21:37.899594 kubelet[4009]: E1212 18:21:37.899558 4009 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:21:37.899594 kubelet[4009]: W1212 18:21:37.899571 4009 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:21:37.899594 kubelet[4009]: E1212 18:21:37.899582 4009 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:21:37.899966 kubelet[4009]: E1212 18:21:37.899956 4009 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:21:37.900033 kubelet[4009]: W1212 18:21:37.900024 4009 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:21:37.900077 kubelet[4009]: E1212 18:21:37.900070 4009 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:21:37.915000 audit: BPF prog-id=180 op=LOAD Dec 12 18:21:37.915000 audit: BPF prog-id=181 op=LOAD Dec 12 18:21:37.915000 audit[4498]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8238 a2=98 a3=0 items=0 ppid=4487 pid=4498 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:37.915000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6664616539636335646633356438336163306239666331656432353338 Dec 12 18:21:37.915000 audit: BPF prog-id=181 op=UNLOAD Dec 12 18:21:37.915000 audit[4498]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4487 pid=4498 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:37.915000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6664616539636335646633356438336163306239666331656432353338 Dec 12 18:21:37.915000 audit: BPF prog-id=182 op=LOAD Dec 12 18:21:37.915000 audit[4498]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=4487 pid=4498 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:37.915000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6664616539636335646633356438336163306239666331656432353338 Dec 12 18:21:37.916000 audit: BPF prog-id=183 op=LOAD Dec 12 
18:21:37.916000 audit[4498]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=4487 pid=4498 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:37.916000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6664616539636335646633356438336163306239666331656432353338 Dec 12 18:21:37.916000 audit: BPF prog-id=183 op=UNLOAD Dec 12 18:21:37.916000 audit[4498]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4487 pid=4498 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:37.916000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6664616539636335646633356438336163306239666331656432353338 Dec 12 18:21:37.916000 audit: BPF prog-id=182 op=UNLOAD Dec 12 18:21:37.916000 audit[4498]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4487 pid=4498 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:37.916000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6664616539636335646633356438336163306239666331656432353338 Dec 12 18:21:37.916000 audit: BPF prog-id=184 op=LOAD Dec 12 18:21:37.916000 audit[4498]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 ppid=4487 pid=4498 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:37.916000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6664616539636335646633356438336163306239666331656432353338 Dec 12 18:21:37.937628 containerd[2529]: time="2025-12-12T18:21:37.937512888Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-vnsfd,Uid:fa9bac96-7d43-4356-a47d-4f769cbefc78,Namespace:calico-system,Attempt:0,} returns sandbox id \"fdae9cc5df35d83ac0b9fc1ed253806f6c0ed36cf6700c5a096349d3b6dcf459\"" Dec 12 18:21:38.000046 kubelet[4009]: E1212 18:21:38.000021 4009 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:21:38.000046 kubelet[4009]: W1212 18:21:38.000039 4009 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:21:38.000188 kubelet[4009]: E1212 18:21:38.000056 4009 plugins.go:703] "Error dynamically probing plugins" err="error 
creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:21:38.000274 kubelet[4009]: E1212 18:21:38.000257 4009 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:21:38.000274 kubelet[4009]: W1212 18:21:38.000271 4009 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:21:38.000351 kubelet[4009]: E1212 18:21:38.000280 4009 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:21:38.000432 kubelet[4009]: E1212 18:21:38.000417 4009 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:21:38.000432 kubelet[4009]: W1212 18:21:38.000429 4009 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:21:38.000602 kubelet[4009]: E1212 18:21:38.000438 4009 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:21:38.000602 kubelet[4009]: E1212 18:21:38.000600 4009 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:21:38.000669 kubelet[4009]: W1212 18:21:38.000606 4009 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:21:38.000669 kubelet[4009]: E1212 18:21:38.000613 4009 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:21:38.000760 kubelet[4009]: E1212 18:21:38.000747 4009 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:21:38.000760 kubelet[4009]: W1212 18:21:38.000756 4009 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:21:38.000821 kubelet[4009]: E1212 18:21:38.000763 4009 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:21:38.000944 kubelet[4009]: E1212 18:21:38.000925 4009 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:21:38.000944 kubelet[4009]: W1212 18:21:38.000933 4009 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:21:38.000944 kubelet[4009]: E1212 18:21:38.000942 4009 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:21:38.001354 kubelet[4009]: E1212 18:21:38.001337 4009 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:21:38.001354 kubelet[4009]: W1212 18:21:38.001353 4009 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:21:38.001443 kubelet[4009]: E1212 18:21:38.001367 4009 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:21:38.001974 kubelet[4009]: E1212 18:21:38.001583 4009 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:21:38.001974 kubelet[4009]: W1212 18:21:38.001592 4009 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:21:38.001974 kubelet[4009]: E1212 18:21:38.001600 4009 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:21:38.001974 kubelet[4009]: E1212 18:21:38.001807 4009 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:21:38.001974 kubelet[4009]: W1212 18:21:38.001814 4009 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:21:38.001974 kubelet[4009]: E1212 18:21:38.001822 4009 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:21:38.002184 kubelet[4009]: E1212 18:21:38.001985 4009 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:21:38.002184 kubelet[4009]: W1212 18:21:38.001991 4009 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:21:38.002184 kubelet[4009]: E1212 18:21:38.001999 4009 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:21:38.002184 kubelet[4009]: E1212 18:21:38.002154 4009 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:21:38.002184 kubelet[4009]: W1212 18:21:38.002159 4009 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:21:38.002310 kubelet[4009]: E1212 18:21:38.002166 4009 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:21:38.002338 kubelet[4009]: E1212 18:21:38.002324 4009 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:21:38.002338 kubelet[4009]: W1212 18:21:38.002330 4009 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:21:38.002385 kubelet[4009]: E1212 18:21:38.002357 4009 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:21:38.002492 kubelet[4009]: E1212 18:21:38.002480 4009 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:21:38.002713 kubelet[4009]: W1212 18:21:38.002542 4009 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:21:38.002713 kubelet[4009]: E1212 18:21:38.002553 4009 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:21:38.002779 kubelet[4009]: E1212 18:21:38.002730 4009 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:21:38.002779 kubelet[4009]: W1212 18:21:38.002736 4009 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:21:38.002779 kubelet[4009]: E1212 18:21:38.002766 4009 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:21:38.003000 kubelet[4009]: E1212 18:21:38.002969 4009 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:21:38.003079 kubelet[4009]: W1212 18:21:38.002979 4009 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:21:38.003079 kubelet[4009]: E1212 18:21:38.003056 4009 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:21:38.003397 kubelet[4009]: E1212 18:21:38.003365 4009 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:21:38.003397 kubelet[4009]: W1212 18:21:38.003376 4009 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:21:38.003397 kubelet[4009]: E1212 18:21:38.003386 4009 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:21:38.003804 kubelet[4009]: E1212 18:21:38.003790 4009 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:21:38.003935 kubelet[4009]: W1212 18:21:38.003844 4009 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:21:38.003935 kubelet[4009]: E1212 18:21:38.003857 4009 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:21:38.004265 kubelet[4009]: E1212 18:21:38.004256 4009 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:21:38.004335 kubelet[4009]: W1212 18:21:38.004328 4009 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:21:38.004404 kubelet[4009]: E1212 18:21:38.004378 4009 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:21:38.004664 kubelet[4009]: E1212 18:21:38.004657 4009 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:21:38.004746 kubelet[4009]: W1212 18:21:38.004720 4009 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:21:38.004746 kubelet[4009]: E1212 18:21:38.004728 4009 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:21:38.004996 kubelet[4009]: E1212 18:21:38.004973 4009 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:21:38.004996 kubelet[4009]: W1212 18:21:38.004981 4009 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:21:38.004996 kubelet[4009]: E1212 18:21:38.004988 4009 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:21:38.005278 kubelet[4009]: E1212 18:21:38.005273 4009 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:21:38.005344 kubelet[4009]: W1212 18:21:38.005320 4009 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:21:38.005344 kubelet[4009]: E1212 18:21:38.005328 4009 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:21:38.005582 kubelet[4009]: E1212 18:21:38.005575 4009 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:21:38.005667 kubelet[4009]: W1212 18:21:38.005633 4009 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:21:38.005667 kubelet[4009]: E1212 18:21:38.005644 4009 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:21:38.006024 kubelet[4009]: E1212 18:21:38.006015 4009 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:21:38.006166 kubelet[4009]: W1212 18:21:38.006090 4009 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:21:38.006166 kubelet[4009]: E1212 18:21:38.006106 4009 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:21:38.006402 kubelet[4009]: E1212 18:21:38.006384 4009 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:21:38.006447 kubelet[4009]: W1212 18:21:38.006440 4009 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:21:38.006498 kubelet[4009]: E1212 18:21:38.006486 4009 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:21:38.007010 kubelet[4009]: E1212 18:21:38.006927 4009 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:21:38.007010 kubelet[4009]: W1212 18:21:38.006975 4009 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:21:38.007010 kubelet[4009]: E1212 18:21:38.006988 4009 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:21:38.011634 kubelet[4009]: E1212 18:21:38.011617 4009 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:21:38.011634 kubelet[4009]: W1212 18:21:38.011632 4009 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:21:38.011810 kubelet[4009]: E1212 18:21:38.011644 4009 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:21:39.007824 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3253112701.mount: Deactivated successfully. Dec 12 18:21:39.310766 kubelet[4009]: E1212 18:21:39.310573 4009 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bq297" podUID="10291b6f-a9ca-4c45-b211-06a17f4d693f" Dec 12 18:21:40.956751 containerd[2529]: time="2025-12-12T18:21:40.956707985Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:21:40.959525 containerd[2529]: time="2025-12-12T18:21:40.959406558Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33735893" Dec 12 18:21:40.962473 containerd[2529]: time="2025-12-12T18:21:40.962447693Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:21:40.966199 containerd[2529]: time="2025-12-12T18:21:40.966169351Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:21:40.966603 containerd[2529]: time="2025-12-12T18:21:40.966580863Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 3.279238199s" Dec 12 18:21:40.966672 containerd[2529]: time="2025-12-12T18:21:40.966660777Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Dec 12 18:21:40.967558 containerd[2529]: time="2025-12-12T18:21:40.967538851Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Dec 12 18:21:40.987822 containerd[2529]: time="2025-12-12T18:21:40.987786035Z" level=info msg="CreateContainer within sandbox \"c8492ec0e9086908e591e891edfe22431f9d028f6739cb0ae1dcc062fa43692f\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Dec 12 18:21:41.009772 containerd[2529]: time="2025-12-12T18:21:41.009743400Z" level=info msg="Container 46d6e6e30d59de932dfb17fc511bede7f401f7ff71bb89ffe852f9c231e7aeef: CDI devices from CRI Config.CDIDevices: []" Dec 12 18:21:41.013416 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1897378665.mount: Deactivated successfully. 
Dec 12 18:21:41.028670 containerd[2529]: time="2025-12-12T18:21:41.028638339Z" level=info msg="CreateContainer within sandbox \"c8492ec0e9086908e591e891edfe22431f9d028f6739cb0ae1dcc062fa43692f\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"46d6e6e30d59de932dfb17fc511bede7f401f7ff71bb89ffe852f9c231e7aeef\"" Dec 12 18:21:41.029650 containerd[2529]: time="2025-12-12T18:21:41.029139681Z" level=info msg="StartContainer for \"46d6e6e30d59de932dfb17fc511bede7f401f7ff71bb89ffe852f9c231e7aeef\"" Dec 12 18:21:41.030673 containerd[2529]: time="2025-12-12T18:21:41.030648404Z" level=info msg="connecting to shim 46d6e6e30d59de932dfb17fc511bede7f401f7ff71bb89ffe852f9c231e7aeef" address="unix:///run/containerd/s/5b05d0beb312a5e39b2c65910b46264998f6e6b57498b6f10db4a55f2de91f74" protocol=ttrpc version=3 Dec 12 18:21:41.053705 systemd[1]: Started cri-containerd-46d6e6e30d59de932dfb17fc511bede7f401f7ff71bb89ffe852f9c231e7aeef.scope - libcontainer container 46d6e6e30d59de932dfb17fc511bede7f401f7ff71bb89ffe852f9c231e7aeef. Dec 12 18:21:41.062000 audit: BPF prog-id=185 op=LOAD Dec 12 18:21:41.063000 audit: BPF prog-id=186 op=LOAD Dec 12 18:21:41.063000 audit[4595]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8238 a2=98 a3=0 items=0 ppid=4425 pid=4595 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:41.063000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3436643665366533306435396465393332646662313766633531316265 Dec 12 18:21:41.063000 audit: BPF prog-id=186 op=UNLOAD Dec 12 18:21:41.063000 audit[4595]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4425 pid=4595 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:41.063000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3436643665366533306435396465393332646662313766633531316265 Dec 12 18:21:41.063000 audit: BPF prog-id=187 op=LOAD Dec 12 18:21:41.063000 audit[4595]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=4425 pid=4595 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:41.063000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3436643665366533306435396465393332646662313766633531316265 Dec 12 18:21:41.063000 audit: BPF prog-id=188 op=LOAD Dec 12 18:21:41.063000 audit[4595]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=4425 pid=4595 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:41.063000 audit: 
PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3436643665366533306435396465393332646662313766633531316265 Dec 12 18:21:41.063000 audit: BPF prog-id=188 op=UNLOAD Dec 12 18:21:41.063000 audit[4595]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4425 pid=4595 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:41.063000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3436643665366533306435396465393332646662313766633531316265 Dec 12 18:21:41.063000 audit: BPF prog-id=187 op=UNLOAD Dec 12 18:21:41.063000 audit[4595]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4425 pid=4595 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:41.063000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3436643665366533306435396465393332646662313766633531316265 Dec 12 18:21:41.063000 audit: BPF prog-id=189 op=LOAD Dec 12 18:21:41.063000 audit[4595]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 ppid=4425 pid=4595 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:41.063000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3436643665366533306435396465393332646662313766633531316265 Dec 12 18:21:41.097678 containerd[2529]: time="2025-12-12T18:21:41.097643103Z" level=info msg="StartContainer for \"46d6e6e30d59de932dfb17fc511bede7f401f7ff71bb89ffe852f9c231e7aeef\" returns successfully" Dec 12 18:21:41.311490 kubelet[4009]: E1212 18:21:41.311079 4009 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bq297" podUID="10291b6f-a9ca-4c45-b211-06a17f4d693f" Dec 12 18:21:41.400182 kubelet[4009]: E1212 18:21:41.400154 4009 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:21:41.400182 kubelet[4009]: W1212 18:21:41.400178 4009 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:21:41.400458 kubelet[4009]: E1212 18:21:41.400197 4009 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:21:41.400458 kubelet[4009]: E1212 18:21:41.400329 4009 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:21:41.400458 kubelet[4009]: W1212 18:21:41.400334 4009 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:21:41.400458 kubelet[4009]: E1212 18:21:41.400342 4009 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:21:41.400458 kubelet[4009]: E1212 18:21:41.400445 4009 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:21:41.400458 kubelet[4009]: W1212 18:21:41.400451 4009 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:21:41.400458 kubelet[4009]: E1212 18:21:41.400458 4009 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:21:41.400782 kubelet[4009]: E1212 18:21:41.400622 4009 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:21:41.400782 kubelet[4009]: W1212 18:21:41.400629 4009 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:21:41.400782 kubelet[4009]: E1212 18:21:41.400636 4009 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:21:41.400782 kubelet[4009]: E1212 18:21:41.400747 4009 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:21:41.400782 kubelet[4009]: W1212 18:21:41.400753 4009 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:21:41.400782 kubelet[4009]: E1212 18:21:41.400759 4009 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:21:41.400976 kubelet[4009]: E1212 18:21:41.400859 4009 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:21:41.400976 kubelet[4009]: W1212 18:21:41.400864 4009 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:21:41.400976 kubelet[4009]: E1212 18:21:41.400870 4009 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:21:41.400976 kubelet[4009]: E1212 18:21:41.400971 4009 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:21:41.400976 kubelet[4009]: W1212 18:21:41.400975 4009 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:21:41.401106 kubelet[4009]: E1212 18:21:41.400981 4009 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:21:41.401106 kubelet[4009]: E1212 18:21:41.401070 4009 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:21:41.401106 kubelet[4009]: W1212 18:21:41.401074 4009 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:21:41.401106 kubelet[4009]: E1212 18:21:41.401080 4009 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:21:41.401210 kubelet[4009]: E1212 18:21:41.401176 4009 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:21:41.401210 kubelet[4009]: W1212 18:21:41.401182 4009 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:21:41.401210 kubelet[4009]: E1212 18:21:41.401187 4009 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:21:41.401678 kubelet[4009]: E1212 18:21:41.401660 4009 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:21:41.401678 kubelet[4009]: W1212 18:21:41.401675 4009 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:21:41.401843 kubelet[4009]: E1212 18:21:41.401689 4009 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:21:41.401843 kubelet[4009]: E1212 18:21:41.401826 4009 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:21:41.401843 kubelet[4009]: W1212 18:21:41.401832 4009 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:21:41.401843 kubelet[4009]: E1212 18:21:41.401840 4009 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:21:41.401954 kubelet[4009]: E1212 18:21:41.401948 4009 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:21:41.401985 kubelet[4009]: W1212 18:21:41.401954 4009 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:21:41.401985 kubelet[4009]: E1212 18:21:41.401960 4009 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:21:41.402070 kubelet[4009]: E1212 18:21:41.402065 4009 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:21:41.402070 kubelet[4009]: W1212 18:21:41.402070 4009 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:21:41.402126 kubelet[4009]: E1212 18:21:41.402077 4009 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:21:41.402186 kubelet[4009]: E1212 18:21:41.402178 4009 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:21:41.402220 kubelet[4009]: W1212 18:21:41.402187 4009 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:21:41.402220 kubelet[4009]: E1212 18:21:41.402194 4009 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:21:41.402669 kubelet[4009]: E1212 18:21:41.402653 4009 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:21:41.402669 kubelet[4009]: W1212 18:21:41.402667 4009 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:21:41.402752 kubelet[4009]: E1212 18:21:41.402677 4009 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:21:41.424782 kubelet[4009]: E1212 18:21:41.424758 4009 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:21:41.424782 kubelet[4009]: W1212 18:21:41.424778 4009 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:21:41.424946 kubelet[4009]: E1212 18:21:41.424791 4009 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:21:41.424946 kubelet[4009]: E1212 18:21:41.424943 4009 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:21:41.425017 kubelet[4009]: W1212 18:21:41.424950 4009 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:21:41.425017 kubelet[4009]: E1212 18:21:41.424957 4009 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:21:41.425108 kubelet[4009]: E1212 18:21:41.425099 4009 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:21:41.425139 kubelet[4009]: W1212 18:21:41.425108 4009 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:21:41.425139 kubelet[4009]: E1212 18:21:41.425116 4009 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:21:41.425283 kubelet[4009]: E1212 18:21:41.425274 4009 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:21:41.425283 kubelet[4009]: W1212 18:21:41.425283 4009 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:21:41.425337 kubelet[4009]: E1212 18:21:41.425289 4009 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:21:41.425416 kubelet[4009]: E1212 18:21:41.425408 4009 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:21:41.425458 kubelet[4009]: W1212 18:21:41.425417 4009 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:21:41.425458 kubelet[4009]: E1212 18:21:41.425424 4009 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:21:41.425560 kubelet[4009]: E1212 18:21:41.425551 4009 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:21:41.425586 kubelet[4009]: W1212 18:21:41.425560 4009 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:21:41.425586 kubelet[4009]: E1212 18:21:41.425569 4009 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:21:41.425732 kubelet[4009]: E1212 18:21:41.425723 4009 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:21:41.425770 kubelet[4009]: W1212 18:21:41.425733 4009 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:21:41.425770 kubelet[4009]: E1212 18:21:41.425740 4009 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:21:41.425910 kubelet[4009]: E1212 18:21:41.425900 4009 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:21:41.425939 kubelet[4009]: W1212 18:21:41.425911 4009 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:21:41.425939 kubelet[4009]: E1212 18:21:41.425918 4009 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:21:41.426096 kubelet[4009]: E1212 18:21:41.426012 4009 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:21:41.426096 kubelet[4009]: W1212 18:21:41.426018 4009 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:21:41.426096 kubelet[4009]: E1212 18:21:41.426023 4009 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:21:41.426595 kubelet[4009]: E1212 18:21:41.426104 4009 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:21:41.426595 kubelet[4009]: W1212 18:21:41.426109 4009 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:21:41.426595 kubelet[4009]: E1212 18:21:41.426114 4009 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:21:41.426595 kubelet[4009]: E1212 18:21:41.426591 4009 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:21:41.426714 kubelet[4009]: W1212 18:21:41.426598 4009 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:21:41.426714 kubelet[4009]: E1212 18:21:41.426607 4009 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:21:41.427095 kubelet[4009]: E1212 18:21:41.427081 4009 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:21:41.427095 kubelet[4009]: W1212 18:21:41.427091 4009 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:21:41.427162 kubelet[4009]: E1212 18:21:41.427101 4009 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:21:41.427470 kubelet[4009]: E1212 18:21:41.427457 4009 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:21:41.427470 kubelet[4009]: W1212 18:21:41.427470 4009 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:21:41.427559 kubelet[4009]: E1212 18:21:41.427479 4009 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:21:41.427802 kubelet[4009]: E1212 18:21:41.427773 4009 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:21:41.427802 kubelet[4009]: W1212 18:21:41.427784 4009 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:21:41.427802 kubelet[4009]: E1212 18:21:41.427793 4009 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:21:41.431755 kubelet[4009]: E1212 18:21:41.431738 4009 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:21:41.431755 kubelet[4009]: W1212 18:21:41.431754 4009 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:21:41.431867 kubelet[4009]: E1212 18:21:41.431768 4009 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:21:41.432159 kubelet[4009]: E1212 18:21:41.432143 4009 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:21:41.432159 kubelet[4009]: W1212 18:21:41.432156 4009 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:21:41.432229 kubelet[4009]: E1212 18:21:41.432174 4009 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:21:41.432667 kubelet[4009]: E1212 18:21:41.432654 4009 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:21:41.432667 kubelet[4009]: W1212 18:21:41.432666 4009 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:21:41.432667 kubelet[4009]: E1212 18:21:41.432678 4009 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:21:41.432846 kubelet[4009]: E1212 18:21:41.432837 4009 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:21:41.432874 kubelet[4009]: W1212 18:21:41.432846 4009 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:21:41.432874 kubelet[4009]: E1212 18:21:41.432854 4009 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:21:42.212554 containerd[2529]: time="2025-12-12T18:21:42.212080721Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:21:42.214792 containerd[2529]: time="2025-12-12T18:21:42.214757172Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=0" Dec 12 18:21:42.220111 containerd[2529]: time="2025-12-12T18:21:42.218510102Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:21:42.222999 containerd[2529]: time="2025-12-12T18:21:42.222967728Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:21:42.224120 containerd[2529]: time="2025-12-12T18:21:42.223709526Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.256037577s" Dec 12 18:21:42.224120 containerd[2529]: time="2025-12-12T18:21:42.223743810Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Dec 12 18:21:42.232906 containerd[2529]: time="2025-12-12T18:21:42.232873320Z" level=info msg="CreateContainer within sandbox \"fdae9cc5df35d83ac0b9fc1ed253806f6c0ed36cf6700c5a096349d3b6dcf459\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Dec 12 18:21:42.253075 containerd[2529]: time="2025-12-12T18:21:42.253045191Z" level=info msg="Container 7e3f3083f86d1822240636ef2a56d03c364148689e8e9cad2d2302c265d3e068: CDI 
devices from CRI Config.CDIDevices: []" Dec 12 18:21:42.258895 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3277469205.mount: Deactivated successfully. Dec 12 18:21:42.280298 containerd[2529]: time="2025-12-12T18:21:42.280270505Z" level=info msg="CreateContainer within sandbox \"fdae9cc5df35d83ac0b9fc1ed253806f6c0ed36cf6700c5a096349d3b6dcf459\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"7e3f3083f86d1822240636ef2a56d03c364148689e8e9cad2d2302c265d3e068\"" Dec 12 18:21:42.281304 containerd[2529]: time="2025-12-12T18:21:42.281111131Z" level=info msg="StartContainer for \"7e3f3083f86d1822240636ef2a56d03c364148689e8e9cad2d2302c265d3e068\"" Dec 12 18:21:42.283148 containerd[2529]: time="2025-12-12T18:21:42.283100750Z" level=info msg="connecting to shim 7e3f3083f86d1822240636ef2a56d03c364148689e8e9cad2d2302c265d3e068" address="unix:///run/containerd/s/f7d48c5c8b13d63ca3c73fb871c797807b67859f054bc198c44edbdbc9fd3299" protocol=ttrpc version=3 Dec 12 18:21:42.310831 systemd[1]: Started cri-containerd-7e3f3083f86d1822240636ef2a56d03c364148689e8e9cad2d2302c265d3e068.scope - libcontainer container 7e3f3083f86d1822240636ef2a56d03c364148689e8e9cad2d2302c265d3e068. Dec 12 18:21:42.356000 audit: BPF prog-id=190 op=LOAD Dec 12 18:21:42.356000 audit[4669]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4487 pid=4669 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:42.356000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3765336633303833663836643138323232343036333665663261353664 Dec 12 18:21:42.356000 audit: BPF prog-id=191 op=LOAD Dec 12 18:21:42.356000 audit[4669]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=4487 pid=4669 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:42.356000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3765336633303833663836643138323232343036333665663261353664 Dec 12 18:21:42.356000 audit: BPF prog-id=191 op=UNLOAD Dec 12 18:21:42.356000 audit[4669]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4487 pid=4669 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:42.356000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3765336633303833663836643138323232343036333665663261353664 Dec 12 18:21:42.356000 audit: BPF prog-id=190 op=UNLOAD Dec 12 18:21:42.356000 audit[4669]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4487 pid=4669 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:42.356000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3765336633303833663836643138323232343036333665663261353664 Dec 12 18:21:42.356000 audit: BPF prog-id=192 op=LOAD Dec 12 18:21:42.356000 audit[4669]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=4487 pid=4669 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:42.356000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3765336633303833663836643138323232343036333665663261353664 Dec 12 18:21:42.382630 containerd[2529]: time="2025-12-12T18:21:42.382585654Z" level=info msg="StartContainer for \"7e3f3083f86d1822240636ef2a56d03c364148689e8e9cad2d2302c265d3e068\" returns successfully" Dec 12 18:21:42.386194 systemd[1]: cri-containerd-7e3f3083f86d1822240636ef2a56d03c364148689e8e9cad2d2302c265d3e068.scope: Deactivated successfully. Dec 12 18:21:42.389633 containerd[2529]: time="2025-12-12T18:21:42.389602477Z" level=info msg="received container exit event container_id:\"7e3f3083f86d1822240636ef2a56d03c364148689e8e9cad2d2302c265d3e068\" id:\"7e3f3083f86d1822240636ef2a56d03c364148689e8e9cad2d2302c265d3e068\" pid:4681 exited_at:{seconds:1765563702 nanos:389265537}" Dec 12 18:21:42.390000 audit: BPF prog-id=192 op=UNLOAD Dec 12 18:21:42.398539 kubelet[4009]: I1212 18:21:42.398261 4009 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 12 18:21:42.416901 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7e3f3083f86d1822240636ef2a56d03c364148689e8e9cad2d2302c265d3e068-rootfs.mount: Deactivated successfully. 
Dec 12 18:21:42.419029 kubelet[4009]: I1212 18:21:42.418925 4009 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6b5c8d6c49-skhvl" podStartSLOduration=2.138279522 podStartE2EDuration="5.418907562s" podCreationTimestamp="2025-12-12 18:21:37 +0000 UTC" firstStartedPulling="2025-12-12 18:21:37.686812806 +0000 UTC m=+17.532696449" lastFinishedPulling="2025-12-12 18:21:40.967440841 +0000 UTC m=+20.813324489" observedRunningTime="2025-12-12 18:21:41.447261172 +0000 UTC m=+21.293144820" watchObservedRunningTime="2025-12-12 18:21:42.418907562 +0000 UTC m=+22.264791204" Dec 12 18:21:43.310473 kubelet[4009]: E1212 18:21:43.310431 4009 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bq297" podUID="10291b6f-a9ca-4c45-b211-06a17f4d693f" Dec 12 18:21:45.310330 kubelet[4009]: E1212 18:21:45.310280 4009 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bq297" podUID="10291b6f-a9ca-4c45-b211-06a17f4d693f" Dec 12 18:21:45.405745 containerd[2529]: time="2025-12-12T18:21:45.405704764Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Dec 12 18:21:47.310215 kubelet[4009]: E1212 18:21:47.310145 4009 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bq297" podUID="10291b6f-a9ca-4c45-b211-06a17f4d693f" Dec 12 18:21:48.956763 containerd[2529]: time="2025-12-12T18:21:48.956716672Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:21:48.959214 containerd[2529]: time="2025-12-12T18:21:48.959176363Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70442291" Dec 12 18:21:48.962059 containerd[2529]: time="2025-12-12T18:21:48.961762891Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:21:48.965182 containerd[2529]: time="2025-12-12T18:21:48.965154236Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:21:48.965606 containerd[2529]: time="2025-12-12T18:21:48.965580025Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 3.559831157s" Dec 12 18:21:48.965665 containerd[2529]: time="2025-12-12T18:21:48.965610301Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Dec 12 
18:21:48.971884 containerd[2529]: time="2025-12-12T18:21:48.971847180Z" level=info msg="CreateContainer within sandbox \"fdae9cc5df35d83ac0b9fc1ed253806f6c0ed36cf6700c5a096349d3b6dcf459\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Dec 12 18:21:49.002379 containerd[2529]: time="2025-12-12T18:21:49.001648518Z" level=info msg="Container a7c41bd23f62f8ca55302bc023f23df2bc90602e0606a8ae3cd15173bdc8ce1a: CDI devices from CRI Config.CDIDevices: []" Dec 12 18:21:49.016635 containerd[2529]: time="2025-12-12T18:21:49.016609013Z" level=info msg="CreateContainer within sandbox \"fdae9cc5df35d83ac0b9fc1ed253806f6c0ed36cf6700c5a096349d3b6dcf459\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"a7c41bd23f62f8ca55302bc023f23df2bc90602e0606a8ae3cd15173bdc8ce1a\"" Dec 12 18:21:49.017270 containerd[2529]: time="2025-12-12T18:21:49.017245847Z" level=info msg="StartContainer for \"a7c41bd23f62f8ca55302bc023f23df2bc90602e0606a8ae3cd15173bdc8ce1a\"" Dec 12 18:21:49.019000 containerd[2529]: time="2025-12-12T18:21:49.018973829Z" level=info msg="connecting to shim a7c41bd23f62f8ca55302bc023f23df2bc90602e0606a8ae3cd15173bdc8ce1a" address="unix:///run/containerd/s/f7d48c5c8b13d63ca3c73fb871c797807b67859f054bc198c44edbdbc9fd3299" protocol=ttrpc version=3 Dec 12 18:21:49.041772 systemd[1]: Started cri-containerd-a7c41bd23f62f8ca55302bc023f23df2bc90602e0606a8ae3cd15173bdc8ce1a.scope - libcontainer container a7c41bd23f62f8ca55302bc023f23df2bc90602e0606a8ae3cd15173bdc8ce1a. Dec 12 18:21:49.110262 kernel: kauditd_printk_skb: 78 callbacks suppressed Dec 12 18:21:49.110352 kernel: audit: type=1334 audit(1765563709.107:593): prog-id=193 op=LOAD Dec 12 18:21:49.107000 audit: BPF prog-id=193 op=LOAD Dec 12 18:21:49.116577 kernel: audit: type=1300 audit(1765563709.107:593): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001b0488 a2=98 a3=0 items=0 ppid=4487 pid=4725 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:49.107000 audit[4725]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001b0488 a2=98 a3=0 items=0 ppid=4487 pid=4725 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:49.107000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6137633431626432336636326638636135353330326263303233663233 Dec 12 18:21:49.119881 kernel: audit: type=1327 audit(1765563709.107:593): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6137633431626432336636326638636135353330326263303233663233 Dec 12 18:21:49.107000 audit: BPF prog-id=194 op=LOAD Dec 12 18:21:49.122224 kernel: audit: type=1334 audit(1765563709.107:594): prog-id=194 op=LOAD Dec 12 18:21:49.107000 audit[4725]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001b0218 a2=98 a3=0 items=0 ppid=4487 pid=4725 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 
18:21:49.126151 kernel: audit: type=1300 audit(1765563709.107:594): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001b0218 a2=98 a3=0 items=0 ppid=4487 pid=4725 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:49.107000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6137633431626432336636326638636135353330326263303233663233 Dec 12 18:21:49.107000 audit: BPF prog-id=194 op=UNLOAD Dec 12 18:21:49.135486 kernel: audit: type=1327 audit(1765563709.107:594): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6137633431626432336636326638636135353330326263303233663233 Dec 12 18:21:49.135541 kernel: audit: type=1334 audit(1765563709.107:595): prog-id=194 op=UNLOAD Dec 12 18:21:49.107000 audit[4725]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4487 pid=4725 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:49.139063 kernel: audit: type=1300 audit(1765563709.107:595): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4487 pid=4725 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:49.107000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6137633431626432336636326638636135353330326263303233663233 Dec 12 18:21:49.144562 kernel: audit: type=1327 audit(1765563709.107:595): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6137633431626432336636326638636135353330326263303233663233 Dec 12 18:21:49.107000 audit: BPF prog-id=193 op=UNLOAD Dec 12 18:21:49.148777 kernel: audit: type=1334 audit(1765563709.107:596): prog-id=193 op=UNLOAD Dec 12 18:21:49.107000 audit[4725]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4487 pid=4725 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:49.107000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6137633431626432336636326638636135353330326263303233663233 Dec 12 18:21:49.107000 audit: BPF prog-id=195 op=LOAD Dec 12 18:21:49.107000 audit[4725]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001b06e8 a2=98 a3=0 items=0 ppid=4487 pid=4725 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:49.107000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6137633431626432336636326638636135353330326263303233663233 Dec 12 18:21:49.162723 containerd[2529]: time="2025-12-12T18:21:49.162685253Z" level=info msg="StartContainer for \"a7c41bd23f62f8ca55302bc023f23df2bc90602e0606a8ae3cd15173bdc8ce1a\" returns successfully" Dec 12 18:21:49.310912 kubelet[4009]: E1212 18:21:49.310771 4009 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bq297" podUID="10291b6f-a9ca-4c45-b211-06a17f4d693f" Dec 12 18:21:50.267018 containerd[2529]: time="2025-12-12T18:21:50.266969135Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 12 18:21:50.268904 systemd[1]: cri-containerd-a7c41bd23f62f8ca55302bc023f23df2bc90602e0606a8ae3cd15173bdc8ce1a.scope: Deactivated successfully. Dec 12 18:21:50.269217 systemd[1]: cri-containerd-a7c41bd23f62f8ca55302bc023f23df2bc90602e0606a8ae3cd15173bdc8ce1a.scope: Consumed 407ms CPU time, 193.3M memory peak, 171.3M written to disk. Dec 12 18:21:50.270988 containerd[2529]: time="2025-12-12T18:21:50.270947180Z" level=info msg="received container exit event container_id:\"a7c41bd23f62f8ca55302bc023f23df2bc90602e0606a8ae3cd15173bdc8ce1a\" id:\"a7c41bd23f62f8ca55302bc023f23df2bc90602e0606a8ae3cd15173bdc8ce1a\" pid:4738 exited_at:{seconds:1765563710 nanos:270734136}" Dec 12 18:21:50.270000 audit: BPF prog-id=195 op=UNLOAD Dec 12 18:21:50.280830 kubelet[4009]: I1212 18:21:50.280793 4009 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Dec 12 18:21:50.298776 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a7c41bd23f62f8ca55302bc023f23df2bc90602e0606a8ae3cd15173bdc8ce1a-rootfs.mount: Deactivated successfully. Dec 12 18:21:50.378678 systemd[1]: Created slice kubepods-burstable-pod793be58b_966e_4ae5_98e0_dcfd87576ade.slice - libcontainer container kubepods-burstable-pod793be58b_966e_4ae5_98e0_dcfd87576ade.slice. 
Dec 12 18:21:50.483994 kubelet[4009]: I1212 18:21:50.483956 4009 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/793be58b-966e-4ae5-98e0-dcfd87576ade-config-volume\") pod \"coredns-674b8bbfcf-qszww\" (UID: \"793be58b-966e-4ae5-98e0-dcfd87576ade\") " pod="kube-system/coredns-674b8bbfcf-qszww" Dec 12 18:21:50.483994 kubelet[4009]: I1212 18:21:50.483995 4009 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r2dmq\" (UniqueName: \"kubernetes.io/projected/793be58b-966e-4ae5-98e0-dcfd87576ade-kube-api-access-r2dmq\") pod \"coredns-674b8bbfcf-qszww\" (UID: \"793be58b-966e-4ae5-98e0-dcfd87576ade\") " pod="kube-system/coredns-674b8bbfcf-qszww" Dec 12 18:21:50.683165 containerd[2529]: time="2025-12-12T18:21:50.683119223Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-qszww,Uid:793be58b-966e-4ae5-98e0-dcfd87576ade,Namespace:kube-system,Attempt:0,}" Dec 12 18:21:50.685627 kubelet[4009]: I1212 18:21:50.685600 4009 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cdef70f5-08e1-4939-ba2b-d4a667c25459-tigera-ca-bundle\") pod \"calico-kube-controllers-5c595dfbb8-snvpt\" (UID: \"cdef70f5-08e1-4939-ba2b-d4a667c25459\") " pod="calico-system/calico-kube-controllers-5c595dfbb8-snvpt" Dec 12 18:21:50.685732 kubelet[4009]: I1212 18:21:50.685660 4009 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pbzj2\" (UniqueName: \"kubernetes.io/projected/cdef70f5-08e1-4939-ba2b-d4a667c25459-kube-api-access-pbzj2\") pod \"calico-kube-controllers-5c595dfbb8-snvpt\" (UID: \"cdef70f5-08e1-4939-ba2b-d4a667c25459\") " pod="calico-system/calico-kube-controllers-5c595dfbb8-snvpt" Dec 12 18:21:50.724323 systemd[1]: Created slice kubepods-besteffort-podcdef70f5_08e1_4939_ba2b_d4a667c25459.slice - libcontainer container kubepods-besteffort-podcdef70f5_08e1_4939_ba2b_d4a667c25459.slice. Dec 12 18:21:50.786969 kubelet[4009]: I1212 18:21:50.786873 4009 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m79zj\" (UniqueName: \"kubernetes.io/projected/c72f9f09-ad62-40d6-8632-b537f3032703-kube-api-access-m79zj\") pod \"calico-apiserver-74c7564cb4-4f4mj\" (UID: \"c72f9f09-ad62-40d6-8632-b537f3032703\") " pod="calico-apiserver/calico-apiserver-74c7564cb4-4f4mj" Dec 12 18:21:50.786969 kubelet[4009]: I1212 18:21:50.786926 4009 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/c72f9f09-ad62-40d6-8632-b537f3032703-calico-apiserver-certs\") pod \"calico-apiserver-74c7564cb4-4f4mj\" (UID: \"c72f9f09-ad62-40d6-8632-b537f3032703\") " pod="calico-apiserver/calico-apiserver-74c7564cb4-4f4mj" Dec 12 18:21:50.888103 kubelet[4009]: E1212 18:21:50.888059 4009 secret.go:189] Couldn't get secret calico-apiserver/calico-apiserver-certs: object "calico-apiserver"/"calico-apiserver-certs" not registered Dec 12 18:21:50.888247 kubelet[4009]: E1212 18:21:50.888142 4009 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c72f9f09-ad62-40d6-8632-b537f3032703-calico-apiserver-certs podName:c72f9f09-ad62-40d6-8632-b537f3032703 nodeName:}" failed. 
No retries permitted until 2025-12-12 18:21:51.388120029 +0000 UTC m=+31.234003678 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/c72f9f09-ad62-40d6-8632-b537f3032703-calico-apiserver-certs") pod "calico-apiserver-74c7564cb4-4f4mj" (UID: "c72f9f09-ad62-40d6-8632-b537f3032703") : object "calico-apiserver"/"calico-apiserver-certs" not registered Dec 12 18:21:50.893310 kubelet[4009]: E1212 18:21:50.893283 4009 projected.go:289] Couldn't get configMap calico-apiserver/kube-root-ca.crt: object "calico-apiserver"/"kube-root-ca.crt" not registered Dec 12 18:21:50.893310 kubelet[4009]: E1212 18:21:50.893308 4009 projected.go:194] Error preparing data for projected volume kube-api-access-m79zj for pod calico-apiserver/calico-apiserver-74c7564cb4-4f4mj: object "calico-apiserver"/"kube-root-ca.crt" not registered Dec 12 18:21:50.893458 kubelet[4009]: E1212 18:21:50.893383 4009 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c72f9f09-ad62-40d6-8632-b537f3032703-kube-api-access-m79zj podName:c72f9f09-ad62-40d6-8632-b537f3032703 nodeName:}" failed. No retries permitted until 2025-12-12 18:21:51.393366538 +0000 UTC m=+31.239250179 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-m79zj" (UniqueName: "kubernetes.io/projected/c72f9f09-ad62-40d6-8632-b537f3032703-kube-api-access-m79zj") pod "calico-apiserver-74c7564cb4-4f4mj" (UID: "c72f9f09-ad62-40d6-8632-b537f3032703") : object "calico-apiserver"/"kube-root-ca.crt" not registered Dec 12 18:21:51.024914 systemd[1]: Created slice kubepods-besteffort-podc72f9f09_ad62_40d6_8632_b537f3032703.slice - libcontainer container kubepods-besteffort-podc72f9f09_ad62_40d6_8632_b537f3032703.slice. Dec 12 18:21:51.027901 containerd[2529]: time="2025-12-12T18:21:51.027820043Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5c595dfbb8-snvpt,Uid:cdef70f5-08e1-4939-ba2b-d4a667c25459,Namespace:calico-system,Attempt:0,}" Dec 12 18:21:51.088794 kubelet[4009]: I1212 18:21:51.088750 4009 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/35e2c86e-df5d-42ff-b822-6a19153c7bf9-whisker-ca-bundle\") pod \"whisker-75c498d4d6-hhrrs\" (UID: \"35e2c86e-df5d-42ff-b822-6a19153c7bf9\") " pod="calico-system/whisker-75c498d4d6-hhrrs" Dec 12 18:21:51.088922 kubelet[4009]: I1212 18:21:51.088804 4009 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/35e2c86e-df5d-42ff-b822-6a19153c7bf9-whisker-backend-key-pair\") pod \"whisker-75c498d4d6-hhrrs\" (UID: \"35e2c86e-df5d-42ff-b822-6a19153c7bf9\") " pod="calico-system/whisker-75c498d4d6-hhrrs" Dec 12 18:21:51.088922 kubelet[4009]: I1212 18:21:51.088827 4009 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jkt6c\" (UniqueName: \"kubernetes.io/projected/35e2c86e-df5d-42ff-b822-6a19153c7bf9-kube-api-access-jkt6c\") pod \"whisker-75c498d4d6-hhrrs\" (UID: \"35e2c86e-df5d-42ff-b822-6a19153c7bf9\") " pod="calico-system/whisker-75c498d4d6-hhrrs" Dec 12 18:21:51.204034 systemd[1]: Created slice kubepods-besteffort-pod35e2c86e_df5d_42ff_b822_6a19153c7bf9.slice - libcontainer container kubepods-besteffort-pod35e2c86e_df5d_42ff_b822_6a19153c7bf9.slice. 
Dec 12 18:21:51.216883 systemd[1]: Created slice kubepods-burstable-pod3cbfe13e_d642_4617_a3ea_e4339732c6c2.slice - libcontainer container kubepods-burstable-pod3cbfe13e_d642_4617_a3ea_e4339732c6c2.slice. Dec 12 18:21:51.235997 systemd[1]: Created slice kubepods-besteffort-pod588d2c87_beaf_4e83_ba6f_8e3f0d453589.slice - libcontainer container kubepods-besteffort-pod588d2c87_beaf_4e83_ba6f_8e3f0d453589.slice. Dec 12 18:21:51.244200 systemd[1]: Created slice kubepods-besteffort-pod985aeb6e_874b_4206_ac9b_71fb5aaf32cf.slice - libcontainer container kubepods-besteffort-pod985aeb6e_874b_4206_ac9b_71fb5aaf32cf.slice. Dec 12 18:21:51.291091 kubelet[4009]: I1212 18:21:51.291007 4009 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2w4t\" (UniqueName: \"kubernetes.io/projected/588d2c87-beaf-4e83-ba6f-8e3f0d453589-kube-api-access-f2w4t\") pod \"calico-apiserver-74c7564cb4-hnnls\" (UID: \"588d2c87-beaf-4e83-ba6f-8e3f0d453589\") " pod="calico-apiserver/calico-apiserver-74c7564cb4-hnnls" Dec 12 18:21:51.291919 kubelet[4009]: I1212 18:21:51.291656 4009 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3cbfe13e-d642-4617-a3ea-e4339732c6c2-config-volume\") pod \"coredns-674b8bbfcf-hkwbv\" (UID: \"3cbfe13e-d642-4617-a3ea-e4339732c6c2\") " pod="kube-system/coredns-674b8bbfcf-hkwbv" Dec 12 18:21:51.291919 kubelet[4009]: I1212 18:21:51.291871 4009 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/588d2c87-beaf-4e83-ba6f-8e3f0d453589-calico-apiserver-certs\") pod \"calico-apiserver-74c7564cb4-hnnls\" (UID: \"588d2c87-beaf-4e83-ba6f-8e3f0d453589\") " pod="calico-apiserver/calico-apiserver-74c7564cb4-hnnls" Dec 12 18:21:51.293114 kubelet[4009]: I1212 18:21:51.291902 4009 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cs47m\" (UniqueName: \"kubernetes.io/projected/985aeb6e-874b-4206-ac9b-71fb5aaf32cf-kube-api-access-cs47m\") pod \"goldmane-666569f655-4rvrk\" (UID: \"985aeb6e-874b-4206-ac9b-71fb5aaf32cf\") " pod="calico-system/goldmane-666569f655-4rvrk" Dec 12 18:21:51.293114 kubelet[4009]: I1212 18:21:51.292814 4009 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqctg\" (UniqueName: \"kubernetes.io/projected/3cbfe13e-d642-4617-a3ea-e4339732c6c2-kube-api-access-hqctg\") pod \"coredns-674b8bbfcf-hkwbv\" (UID: \"3cbfe13e-d642-4617-a3ea-e4339732c6c2\") " pod="kube-system/coredns-674b8bbfcf-hkwbv" Dec 12 18:21:51.293114 kubelet[4009]: I1212 18:21:51.292838 4009 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/985aeb6e-874b-4206-ac9b-71fb5aaf32cf-config\") pod \"goldmane-666569f655-4rvrk\" (UID: \"985aeb6e-874b-4206-ac9b-71fb5aaf32cf\") " pod="calico-system/goldmane-666569f655-4rvrk" Dec 12 18:21:51.293114 kubelet[4009]: I1212 18:21:51.292862 4009 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/985aeb6e-874b-4206-ac9b-71fb5aaf32cf-goldmane-ca-bundle\") pod \"goldmane-666569f655-4rvrk\" (UID: \"985aeb6e-874b-4206-ac9b-71fb5aaf32cf\") " pod="calico-system/goldmane-666569f655-4rvrk" Dec 12 18:21:51.293114 kubelet[4009]: 
I1212 18:21:51.292880 4009 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/985aeb6e-874b-4206-ac9b-71fb5aaf32cf-goldmane-key-pair\") pod \"goldmane-666569f655-4rvrk\" (UID: \"985aeb6e-874b-4206-ac9b-71fb5aaf32cf\") " pod="calico-system/goldmane-666569f655-4rvrk" Dec 12 18:21:51.311552 containerd[2529]: time="2025-12-12T18:21:51.311490058Z" level=error msg="Failed to destroy network for sandbox \"8eb7c4f1103ac83bdefed8b45847e6adffc08e569223d0382b05a7a8b79eb894\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:21:51.314432 systemd[1]: run-netns-cni\x2d4e0d0656\x2d6637\x2d5fef\x2df761\x2d5bb1f3046579.mount: Deactivated successfully. Dec 12 18:21:51.317419 containerd[2529]: time="2025-12-12T18:21:51.317370263Z" level=error msg="Failed to destroy network for sandbox \"b67a1da99462d2d211e4e238444b8aba33a686e140e493a90209360386b74f9e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:21:51.320655 systemd[1]: run-netns-cni\x2d1c8d994d\x2d1ce7\x2dad9f\x2d1462\x2df51831d2730e.mount: Deactivated successfully. Dec 12 18:21:51.321795 containerd[2529]: time="2025-12-12T18:21:51.321754213Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-qszww,Uid:793be58b-966e-4ae5-98e0-dcfd87576ade,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8eb7c4f1103ac83bdefed8b45847e6adffc08e569223d0382b05a7a8b79eb894\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:21:51.322379 systemd[1]: Created slice kubepods-besteffort-pod10291b6f_a9ca_4c45_b211_06a17f4d693f.slice - libcontainer container kubepods-besteffort-pod10291b6f_a9ca_4c45_b211_06a17f4d693f.slice. 
Dec 12 18:21:51.322911 kubelet[4009]: E1212 18:21:51.322583 4009 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8eb7c4f1103ac83bdefed8b45847e6adffc08e569223d0382b05a7a8b79eb894\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:21:51.322911 kubelet[4009]: E1212 18:21:51.322630 4009 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8eb7c4f1103ac83bdefed8b45847e6adffc08e569223d0382b05a7a8b79eb894\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-qszww" Dec 12 18:21:51.322911 kubelet[4009]: E1212 18:21:51.322651 4009 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8eb7c4f1103ac83bdefed8b45847e6adffc08e569223d0382b05a7a8b79eb894\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-qszww" Dec 12 18:21:51.323373 kubelet[4009]: E1212 18:21:51.322698 4009 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-qszww_kube-system(793be58b-966e-4ae5-98e0-dcfd87576ade)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-qszww_kube-system(793be58b-966e-4ae5-98e0-dcfd87576ade)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8eb7c4f1103ac83bdefed8b45847e6adffc08e569223d0382b05a7a8b79eb894\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-qszww" podUID="793be58b-966e-4ae5-98e0-dcfd87576ade" Dec 12 18:21:51.326718 containerd[2529]: time="2025-12-12T18:21:51.326692723Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bq297,Uid:10291b6f-a9ca-4c45-b211-06a17f4d693f,Namespace:calico-system,Attempt:0,}" Dec 12 18:21:51.327512 containerd[2529]: time="2025-12-12T18:21:51.327432269Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5c595dfbb8-snvpt,Uid:cdef70f5-08e1-4939-ba2b-d4a667c25459,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b67a1da99462d2d211e4e238444b8aba33a686e140e493a90209360386b74f9e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:21:51.327642 kubelet[4009]: E1212 18:21:51.327603 4009 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b67a1da99462d2d211e4e238444b8aba33a686e140e493a90209360386b74f9e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:21:51.327703 kubelet[4009]: E1212 18:21:51.327658 
4009 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b67a1da99462d2d211e4e238444b8aba33a686e140e493a90209360386b74f9e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5c595dfbb8-snvpt" Dec 12 18:21:51.327703 kubelet[4009]: E1212 18:21:51.327680 4009 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b67a1da99462d2d211e4e238444b8aba33a686e140e493a90209360386b74f9e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5c595dfbb8-snvpt" Dec 12 18:21:51.327759 kubelet[4009]: E1212 18:21:51.327727 4009 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5c595dfbb8-snvpt_calico-system(cdef70f5-08e1-4939-ba2b-d4a667c25459)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5c595dfbb8-snvpt_calico-system(cdef70f5-08e1-4939-ba2b-d4a667c25459)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b67a1da99462d2d211e4e238444b8aba33a686e140e493a90209360386b74f9e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5c595dfbb8-snvpt" podUID="cdef70f5-08e1-4939-ba2b-d4a667c25459" Dec 12 18:21:51.375108 containerd[2529]: time="2025-12-12T18:21:51.375068186Z" level=error msg="Failed to destroy network for sandbox \"89759a2ed98ac7b8e418c5fb102095f2a75b93ead22673ec8d11c4e71b265f2a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:21:51.377100 systemd[1]: run-netns-cni\x2d25b97f56\x2d711e\x2d8e94\x2da0c3\x2decaa3cd3f6d2.mount: Deactivated successfully. 
Dec 12 18:21:51.382405 containerd[2529]: time="2025-12-12T18:21:51.382372793Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bq297,Uid:10291b6f-a9ca-4c45-b211-06a17f4d693f,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"89759a2ed98ac7b8e418c5fb102095f2a75b93ead22673ec8d11c4e71b265f2a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:21:51.382591 kubelet[4009]: E1212 18:21:51.382563 4009 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"89759a2ed98ac7b8e418c5fb102095f2a75b93ead22673ec8d11c4e71b265f2a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:21:51.382650 kubelet[4009]: E1212 18:21:51.382609 4009 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"89759a2ed98ac7b8e418c5fb102095f2a75b93ead22673ec8d11c4e71b265f2a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-bq297" Dec 12 18:21:51.382650 kubelet[4009]: E1212 18:21:51.382630 4009 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"89759a2ed98ac7b8e418c5fb102095f2a75b93ead22673ec8d11c4e71b265f2a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-bq297" Dec 12 18:21:51.382709 kubelet[4009]: E1212 18:21:51.382676 4009 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-bq297_calico-system(10291b6f-a9ca-4c45-b211-06a17f4d693f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-bq297_calico-system(10291b6f-a9ca-4c45-b211-06a17f4d693f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"89759a2ed98ac7b8e418c5fb102095f2a75b93ead22673ec8d11c4e71b265f2a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-bq297" podUID="10291b6f-a9ca-4c45-b211-06a17f4d693f" Dec 12 18:21:51.421204 containerd[2529]: time="2025-12-12T18:21:51.421178619Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Dec 12 18:21:51.511500 containerd[2529]: time="2025-12-12T18:21:51.511462014Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-75c498d4d6-hhrrs,Uid:35e2c86e-df5d-42ff-b822-6a19153c7bf9,Namespace:calico-system,Attempt:0,}" Dec 12 18:21:51.527544 containerd[2529]: time="2025-12-12T18:21:51.526970375Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-hkwbv,Uid:3cbfe13e-d642-4617-a3ea-e4339732c6c2,Namespace:kube-system,Attempt:0,}" Dec 12 18:21:51.555744 containerd[2529]: time="2025-12-12T18:21:51.555585282Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:goldmane-666569f655-4rvrk,Uid:985aeb6e-874b-4206-ac9b-71fb5aaf32cf,Namespace:calico-system,Attempt:0,}" Dec 12 18:21:51.556167 containerd[2529]: time="2025-12-12T18:21:51.556145963Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74c7564cb4-hnnls,Uid:588d2c87-beaf-4e83-ba6f-8e3f0d453589,Namespace:calico-apiserver,Attempt:0,}" Dec 12 18:21:51.593739 containerd[2529]: time="2025-12-12T18:21:51.593687545Z" level=error msg="Failed to destroy network for sandbox \"7d986538aadaeb9de9c07e1ca8a4b72b3124367fb8daaea94b368843b4727f32\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:21:51.601242 containerd[2529]: time="2025-12-12T18:21:51.601203323Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-75c498d4d6-hhrrs,Uid:35e2c86e-df5d-42ff-b822-6a19153c7bf9,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7d986538aadaeb9de9c07e1ca8a4b72b3124367fb8daaea94b368843b4727f32\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:21:51.603426 kubelet[4009]: E1212 18:21:51.601582 4009 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7d986538aadaeb9de9c07e1ca8a4b72b3124367fb8daaea94b368843b4727f32\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:21:51.603426 kubelet[4009]: E1212 18:21:51.603078 4009 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7d986538aadaeb9de9c07e1ca8a4b72b3124367fb8daaea94b368843b4727f32\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-75c498d4d6-hhrrs" Dec 12 18:21:51.603426 kubelet[4009]: E1212 18:21:51.603109 4009 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7d986538aadaeb9de9c07e1ca8a4b72b3124367fb8daaea94b368843b4727f32\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-75c498d4d6-hhrrs" Dec 12 18:21:51.603812 kubelet[4009]: E1212 18:21:51.603170 4009 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-75c498d4d6-hhrrs_calico-system(35e2c86e-df5d-42ff-b822-6a19153c7bf9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-75c498d4d6-hhrrs_calico-system(35e2c86e-df5d-42ff-b822-6a19153c7bf9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7d986538aadaeb9de9c07e1ca8a4b72b3124367fb8daaea94b368843b4727f32\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-75c498d4d6-hhrrs" 
podUID="35e2c86e-df5d-42ff-b822-6a19153c7bf9" Dec 12 18:21:51.629940 containerd[2529]: time="2025-12-12T18:21:51.629915661Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74c7564cb4-4f4mj,Uid:c72f9f09-ad62-40d6-8632-b537f3032703,Namespace:calico-apiserver,Attempt:0,}" Dec 12 18:21:51.636882 containerd[2529]: time="2025-12-12T18:21:51.636845331Z" level=error msg="Failed to destroy network for sandbox \"164ac0465cc4c871fa867ad93d26f930e05cced3bb20e37771e21b8eeae86bcf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:21:51.650665 containerd[2529]: time="2025-12-12T18:21:51.650626905Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-hkwbv,Uid:3cbfe13e-d642-4617-a3ea-e4339732c6c2,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"164ac0465cc4c871fa867ad93d26f930e05cced3bb20e37771e21b8eeae86bcf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:21:51.651626 kubelet[4009]: E1212 18:21:51.650886 4009 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"164ac0465cc4c871fa867ad93d26f930e05cced3bb20e37771e21b8eeae86bcf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:21:51.651626 kubelet[4009]: E1212 18:21:51.650936 4009 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"164ac0465cc4c871fa867ad93d26f930e05cced3bb20e37771e21b8eeae86bcf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-hkwbv" Dec 12 18:21:51.651626 kubelet[4009]: E1212 18:21:51.650959 4009 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"164ac0465cc4c871fa867ad93d26f930e05cced3bb20e37771e21b8eeae86bcf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-hkwbv" Dec 12 18:21:51.651769 kubelet[4009]: E1212 18:21:51.651017 4009 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-hkwbv_kube-system(3cbfe13e-d642-4617-a3ea-e4339732c6c2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-hkwbv_kube-system(3cbfe13e-d642-4617-a3ea-e4339732c6c2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"164ac0465cc4c871fa867ad93d26f930e05cced3bb20e37771e21b8eeae86bcf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-hkwbv" podUID="3cbfe13e-d642-4617-a3ea-e4339732c6c2" Dec 12 18:21:51.653115 containerd[2529]: 
time="2025-12-12T18:21:51.653057703Z" level=error msg="Failed to destroy network for sandbox \"9aa6c68aca5a57c0b2d23360735d103f86f2787d13ea17abdbfd7b024ef7c6eb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:21:51.655800 containerd[2529]: time="2025-12-12T18:21:51.655773306Z" level=error msg="Failed to destroy network for sandbox \"2b90dc3cabff190e20fa8d2e46e405687ca0c27a76b4ff9b3f07ccdedb0cdc6b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:21:51.658715 containerd[2529]: time="2025-12-12T18:21:51.658682461Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74c7564cb4-hnnls,Uid:588d2c87-beaf-4e83-ba6f-8e3f0d453589,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9aa6c68aca5a57c0b2d23360735d103f86f2787d13ea17abdbfd7b024ef7c6eb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:21:51.659335 kubelet[4009]: E1212 18:21:51.659304 4009 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9aa6c68aca5a57c0b2d23360735d103f86f2787d13ea17abdbfd7b024ef7c6eb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:21:51.659456 kubelet[4009]: E1212 18:21:51.659442 4009 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9aa6c68aca5a57c0b2d23360735d103f86f2787d13ea17abdbfd7b024ef7c6eb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-74c7564cb4-hnnls" Dec 12 18:21:51.659561 kubelet[4009]: E1212 18:21:51.659510 4009 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9aa6c68aca5a57c0b2d23360735d103f86f2787d13ea17abdbfd7b024ef7c6eb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-74c7564cb4-hnnls" Dec 12 18:21:51.660511 kubelet[4009]: E1212 18:21:51.659830 4009 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-74c7564cb4-hnnls_calico-apiserver(588d2c87-beaf-4e83-ba6f-8e3f0d453589)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-74c7564cb4-hnnls_calico-apiserver(588d2c87-beaf-4e83-ba6f-8e3f0d453589)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9aa6c68aca5a57c0b2d23360735d103f86f2787d13ea17abdbfd7b024ef7c6eb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-apiserver/calico-apiserver-74c7564cb4-hnnls" podUID="588d2c87-beaf-4e83-ba6f-8e3f0d453589" Dec 12 18:21:51.666307 containerd[2529]: time="2025-12-12T18:21:51.666255883Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-4rvrk,Uid:985aeb6e-874b-4206-ac9b-71fb5aaf32cf,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2b90dc3cabff190e20fa8d2e46e405687ca0c27a76b4ff9b3f07ccdedb0cdc6b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:21:51.666655 kubelet[4009]: E1212 18:21:51.666629 4009 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2b90dc3cabff190e20fa8d2e46e405687ca0c27a76b4ff9b3f07ccdedb0cdc6b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:21:51.666800 kubelet[4009]: E1212 18:21:51.666783 4009 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2b90dc3cabff190e20fa8d2e46e405687ca0c27a76b4ff9b3f07ccdedb0cdc6b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-4rvrk" Dec 12 18:21:51.666886 kubelet[4009]: E1212 18:21:51.666872 4009 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2b90dc3cabff190e20fa8d2e46e405687ca0c27a76b4ff9b3f07ccdedb0cdc6b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-4rvrk" Dec 12 18:21:51.666983 kubelet[4009]: E1212 18:21:51.666964 4009 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-4rvrk_calico-system(985aeb6e-874b-4206-ac9b-71fb5aaf32cf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-4rvrk_calico-system(985aeb6e-874b-4206-ac9b-71fb5aaf32cf)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2b90dc3cabff190e20fa8d2e46e405687ca0c27a76b4ff9b3f07ccdedb0cdc6b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-4rvrk" podUID="985aeb6e-874b-4206-ac9b-71fb5aaf32cf" Dec 12 18:21:51.686846 containerd[2529]: time="2025-12-12T18:21:51.686815212Z" level=error msg="Failed to destroy network for sandbox \"475338de711394fefb17d8ca6fce570c294cb98386d0e515ed00ef283fdf49ad\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:21:51.691940 containerd[2529]: time="2025-12-12T18:21:51.691911061Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-74c7564cb4-4f4mj,Uid:c72f9f09-ad62-40d6-8632-b537f3032703,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"475338de711394fefb17d8ca6fce570c294cb98386d0e515ed00ef283fdf49ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:21:51.692240 kubelet[4009]: E1212 18:21:51.692046 4009 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"475338de711394fefb17d8ca6fce570c294cb98386d0e515ed00ef283fdf49ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:21:51.692240 kubelet[4009]: E1212 18:21:51.692086 4009 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"475338de711394fefb17d8ca6fce570c294cb98386d0e515ed00ef283fdf49ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-74c7564cb4-4f4mj" Dec 12 18:21:51.692240 kubelet[4009]: E1212 18:21:51.692100 4009 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"475338de711394fefb17d8ca6fce570c294cb98386d0e515ed00ef283fdf49ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-74c7564cb4-4f4mj" Dec 12 18:21:51.692340 kubelet[4009]: E1212 18:21:51.692153 4009 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-74c7564cb4-4f4mj_calico-apiserver(c72f9f09-ad62-40d6-8632-b537f3032703)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-74c7564cb4-4f4mj_calico-apiserver(c72f9f09-ad62-40d6-8632-b537f3032703)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"475338de711394fefb17d8ca6fce570c294cb98386d0e515ed00ef283fdf49ad\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-74c7564cb4-4f4mj" podUID="c72f9f09-ad62-40d6-8632-b537f3032703" Dec 12 18:21:52.009806 kubelet[4009]: I1212 18:21:52.009397 4009 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 12 18:21:52.048000 audit[4994]: NETFILTER_CFG table=filter:120 family=2 entries=21 op=nft_register_rule pid=4994 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:21:52.048000 audit[4994]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7fff61ea9130 a2=0 a3=7fff61ea911c items=0 ppid=4115 pid=4994 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:52.048000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:21:52.053000 audit[4994]: NETFILTER_CFG table=nat:121 family=2 entries=19 op=nft_register_chain pid=4994 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:21:52.053000 audit[4994]: SYSCALL arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7fff61ea9130 a2=0 a3=7fff61ea911c items=0 ppid=4115 pid=4994 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:52.053000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:21:56.062632 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount434174258.mount: Deactivated successfully. Dec 12 18:21:56.096202 containerd[2529]: time="2025-12-12T18:21:56.096154272Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:21:56.098490 containerd[2529]: time="2025-12-12T18:21:56.098391514Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156880025" Dec 12 18:21:56.100885 containerd[2529]: time="2025-12-12T18:21:56.100860065Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:21:56.104877 containerd[2529]: time="2025-12-12T18:21:56.104385904Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:21:56.104877 containerd[2529]: time="2025-12-12T18:21:56.104749379Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 4.683535227s" Dec 12 18:21:56.104877 containerd[2529]: time="2025-12-12T18:21:56.104777111Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Dec 12 18:21:56.124410 containerd[2529]: time="2025-12-12T18:21:56.124374905Z" level=info msg="CreateContainer within sandbox \"fdae9cc5df35d83ac0b9fc1ed253806f6c0ed36cf6700c5a096349d3b6dcf459\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Dec 12 18:21:56.154204 containerd[2529]: time="2025-12-12T18:21:56.152642347Z" level=info msg="Container ff2d24ff0fa4b6204086e953ba93c121311ea71a9d636fbd506361b37b81aedb: CDI devices from CRI Config.CDIDevices: []" Dec 12 18:21:56.175051 containerd[2529]: time="2025-12-12T18:21:56.175022048Z" level=info msg="CreateContainer within sandbox \"fdae9cc5df35d83ac0b9fc1ed253806f6c0ed36cf6700c5a096349d3b6dcf459\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"ff2d24ff0fa4b6204086e953ba93c121311ea71a9d636fbd506361b37b81aedb\"" Dec 12 18:21:56.175734 containerd[2529]: time="2025-12-12T18:21:56.175709849Z" level=info msg="StartContainer for 
\"ff2d24ff0fa4b6204086e953ba93c121311ea71a9d636fbd506361b37b81aedb\"" Dec 12 18:21:56.177282 containerd[2529]: time="2025-12-12T18:21:56.177246722Z" level=info msg="connecting to shim ff2d24ff0fa4b6204086e953ba93c121311ea71a9d636fbd506361b37b81aedb" address="unix:///run/containerd/s/f7d48c5c8b13d63ca3c73fb871c797807b67859f054bc198c44edbdbc9fd3299" protocol=ttrpc version=3 Dec 12 18:21:56.195692 systemd[1]: Started cri-containerd-ff2d24ff0fa4b6204086e953ba93c121311ea71a9d636fbd506361b37b81aedb.scope - libcontainer container ff2d24ff0fa4b6204086e953ba93c121311ea71a9d636fbd506361b37b81aedb. Dec 12 18:21:56.252049 kernel: kauditd_printk_skb: 12 callbacks suppressed Dec 12 18:21:56.252242 kernel: audit: type=1334 audit(1765563716.247:601): prog-id=196 op=LOAD Dec 12 18:21:56.247000 audit: BPF prog-id=196 op=LOAD Dec 12 18:21:56.247000 audit[5003]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=4487 pid=5003 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:56.262554 kernel: audit: type=1300 audit(1765563716.247:601): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=4487 pid=5003 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:56.262620 kernel: audit: type=1327 audit(1765563716.247:601): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6666326432346666306661346236323034303836653935336261393363 Dec 12 18:21:56.247000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6666326432346666306661346236323034303836653935336261393363 Dec 12 18:21:56.264076 kernel: audit: type=1334 audit(1765563716.251:602): prog-id=197 op=LOAD Dec 12 18:21:56.251000 audit: BPF prog-id=197 op=LOAD Dec 12 18:21:56.267574 kernel: audit: type=1300 audit(1765563716.251:602): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=4487 pid=5003 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:56.251000 audit[5003]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=4487 pid=5003 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:56.251000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6666326432346666306661346236323034303836653935336261393363 Dec 12 18:21:56.276531 kernel: audit: type=1327 audit(1765563716.251:602): 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6666326432346666306661346236323034303836653935336261393363 Dec 12 18:21:56.251000 audit: BPF prog-id=197 op=UNLOAD Dec 12 18:21:56.283312 kernel: audit: type=1334 audit(1765563716.251:603): prog-id=197 op=UNLOAD Dec 12 18:21:56.283380 kernel: audit: type=1300 audit(1765563716.251:603): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4487 pid=5003 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:56.251000 audit[5003]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4487 pid=5003 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:56.251000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6666326432346666306661346236323034303836653935336261393363 Dec 12 18:21:56.290721 kernel: audit: type=1327 audit(1765563716.251:603): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6666326432346666306661346236323034303836653935336261393363 Dec 12 18:21:56.290780 kernel: audit: type=1334 audit(1765563716.251:604): prog-id=196 op=UNLOAD Dec 12 18:21:56.251000 audit: BPF prog-id=196 op=UNLOAD Dec 12 18:21:56.251000 audit[5003]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4487 pid=5003 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:56.251000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6666326432346666306661346236323034303836653935336261393363 Dec 12 18:21:56.251000 audit: BPF prog-id=198 op=LOAD Dec 12 18:21:56.251000 audit[5003]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 ppid=4487 pid=5003 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:56.251000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6666326432346666306661346236323034303836653935336261393363 Dec 12 18:21:56.301613 containerd[2529]: time="2025-12-12T18:21:56.301583165Z" level=info msg="StartContainer for \"ff2d24ff0fa4b6204086e953ba93c121311ea71a9d636fbd506361b37b81aedb\" returns successfully" Dec 12 18:21:56.527446 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. 
Dec 12 18:21:56.527584 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Dec 12 18:21:56.615139 kubelet[4009]: I1212 18:21:56.614843 4009 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-vnsfd" podStartSLOduration=1.4479087640000001 podStartE2EDuration="19.614822404s" podCreationTimestamp="2025-12-12 18:21:37 +0000 UTC" firstStartedPulling="2025-12-12 18:21:37.938561943 +0000 UTC m=+17.784445586" lastFinishedPulling="2025-12-12 18:21:56.105475575 +0000 UTC m=+35.951359226" observedRunningTime="2025-12-12 18:21:56.449369846 +0000 UTC m=+36.295253521" watchObservedRunningTime="2025-12-12 18:21:56.614822404 +0000 UTC m=+36.460706181" Dec 12 18:21:56.729904 kubelet[4009]: I1212 18:21:56.729872 4009 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkt6c\" (UniqueName: \"kubernetes.io/projected/35e2c86e-df5d-42ff-b822-6a19153c7bf9-kube-api-access-jkt6c\") pod \"35e2c86e-df5d-42ff-b822-6a19153c7bf9\" (UID: \"35e2c86e-df5d-42ff-b822-6a19153c7bf9\") " Dec 12 18:21:56.729904 kubelet[4009]: I1212 18:21:56.729911 4009 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/35e2c86e-df5d-42ff-b822-6a19153c7bf9-whisker-ca-bundle\") pod \"35e2c86e-df5d-42ff-b822-6a19153c7bf9\" (UID: \"35e2c86e-df5d-42ff-b822-6a19153c7bf9\") " Dec 12 18:21:56.730090 kubelet[4009]: I1212 18:21:56.729930 4009 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/35e2c86e-df5d-42ff-b822-6a19153c7bf9-whisker-backend-key-pair\") pod \"35e2c86e-df5d-42ff-b822-6a19153c7bf9\" (UID: \"35e2c86e-df5d-42ff-b822-6a19153c7bf9\") " Dec 12 18:21:56.730798 kubelet[4009]: I1212 18:21:56.730768 4009 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/35e2c86e-df5d-42ff-b822-6a19153c7bf9-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "35e2c86e-df5d-42ff-b822-6a19153c7bf9" (UID: "35e2c86e-df5d-42ff-b822-6a19153c7bf9"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 18:21:56.733905 kubelet[4009]: I1212 18:21:56.733859 4009 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35e2c86e-df5d-42ff-b822-6a19153c7bf9-kube-api-access-jkt6c" (OuterVolumeSpecName: "kube-api-access-jkt6c") pod "35e2c86e-df5d-42ff-b822-6a19153c7bf9" (UID: "35e2c86e-df5d-42ff-b822-6a19153c7bf9"). InnerVolumeSpecName "kube-api-access-jkt6c". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 18:21:56.734133 kubelet[4009]: I1212 18:21:56.734115 4009 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35e2c86e-df5d-42ff-b822-6a19153c7bf9-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "35e2c86e-df5d-42ff-b822-6a19153c7bf9" (UID: "35e2c86e-df5d-42ff-b822-6a19153c7bf9"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 18:21:56.831153 kubelet[4009]: I1212 18:21:56.831044 4009 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/35e2c86e-df5d-42ff-b822-6a19153c7bf9-whisker-ca-bundle\") on node \"ci-4515.1.0-a-53d1559fda\" DevicePath \"\"" Dec 12 18:21:56.831153 kubelet[4009]: I1212 18:21:56.831076 4009 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/35e2c86e-df5d-42ff-b822-6a19153c7bf9-whisker-backend-key-pair\") on node \"ci-4515.1.0-a-53d1559fda\" DevicePath \"\"" Dec 12 18:21:56.831153 kubelet[4009]: I1212 18:21:56.831087 4009 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-jkt6c\" (UniqueName: \"kubernetes.io/projected/35e2c86e-df5d-42ff-b822-6a19153c7bf9-kube-api-access-jkt6c\") on node \"ci-4515.1.0-a-53d1559fda\" DevicePath \"\"" Dec 12 18:21:57.063064 systemd[1]: var-lib-kubelet-pods-35e2c86e\x2ddf5d\x2d42ff\x2db822\x2d6a19153c7bf9-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2djkt6c.mount: Deactivated successfully. Dec 12 18:21:57.063154 systemd[1]: var-lib-kubelet-pods-35e2c86e\x2ddf5d\x2d42ff\x2db822\x2d6a19153c7bf9-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Dec 12 18:21:57.439329 systemd[1]: Removed slice kubepods-besteffort-pod35e2c86e_df5d_42ff_b822_6a19153c7bf9.slice - libcontainer container kubepods-besteffort-pod35e2c86e_df5d_42ff_b822_6a19153c7bf9.slice. Dec 12 18:21:57.503991 systemd[1]: Created slice kubepods-besteffort-pod129d48cc_df60_49a9_8eb1_5bf2a56866a1.slice - libcontainer container kubepods-besteffort-pod129d48cc_df60_49a9_8eb1_5bf2a56866a1.slice. Dec 12 18:21:57.534959 kubelet[4009]: I1212 18:21:57.534834 4009 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/129d48cc-df60-49a9-8eb1-5bf2a56866a1-whisker-backend-key-pair\") pod \"whisker-66465f8f84-gfntv\" (UID: \"129d48cc-df60-49a9-8eb1-5bf2a56866a1\") " pod="calico-system/whisker-66465f8f84-gfntv" Dec 12 18:21:57.534959 kubelet[4009]: I1212 18:21:57.534877 4009 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmxlz\" (UniqueName: \"kubernetes.io/projected/129d48cc-df60-49a9-8eb1-5bf2a56866a1-kube-api-access-xmxlz\") pod \"whisker-66465f8f84-gfntv\" (UID: \"129d48cc-df60-49a9-8eb1-5bf2a56866a1\") " pod="calico-system/whisker-66465f8f84-gfntv" Dec 12 18:21:57.534959 kubelet[4009]: I1212 18:21:57.534900 4009 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/129d48cc-df60-49a9-8eb1-5bf2a56866a1-whisker-ca-bundle\") pod \"whisker-66465f8f84-gfntv\" (UID: \"129d48cc-df60-49a9-8eb1-5bf2a56866a1\") " pod="calico-system/whisker-66465f8f84-gfntv" Dec 12 18:21:57.812280 containerd[2529]: time="2025-12-12T18:21:57.811954975Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-66465f8f84-gfntv,Uid:129d48cc-df60-49a9-8eb1-5bf2a56866a1,Namespace:calico-system,Attempt:0,}" Dec 12 18:21:57.948936 systemd-networkd[2161]: cali74cd9c1d7ae: Link UP Dec 12 18:21:57.949767 systemd-networkd[2161]: cali74cd9c1d7ae: Gained carrier Dec 12 18:21:57.965064 containerd[2529]: 2025-12-12 18:21:57.858 [INFO][5126] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 12 
18:21:57.965064 containerd[2529]: 2025-12-12 18:21:57.868 [INFO][5126] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4515.1.0--a--53d1559fda-k8s-whisker--66465f8f84--gfntv-eth0 whisker-66465f8f84- calico-system 129d48cc-df60-49a9-8eb1-5bf2a56866a1 888 0 2025-12-12 18:21:57 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:66465f8f84 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4515.1.0-a-53d1559fda whisker-66465f8f84-gfntv eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali74cd9c1d7ae [] [] }} ContainerID="9c3c00c4b8934ad051bab29dbce080a16d611199b8f08bbc221612ab69a70d03" Namespace="calico-system" Pod="whisker-66465f8f84-gfntv" WorkloadEndpoint="ci--4515.1.0--a--53d1559fda-k8s-whisker--66465f8f84--gfntv-" Dec 12 18:21:57.965064 containerd[2529]: 2025-12-12 18:21:57.868 [INFO][5126] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9c3c00c4b8934ad051bab29dbce080a16d611199b8f08bbc221612ab69a70d03" Namespace="calico-system" Pod="whisker-66465f8f84-gfntv" WorkloadEndpoint="ci--4515.1.0--a--53d1559fda-k8s-whisker--66465f8f84--gfntv-eth0" Dec 12 18:21:57.965064 containerd[2529]: 2025-12-12 18:21:57.901 [INFO][5147] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9c3c00c4b8934ad051bab29dbce080a16d611199b8f08bbc221612ab69a70d03" HandleID="k8s-pod-network.9c3c00c4b8934ad051bab29dbce080a16d611199b8f08bbc221612ab69a70d03" Workload="ci--4515.1.0--a--53d1559fda-k8s-whisker--66465f8f84--gfntv-eth0" Dec 12 18:21:57.965752 containerd[2529]: 2025-12-12 18:21:57.901 [INFO][5147] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="9c3c00c4b8934ad051bab29dbce080a16d611199b8f08bbc221612ab69a70d03" HandleID="k8s-pod-network.9c3c00c4b8934ad051bab29dbce080a16d611199b8f08bbc221612ab69a70d03" Workload="ci--4515.1.0--a--53d1559fda-k8s-whisker--66465f8f84--gfntv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cefe0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4515.1.0-a-53d1559fda", "pod":"whisker-66465f8f84-gfntv", "timestamp":"2025-12-12 18:21:57.901199583 +0000 UTC"}, Hostname:"ci-4515.1.0-a-53d1559fda", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 18:21:57.965752 containerd[2529]: 2025-12-12 18:21:57.901 [INFO][5147] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 18:21:57.965752 containerd[2529]: 2025-12-12 18:21:57.901 [INFO][5147] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
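As an aside on the pod_startup_latency_tracker record for calico-node-vnsfd above: the reported podStartSLOduration is consistent with podStartE2EDuration minus the image-pull window, taken from the monotonic m=+ offsets in the same record. Illustrative arithmetic only, not kubelet's implementation:

# monotonic offsets (m=+...) copied from the kubelet record above
first_started_pulling = 17.784445586
last_finished_pulling = 35.951359226
e2e = 19.614822404                                    # podStartE2EDuration
pull_window = last_finished_pulling - first_started_pulling
print(e2e - pull_window)   # ~1.447908764, matching podStartSLOduration=1.4479087640000001
                           # (the trailing digits are the same float rounding seen in the log)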
Dec 12 18:21:57.965752 containerd[2529]: 2025-12-12 18:21:57.901 [INFO][5147] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4515.1.0-a-53d1559fda' Dec 12 18:21:57.965752 containerd[2529]: 2025-12-12 18:21:57.908 [INFO][5147] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9c3c00c4b8934ad051bab29dbce080a16d611199b8f08bbc221612ab69a70d03" host="ci-4515.1.0-a-53d1559fda" Dec 12 18:21:57.965752 containerd[2529]: 2025-12-12 18:21:57.912 [INFO][5147] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4515.1.0-a-53d1559fda" Dec 12 18:21:57.965752 containerd[2529]: 2025-12-12 18:21:57.916 [INFO][5147] ipam/ipam.go 511: Trying affinity for 192.168.51.0/26 host="ci-4515.1.0-a-53d1559fda" Dec 12 18:21:57.965752 containerd[2529]: 2025-12-12 18:21:57.917 [INFO][5147] ipam/ipam.go 158: Attempting to load block cidr=192.168.51.0/26 host="ci-4515.1.0-a-53d1559fda" Dec 12 18:21:57.965752 containerd[2529]: 2025-12-12 18:21:57.919 [INFO][5147] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.51.0/26 host="ci-4515.1.0-a-53d1559fda" Dec 12 18:21:57.966429 containerd[2529]: 2025-12-12 18:21:57.919 [INFO][5147] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.51.0/26 handle="k8s-pod-network.9c3c00c4b8934ad051bab29dbce080a16d611199b8f08bbc221612ab69a70d03" host="ci-4515.1.0-a-53d1559fda" Dec 12 18:21:57.966429 containerd[2529]: 2025-12-12 18:21:57.921 [INFO][5147] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.9c3c00c4b8934ad051bab29dbce080a16d611199b8f08bbc221612ab69a70d03 Dec 12 18:21:57.966429 containerd[2529]: 2025-12-12 18:21:57.925 [INFO][5147] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.51.0/26 handle="k8s-pod-network.9c3c00c4b8934ad051bab29dbce080a16d611199b8f08bbc221612ab69a70d03" host="ci-4515.1.0-a-53d1559fda" Dec 12 18:21:57.966429 containerd[2529]: 2025-12-12 18:21:57.933 [INFO][5147] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.51.1/26] block=192.168.51.0/26 handle="k8s-pod-network.9c3c00c4b8934ad051bab29dbce080a16d611199b8f08bbc221612ab69a70d03" host="ci-4515.1.0-a-53d1559fda" Dec 12 18:21:57.966429 containerd[2529]: 2025-12-12 18:21:57.933 [INFO][5147] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.51.1/26] handle="k8s-pod-network.9c3c00c4b8934ad051bab29dbce080a16d611199b8f08bbc221612ab69a70d03" host="ci-4515.1.0-a-53d1559fda" Dec 12 18:21:57.966429 containerd[2529]: 2025-12-12 18:21:57.933 [INFO][5147] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
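The IPAM records above allocate from the host-affine block 192.168.51.0/26 and claim 192.168.51.1 for the pod. A quick sanity check of the block arithmetic with Python's ipaddress module (illustration only, not Calico's allocator):

import ipaddress

block = ipaddress.ip_network("192.168.51.0/26")
pod_ip = ipaddress.ip_address("192.168.51.1")
print(block.num_addresses)   # 64 -> a /26 block holds 64 addresses
print(pod_ip in block)       # True -> the claimed IP lies inside the affine block
print(block[1])              # 192.168.51.1 -> first address after the network address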
Dec 12 18:21:57.966429 containerd[2529]: 2025-12-12 18:21:57.933 [INFO][5147] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.51.1/26] IPv6=[] ContainerID="9c3c00c4b8934ad051bab29dbce080a16d611199b8f08bbc221612ab69a70d03" HandleID="k8s-pod-network.9c3c00c4b8934ad051bab29dbce080a16d611199b8f08bbc221612ab69a70d03" Workload="ci--4515.1.0--a--53d1559fda-k8s-whisker--66465f8f84--gfntv-eth0" Dec 12 18:21:57.967199 containerd[2529]: 2025-12-12 18:21:57.936 [INFO][5126] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9c3c00c4b8934ad051bab29dbce080a16d611199b8f08bbc221612ab69a70d03" Namespace="calico-system" Pod="whisker-66465f8f84-gfntv" WorkloadEndpoint="ci--4515.1.0--a--53d1559fda-k8s-whisker--66465f8f84--gfntv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--a--53d1559fda-k8s-whisker--66465f8f84--gfntv-eth0", GenerateName:"whisker-66465f8f84-", Namespace:"calico-system", SelfLink:"", UID:"129d48cc-df60-49a9-8eb1-5bf2a56866a1", ResourceVersion:"888", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 21, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"66465f8f84", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-a-53d1559fda", ContainerID:"", Pod:"whisker-66465f8f84-gfntv", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.51.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali74cd9c1d7ae", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:21:57.967199 containerd[2529]: 2025-12-12 18:21:57.937 [INFO][5126] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.51.1/32] ContainerID="9c3c00c4b8934ad051bab29dbce080a16d611199b8f08bbc221612ab69a70d03" Namespace="calico-system" Pod="whisker-66465f8f84-gfntv" WorkloadEndpoint="ci--4515.1.0--a--53d1559fda-k8s-whisker--66465f8f84--gfntv-eth0" Dec 12 18:21:57.967856 containerd[2529]: 2025-12-12 18:21:57.937 [INFO][5126] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali74cd9c1d7ae ContainerID="9c3c00c4b8934ad051bab29dbce080a16d611199b8f08bbc221612ab69a70d03" Namespace="calico-system" Pod="whisker-66465f8f84-gfntv" WorkloadEndpoint="ci--4515.1.0--a--53d1559fda-k8s-whisker--66465f8f84--gfntv-eth0" Dec 12 18:21:57.967856 containerd[2529]: 2025-12-12 18:21:57.946 [INFO][5126] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9c3c00c4b8934ad051bab29dbce080a16d611199b8f08bbc221612ab69a70d03" Namespace="calico-system" Pod="whisker-66465f8f84-gfntv" WorkloadEndpoint="ci--4515.1.0--a--53d1559fda-k8s-whisker--66465f8f84--gfntv-eth0" Dec 12 18:21:57.967925 containerd[2529]: 2025-12-12 18:21:57.947 [INFO][5126] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9c3c00c4b8934ad051bab29dbce080a16d611199b8f08bbc221612ab69a70d03" Namespace="calico-system" 
Pod="whisker-66465f8f84-gfntv" WorkloadEndpoint="ci--4515.1.0--a--53d1559fda-k8s-whisker--66465f8f84--gfntv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--a--53d1559fda-k8s-whisker--66465f8f84--gfntv-eth0", GenerateName:"whisker-66465f8f84-", Namespace:"calico-system", SelfLink:"", UID:"129d48cc-df60-49a9-8eb1-5bf2a56866a1", ResourceVersion:"888", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 21, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"66465f8f84", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-a-53d1559fda", ContainerID:"9c3c00c4b8934ad051bab29dbce080a16d611199b8f08bbc221612ab69a70d03", Pod:"whisker-66465f8f84-gfntv", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.51.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali74cd9c1d7ae", MAC:"7e:8e:0b:fd:ca:22", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:21:57.968001 containerd[2529]: 2025-12-12 18:21:57.963 [INFO][5126] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9c3c00c4b8934ad051bab29dbce080a16d611199b8f08bbc221612ab69a70d03" Namespace="calico-system" Pod="whisker-66465f8f84-gfntv" WorkloadEndpoint="ci--4515.1.0--a--53d1559fda-k8s-whisker--66465f8f84--gfntv-eth0" Dec 12 18:21:58.007837 containerd[2529]: time="2025-12-12T18:21:58.007759991Z" level=info msg="connecting to shim 9c3c00c4b8934ad051bab29dbce080a16d611199b8f08bbc221612ab69a70d03" address="unix:///run/containerd/s/72142808d7e71b024b8a9b99ae26d621ee87af03fc856ede837efd68bb5af5f9" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:21:58.038880 systemd[1]: Started cri-containerd-9c3c00c4b8934ad051bab29dbce080a16d611199b8f08bbc221612ab69a70d03.scope - libcontainer container 9c3c00c4b8934ad051bab29dbce080a16d611199b8f08bbc221612ab69a70d03. 
Dec 12 18:21:58.070000 audit: BPF prog-id=199 op=LOAD Dec 12 18:21:58.072000 audit: BPF prog-id=200 op=LOAD Dec 12 18:21:58.072000 audit[5199]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=5186 pid=5199 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:58.072000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3963336330306334623839333461643035316261623239646263653038 Dec 12 18:21:58.072000 audit: BPF prog-id=200 op=UNLOAD Dec 12 18:21:58.072000 audit[5199]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5186 pid=5199 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:58.072000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3963336330306334623839333461643035316261623239646263653038 Dec 12 18:21:58.072000 audit: BPF prog-id=201 op=LOAD Dec 12 18:21:58.072000 audit[5199]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=5186 pid=5199 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:58.072000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3963336330306334623839333461643035316261623239646263653038 Dec 12 18:21:58.072000 audit: BPF prog-id=202 op=LOAD Dec 12 18:21:58.072000 audit[5199]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=5186 pid=5199 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:58.072000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3963336330306334623839333461643035316261623239646263653038 Dec 12 18:21:58.072000 audit: BPF prog-id=202 op=UNLOAD Dec 12 18:21:58.072000 audit[5199]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5186 pid=5199 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:58.072000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3963336330306334623839333461643035316261623239646263653038 Dec 12 18:21:58.072000 audit: BPF prog-id=201 op=UNLOAD Dec 12 18:21:58.072000 audit[5199]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5186 pid=5199 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:58.072000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3963336330306334623839333461643035316261623239646263653038 Dec 12 18:21:58.072000 audit: BPF prog-id=203 op=LOAD Dec 12 18:21:58.072000 audit[5199]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=5186 pid=5199 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:58.072000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3963336330306334623839333461643035316261623239646263653038 Dec 12 18:21:58.143140 containerd[2529]: time="2025-12-12T18:21:58.143103210Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-66465f8f84-gfntv,Uid:129d48cc-df60-49a9-8eb1-5bf2a56866a1,Namespace:calico-system,Attempt:0,} returns sandbox id \"9c3c00c4b8934ad051bab29dbce080a16d611199b8f08bbc221612ab69a70d03\"" Dec 12 18:21:58.147914 containerd[2529]: time="2025-12-12T18:21:58.147706909Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 12 18:21:58.213000 audit: BPF prog-id=204 op=LOAD Dec 12 18:21:58.213000 audit[5240]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffec01ac6a0 a2=98 a3=1fffffffffffffff items=0 ppid=5096 pid=5240 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:58.213000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 12 18:21:58.214000 audit: BPF prog-id=204 op=UNLOAD Dec 12 18:21:58.214000 audit[5240]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffec01ac670 a3=0 items=0 ppid=5096 pid=5240 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:58.214000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 12 18:21:58.214000 audit: BPF prog-id=205 op=LOAD Dec 12 18:21:58.214000 audit[5240]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffec01ac580 a2=94 a3=3 items=0 ppid=5096 pid=5240 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:58.214000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 12 18:21:58.214000 audit: BPF prog-id=205 op=UNLOAD Dec 12 18:21:58.214000 audit[5240]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffec01ac580 a2=94 a3=3 items=0 ppid=5096 pid=5240 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:58.214000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 12 18:21:58.214000 audit: BPF prog-id=206 op=LOAD Dec 12 18:21:58.214000 audit[5240]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffec01ac5c0 a2=94 a3=7ffec01ac7a0 items=0 ppid=5096 pid=5240 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:58.214000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 12 18:21:58.214000 audit: BPF prog-id=206 op=UNLOAD Dec 12 18:21:58.214000 audit[5240]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffec01ac5c0 a2=94 a3=7ffec01ac7a0 items=0 ppid=5096 pid=5240 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:58.214000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 12 18:21:58.215000 audit: BPF prog-id=207 op=LOAD Dec 12 18:21:58.215000 audit[5243]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff9f08d610 a2=98 a3=3 items=0 ppid=5096 pid=5243 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:58.215000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 18:21:58.216000 audit: BPF prog-id=207 op=UNLOAD Dec 12 18:21:58.216000 audit[5243]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7fff9f08d5e0 a3=0 items=0 ppid=5096 pid=5243 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:58.216000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 18:21:58.216000 audit: BPF prog-id=208 op=LOAD Dec 12 18:21:58.216000 audit[5243]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff9f08d400 a2=94 a3=54428f items=0 ppid=5096 pid=5243 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:58.216000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 18:21:58.216000 audit: BPF prog-id=208 op=UNLOAD Dec 12 18:21:58.216000 audit[5243]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7fff9f08d400 a2=94 a3=54428f items=0 ppid=5096 pid=5243 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:58.216000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 18:21:58.216000 audit: BPF prog-id=209 op=LOAD Dec 12 18:21:58.216000 audit[5243]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff9f08d430 a2=94 a3=2 items=0 ppid=5096 pid=5243 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:58.216000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 18:21:58.216000 audit: BPF prog-id=209 op=UNLOAD Dec 12 18:21:58.216000 audit[5243]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7fff9f08d430 a2=0 a3=2 items=0 ppid=5096 pid=5243 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:58.216000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 18:21:58.313941 kubelet[4009]: I1212 18:21:58.313901 4009 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="35e2c86e-df5d-42ff-b822-6a19153c7bf9" path="/var/lib/kubelet/pods/35e2c86e-df5d-42ff-b822-6a19153c7bf9/volumes" Dec 12 18:21:58.347000 audit: BPF prog-id=210 op=LOAD Dec 12 18:21:58.347000 audit[5243]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff9f08d2f0 a2=94 a3=1 items=0 ppid=5096 pid=5243 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:58.347000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 18:21:58.347000 audit: BPF prog-id=210 op=UNLOAD Dec 12 18:21:58.347000 audit[5243]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7fff9f08d2f0 a2=94 a3=1 items=0 ppid=5096 pid=5243 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:58.347000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 18:21:58.357000 audit: BPF prog-id=211 op=LOAD Dec 12 18:21:58.357000 audit[5243]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fff9f08d2e0 a2=94 a3=4 items=0 ppid=5096 pid=5243 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:58.357000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 18:21:58.357000 audit: BPF prog-id=211 op=UNLOAD Dec 12 18:21:58.357000 audit[5243]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7fff9f08d2e0 a2=0 a3=4 
items=0 ppid=5096 pid=5243 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:58.357000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 18:21:58.357000 audit: BPF prog-id=212 op=LOAD Dec 12 18:21:58.357000 audit[5243]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fff9f08d140 a2=94 a3=5 items=0 ppid=5096 pid=5243 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:58.357000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 18:21:58.357000 audit: BPF prog-id=212 op=UNLOAD Dec 12 18:21:58.357000 audit[5243]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7fff9f08d140 a2=0 a3=5 items=0 ppid=5096 pid=5243 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:58.357000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 18:21:58.357000 audit: BPF prog-id=213 op=LOAD Dec 12 18:21:58.357000 audit[5243]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fff9f08d360 a2=94 a3=6 items=0 ppid=5096 pid=5243 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:58.357000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 18:21:58.357000 audit: BPF prog-id=213 op=UNLOAD Dec 12 18:21:58.357000 audit[5243]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7fff9f08d360 a2=0 a3=6 items=0 ppid=5096 pid=5243 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:58.357000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 18:21:58.358000 audit: BPF prog-id=214 op=LOAD Dec 12 18:21:58.358000 audit[5243]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fff9f08cb10 a2=94 a3=88 items=0 ppid=5096 pid=5243 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:58.358000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 18:21:58.358000 audit: BPF prog-id=215 op=LOAD Dec 12 18:21:58.358000 audit[5243]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7fff9f08c990 a2=94 a3=2 items=0 ppid=5096 pid=5243 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:58.358000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 18:21:58.358000 audit: BPF prog-id=215 op=UNLOAD Dec 12 18:21:58.358000 audit[5243]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7fff9f08c9c0 a2=0 a3=7fff9f08cac0 items=0 ppid=5096 pid=5243 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:58.358000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 18:21:58.358000 audit: BPF prog-id=214 op=UNLOAD Dec 12 18:21:58.358000 audit[5243]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=388c3d10 a2=0 a3=e76623b6e15cfcf9 items=0 ppid=5096 pid=5243 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:58.358000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 18:21:58.365000 audit: BPF prog-id=216 op=LOAD Dec 12 18:21:58.365000 audit[5259]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffde80bc2f0 a2=98 a3=1999999999999999 items=0 ppid=5096 pid=5259 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:58.365000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 12 18:21:58.365000 audit: BPF prog-id=216 op=UNLOAD Dec 12 18:21:58.365000 audit[5259]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffde80bc2c0 a3=0 items=0 ppid=5096 pid=5259 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:58.365000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 12 18:21:58.365000 audit: BPF prog-id=217 op=LOAD Dec 12 18:21:58.365000 audit[5259]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffde80bc1d0 a2=94 a3=ffff items=0 ppid=5096 pid=5259 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:58.365000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 12 18:21:58.365000 audit: BPF prog-id=217 op=UNLOAD Dec 12 18:21:58.365000 audit[5259]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffde80bc1d0 a2=94 a3=ffff items=0 ppid=5096 pid=5259 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:58.365000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 12 18:21:58.365000 audit: BPF prog-id=218 op=LOAD Dec 12 18:21:58.365000 audit[5259]: SYSCALL arch=c000003e syscall=321 success=yes 
exit=3 a0=5 a1=7ffde80bc210 a2=94 a3=7ffde80bc3f0 items=0 ppid=5096 pid=5259 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:58.365000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 12 18:21:58.365000 audit: BPF prog-id=218 op=UNLOAD Dec 12 18:21:58.365000 audit[5259]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffde80bc210 a2=94 a3=7ffde80bc3f0 items=0 ppid=5096 pid=5259 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:58.365000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 12 18:21:58.401656 containerd[2529]: time="2025-12-12T18:21:58.401616576Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:21:58.404478 containerd[2529]: time="2025-12-12T18:21:58.404390695Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 12 18:21:58.404478 containerd[2529]: time="2025-12-12T18:21:58.404424739Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 12 18:21:58.404658 kubelet[4009]: E1212 18:21:58.404623 4009 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 18:21:58.404703 kubelet[4009]: E1212 18:21:58.404673 4009 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 18:21:58.404837 kubelet[4009]: E1212 18:21:58.404806 4009 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:ceb61861a70e4d05aad200164b4c6cd2,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xmxlz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-66465f8f84-gfntv_calico-system(129d48cc-df60-49a9-8eb1-5bf2a56866a1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 12 18:21:58.406845 containerd[2529]: time="2025-12-12T18:21:58.406821138Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 12 18:21:58.562206 systemd-networkd[2161]: vxlan.calico: Link UP Dec 12 18:21:58.563883 systemd-networkd[2161]: vxlan.calico: Gained carrier Dec 12 18:21:58.584000 audit: BPF prog-id=219 op=LOAD Dec 12 18:21:58.584000 audit[5284]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd96f7f010 a2=98 a3=0 items=0 ppid=5096 pid=5284 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:58.584000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 18:21:58.584000 audit: BPF prog-id=219 op=UNLOAD Dec 12 18:21:58.584000 audit[5284]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffd96f7efe0 a3=0 items=0 ppid=5096 pid=5284 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:58.584000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 18:21:58.584000 audit: BPF prog-id=220 op=LOAD Dec 12 18:21:58.584000 audit[5284]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd96f7ee20 a2=94 a3=54428f items=0 
ppid=5096 pid=5284 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:58.584000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 18:21:58.584000 audit: BPF prog-id=220 op=UNLOAD Dec 12 18:21:58.584000 audit[5284]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffd96f7ee20 a2=94 a3=54428f items=0 ppid=5096 pid=5284 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:58.584000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 18:21:58.584000 audit: BPF prog-id=221 op=LOAD Dec 12 18:21:58.584000 audit[5284]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd96f7ee50 a2=94 a3=2 items=0 ppid=5096 pid=5284 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:58.584000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 18:21:58.584000 audit: BPF prog-id=221 op=UNLOAD Dec 12 18:21:58.584000 audit[5284]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffd96f7ee50 a2=0 a3=2 items=0 ppid=5096 pid=5284 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:58.584000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 18:21:58.584000 audit: BPF prog-id=222 op=LOAD Dec 12 18:21:58.584000 audit[5284]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffd96f7ec00 a2=94 a3=4 items=0 ppid=5096 pid=5284 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:58.584000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 18:21:58.584000 audit: BPF prog-id=222 op=UNLOAD Dec 12 18:21:58.584000 audit[5284]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffd96f7ec00 a2=94 a3=4 items=0 ppid=5096 pid=5284 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:58.584000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 18:21:58.584000 audit: BPF prog-id=223 op=LOAD Dec 12 18:21:58.584000 audit[5284]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffd96f7ed00 a2=94 a3=7ffd96f7ee80 items=0 ppid=5096 pid=5284 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:58.584000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 18:21:58.585000 audit: BPF prog-id=223 op=UNLOAD Dec 12 18:21:58.585000 audit[5284]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffd96f7ed00 a2=0 a3=7ffd96f7ee80 items=0 ppid=5096 pid=5284 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:58.585000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 18:21:58.585000 audit: BPF prog-id=224 op=LOAD Dec 12 18:21:58.585000 audit[5284]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffd96f7e430 a2=94 a3=2 items=0 ppid=5096 pid=5284 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:58.585000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 18:21:58.585000 audit: BPF prog-id=224 op=UNLOAD Dec 12 18:21:58.585000 audit[5284]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffd96f7e430 a2=0 a3=2 items=0 ppid=5096 pid=5284 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:58.585000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 18:21:58.585000 audit: BPF prog-id=225 op=LOAD Dec 12 18:21:58.585000 audit[5284]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffd96f7e530 a2=94 a3=30 items=0 ppid=5096 pid=5284 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:58.585000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 18:21:58.592000 audit: BPF prog-id=226 op=LOAD Dec 12 18:21:58.592000 audit[5288]: SYSCALL 
arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc4f314ab0 a2=98 a3=0 items=0 ppid=5096 pid=5288 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:58.592000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 18:21:58.592000 audit: BPF prog-id=226 op=UNLOAD Dec 12 18:21:58.592000 audit[5288]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffc4f314a80 a3=0 items=0 ppid=5096 pid=5288 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:58.592000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 18:21:58.592000 audit: BPF prog-id=227 op=LOAD Dec 12 18:21:58.592000 audit[5288]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffc4f3148a0 a2=94 a3=54428f items=0 ppid=5096 pid=5288 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:58.592000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 18:21:58.592000 audit: BPF prog-id=227 op=UNLOAD Dec 12 18:21:58.592000 audit[5288]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffc4f3148a0 a2=94 a3=54428f items=0 ppid=5096 pid=5288 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:58.592000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 18:21:58.592000 audit: BPF prog-id=228 op=LOAD Dec 12 18:21:58.592000 audit[5288]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffc4f3148d0 a2=94 a3=2 items=0 ppid=5096 pid=5288 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:58.592000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 18:21:58.592000 audit: BPF prog-id=228 op=UNLOAD Dec 12 18:21:58.592000 audit[5288]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffc4f3148d0 a2=0 a3=2 items=0 ppid=5096 pid=5288 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:58.592000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 18:21:58.666114 containerd[2529]: time="2025-12-12T18:21:58.665894043Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:21:58.668608 containerd[2529]: time="2025-12-12T18:21:58.668557321Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 12 18:21:58.668692 containerd[2529]: time="2025-12-12T18:21:58.668617121Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 12 18:21:58.670763 kubelet[4009]: E1212 18:21:58.670722 4009 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 18:21:58.670822 kubelet[4009]: E1212 18:21:58.670791 4009 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 18:21:58.671092 kubelet[4009]: E1212 18:21:58.671041 4009 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xmxlz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-66465f8f84-gfntv_calico-system(129d48cc-df60-49a9-8eb1-5bf2a56866a1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 12 18:21:58.672469 kubelet[4009]: E1212 18:21:58.672406 4009 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-66465f8f84-gfntv" podUID="129d48cc-df60-49a9-8eb1-5bf2a56866a1" Dec 12 18:21:58.726000 audit: BPF prog-id=229 op=LOAD Dec 12 18:21:58.726000 audit[5288]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffc4f314790 a2=94 a3=1 items=0 ppid=5096 pid=5288 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:58.726000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 18:21:58.726000 audit: BPF prog-id=229 
op=UNLOAD Dec 12 18:21:58.726000 audit[5288]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffc4f314790 a2=94 a3=1 items=0 ppid=5096 pid=5288 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:58.726000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 18:21:58.735000 audit: BPF prog-id=230 op=LOAD Dec 12 18:21:58.735000 audit[5288]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffc4f314780 a2=94 a3=4 items=0 ppid=5096 pid=5288 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:58.735000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 18:21:58.735000 audit: BPF prog-id=230 op=UNLOAD Dec 12 18:21:58.735000 audit[5288]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffc4f314780 a2=0 a3=4 items=0 ppid=5096 pid=5288 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:58.735000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 18:21:58.735000 audit: BPF prog-id=231 op=LOAD Dec 12 18:21:58.735000 audit[5288]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffc4f3145e0 a2=94 a3=5 items=0 ppid=5096 pid=5288 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:58.735000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 18:21:58.735000 audit: BPF prog-id=231 op=UNLOAD Dec 12 18:21:58.735000 audit[5288]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffc4f3145e0 a2=0 a3=5 items=0 ppid=5096 pid=5288 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:58.735000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 18:21:58.735000 audit: BPF prog-id=232 op=LOAD Dec 12 18:21:58.735000 audit[5288]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffc4f314800 a2=94 a3=6 items=0 ppid=5096 pid=5288 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:58.735000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 18:21:58.735000 audit: BPF prog-id=232 op=UNLOAD Dec 12 18:21:58.735000 audit[5288]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffc4f314800 a2=0 a3=6 items=0 ppid=5096 pid=5288 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:58.735000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 18:21:58.736000 audit: BPF prog-id=233 op=LOAD Dec 12 18:21:58.736000 audit[5288]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffc4f313fb0 a2=94 a3=88 items=0 ppid=5096 pid=5288 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:58.736000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 18:21:58.736000 audit: BPF prog-id=234 op=LOAD Dec 12 18:21:58.736000 audit[5288]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7ffc4f313e30 a2=94 a3=2 items=0 ppid=5096 pid=5288 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:58.736000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 18:21:58.736000 audit: BPF prog-id=234 op=UNLOAD Dec 12 18:21:58.736000 audit[5288]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7ffc4f313e60 a2=0 a3=7ffc4f313f60 items=0 ppid=5096 pid=5288 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:58.736000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 18:21:58.736000 audit: BPF prog-id=233 op=UNLOAD Dec 12 18:21:58.736000 audit[5288]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=3d9fdd10 a2=0 a3=9734a667b2260e57 items=0 ppid=5096 pid=5288 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:58.736000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 18:21:58.743000 audit: BPF prog-id=225 op=UNLOAD Dec 12 18:21:58.743000 audit[5096]: SYSCALL arch=c000003e syscall=263 success=yes exit=0 a0=ffffffffffffff9c a1=c0008d4480 a2=0 a3=0 items=0 ppid=5071 pid=5096 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="calico-node" exe="/usr/bin/calico-node" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:58.743000 audit: PROCTITLE proctitle=63616C69636F2D6E6F6465002D66656C6978 Dec 12 18:21:58.817000 audit[5317]: NETFILTER_CFG table=nat:122 family=2 entries=15 op=nft_register_chain pid=5317 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 12 18:21:58.819000 audit[5316]: NETFILTER_CFG table=mangle:123 family=2 entries=16 op=nft_register_chain pid=5316 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 12 18:21:58.817000 audit[5317]: SYSCALL arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7ffff5b115a0 a2=0 a3=7ffff5b1158c items=0 ppid=5096 pid=5317 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:58.817000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 12 18:21:58.819000 audit[5316]: SYSCALL arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7ffcb65dddf0 a2=0 a3=7ffcb65ddddc items=0 ppid=5096 pid=5316 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:58.819000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 12 18:21:58.846000 audit[5315]: NETFILTER_CFG table=raw:124 family=2 entries=21 op=nft_register_chain pid=5315 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 12 18:21:58.846000 audit[5315]: SYSCALL arch=c000003e syscall=46 success=yes exit=8452 a0=3 a1=7fff42974ae0 a2=0 a3=7fff42974acc items=0 ppid=5096 pid=5315 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:58.846000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 12 18:21:58.849000 audit[5320]: NETFILTER_CFG table=filter:125 family=2 entries=94 op=nft_register_chain pid=5320 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 12 18:21:58.849000 audit[5320]: SYSCALL arch=c000003e syscall=46 success=yes exit=53116 a0=3 a1=7ffcee6c77e0 a2=0 a3=7ffcee6c77cc items=0 ppid=5096 pid=5320 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:58.849000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 12 18:21:59.052017 systemd-networkd[2161]: cali74cd9c1d7ae: Gained IPv6LL Dec 12 18:21:59.443828 kubelet[4009]: E1212 18:21:59.443758 4009 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack 
image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-66465f8f84-gfntv" podUID="129d48cc-df60-49a9-8eb1-5bf2a56866a1" Dec 12 18:21:59.466000 audit[5329]: NETFILTER_CFG table=filter:126 family=2 entries=20 op=nft_register_rule pid=5329 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:21:59.466000 audit[5329]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7fff2db55270 a2=0 a3=7fff2db5525c items=0 ppid=4115 pid=5329 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:59.466000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:21:59.470000 audit[5329]: NETFILTER_CFG table=nat:127 family=2 entries=14 op=nft_register_rule pid=5329 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:21:59.470000 audit[5329]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7fff2db55270 a2=0 a3=0 items=0 ppid=4115 pid=5329 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:59.470000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:21:59.677439 kubelet[4009]: I1212 18:21:59.676972 4009 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 12 18:22:00.523687 systemd-networkd[2161]: vxlan.calico: Gained IPv6LL Dec 12 18:22:02.311626 containerd[2529]: time="2025-12-12T18:22:02.311355245Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74c7564cb4-4f4mj,Uid:c72f9f09-ad62-40d6-8632-b537f3032703,Namespace:calico-apiserver,Attempt:0,}" Dec 12 18:22:02.404014 systemd-networkd[2161]: calid7d8b23196e: Link UP Dec 12 18:22:02.404135 systemd-networkd[2161]: calid7d8b23196e: Gained carrier Dec 12 18:22:02.420837 containerd[2529]: 2025-12-12 18:22:02.353 [INFO][5383] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4515.1.0--a--53d1559fda-k8s-calico--apiserver--74c7564cb4--4f4mj-eth0 calico-apiserver-74c7564cb4- calico-apiserver c72f9f09-ad62-40d6-8632-b537f3032703 816 0 2025-12-12 18:21:33 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:74c7564cb4 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4515.1.0-a-53d1559fda calico-apiserver-74c7564cb4-4f4mj eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calid7d8b23196e [] [] }} ContainerID="6b4936f932e584d55864677ca3cc7cfb3d41ea262ff7bc18307f42a5f70b20c7" Namespace="calico-apiserver" Pod="calico-apiserver-74c7564cb4-4f4mj" 
WorkloadEndpoint="ci--4515.1.0--a--53d1559fda-k8s-calico--apiserver--74c7564cb4--4f4mj-" Dec 12 18:22:02.420837 containerd[2529]: 2025-12-12 18:22:02.353 [INFO][5383] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6b4936f932e584d55864677ca3cc7cfb3d41ea262ff7bc18307f42a5f70b20c7" Namespace="calico-apiserver" Pod="calico-apiserver-74c7564cb4-4f4mj" WorkloadEndpoint="ci--4515.1.0--a--53d1559fda-k8s-calico--apiserver--74c7564cb4--4f4mj-eth0" Dec 12 18:22:02.420837 containerd[2529]: 2025-12-12 18:22:02.373 [INFO][5395] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6b4936f932e584d55864677ca3cc7cfb3d41ea262ff7bc18307f42a5f70b20c7" HandleID="k8s-pod-network.6b4936f932e584d55864677ca3cc7cfb3d41ea262ff7bc18307f42a5f70b20c7" Workload="ci--4515.1.0--a--53d1559fda-k8s-calico--apiserver--74c7564cb4--4f4mj-eth0" Dec 12 18:22:02.421055 containerd[2529]: 2025-12-12 18:22:02.373 [INFO][5395] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="6b4936f932e584d55864677ca3cc7cfb3d41ea262ff7bc18307f42a5f70b20c7" HandleID="k8s-pod-network.6b4936f932e584d55864677ca3cc7cfb3d41ea262ff7bc18307f42a5f70b20c7" Workload="ci--4515.1.0--a--53d1559fda-k8s-calico--apiserver--74c7564cb4--4f4mj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cf090), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4515.1.0-a-53d1559fda", "pod":"calico-apiserver-74c7564cb4-4f4mj", "timestamp":"2025-12-12 18:22:02.373841733 +0000 UTC"}, Hostname:"ci-4515.1.0-a-53d1559fda", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 18:22:02.421055 containerd[2529]: 2025-12-12 18:22:02.373 [INFO][5395] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 18:22:02.421055 containerd[2529]: 2025-12-12 18:22:02.374 [INFO][5395] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 12 18:22:02.421055 containerd[2529]: 2025-12-12 18:22:02.374 [INFO][5395] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4515.1.0-a-53d1559fda' Dec 12 18:22:02.421055 containerd[2529]: 2025-12-12 18:22:02.378 [INFO][5395] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6b4936f932e584d55864677ca3cc7cfb3d41ea262ff7bc18307f42a5f70b20c7" host="ci-4515.1.0-a-53d1559fda" Dec 12 18:22:02.421055 containerd[2529]: 2025-12-12 18:22:02.381 [INFO][5395] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4515.1.0-a-53d1559fda" Dec 12 18:22:02.421055 containerd[2529]: 2025-12-12 18:22:02.384 [INFO][5395] ipam/ipam.go 511: Trying affinity for 192.168.51.0/26 host="ci-4515.1.0-a-53d1559fda" Dec 12 18:22:02.421055 containerd[2529]: 2025-12-12 18:22:02.386 [INFO][5395] ipam/ipam.go 158: Attempting to load block cidr=192.168.51.0/26 host="ci-4515.1.0-a-53d1559fda" Dec 12 18:22:02.421055 containerd[2529]: 2025-12-12 18:22:02.387 [INFO][5395] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.51.0/26 host="ci-4515.1.0-a-53d1559fda" Dec 12 18:22:02.421732 containerd[2529]: 2025-12-12 18:22:02.387 [INFO][5395] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.51.0/26 handle="k8s-pod-network.6b4936f932e584d55864677ca3cc7cfb3d41ea262ff7bc18307f42a5f70b20c7" host="ci-4515.1.0-a-53d1559fda" Dec 12 18:22:02.421732 containerd[2529]: 2025-12-12 18:22:02.388 [INFO][5395] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.6b4936f932e584d55864677ca3cc7cfb3d41ea262ff7bc18307f42a5f70b20c7 Dec 12 18:22:02.421732 containerd[2529]: 2025-12-12 18:22:02.392 [INFO][5395] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.51.0/26 handle="k8s-pod-network.6b4936f932e584d55864677ca3cc7cfb3d41ea262ff7bc18307f42a5f70b20c7" host="ci-4515.1.0-a-53d1559fda" Dec 12 18:22:02.421732 containerd[2529]: 2025-12-12 18:22:02.399 [INFO][5395] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.51.2/26] block=192.168.51.0/26 handle="k8s-pod-network.6b4936f932e584d55864677ca3cc7cfb3d41ea262ff7bc18307f42a5f70b20c7" host="ci-4515.1.0-a-53d1559fda" Dec 12 18:22:02.421732 containerd[2529]: 2025-12-12 18:22:02.399 [INFO][5395] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.51.2/26] handle="k8s-pod-network.6b4936f932e584d55864677ca3cc7cfb3d41ea262ff7bc18307f42a5f70b20c7" host="ci-4515.1.0-a-53d1559fda" Dec 12 18:22:02.421732 containerd[2529]: 2025-12-12 18:22:02.399 [INFO][5395] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
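The IPAM records just above show Calico confirming this node's affinity for the 192.168.51.0/26 block and then claiming 192.168.51.2 from it for the apiserver pod. As a quick check outside the log, a /26 holds 64 addresses and the claimed address does fall inside the affine block; Python's ipaddress module verifies both:

    import ipaddress

    block = ipaddress.ip_network("192.168.51.0/26")   # block with node affinity, per the IPAM records
    claimed = ipaddress.ip_address("192.168.51.2")    # address claimed for calico-apiserver-74c7564cb4-4f4mj

    print(block.num_addresses)        # 64 addresses in a /26
    print(claimed in block)           # True: the claimed IP belongs to the affine block
    print(block.broadcast_address)    # 192.168.51.63, the last address of the block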
Dec 12 18:22:02.421732 containerd[2529]: 2025-12-12 18:22:02.399 [INFO][5395] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.51.2/26] IPv6=[] ContainerID="6b4936f932e584d55864677ca3cc7cfb3d41ea262ff7bc18307f42a5f70b20c7" HandleID="k8s-pod-network.6b4936f932e584d55864677ca3cc7cfb3d41ea262ff7bc18307f42a5f70b20c7" Workload="ci--4515.1.0--a--53d1559fda-k8s-calico--apiserver--74c7564cb4--4f4mj-eth0" Dec 12 18:22:02.421971 containerd[2529]: 2025-12-12 18:22:02.401 [INFO][5383] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6b4936f932e584d55864677ca3cc7cfb3d41ea262ff7bc18307f42a5f70b20c7" Namespace="calico-apiserver" Pod="calico-apiserver-74c7564cb4-4f4mj" WorkloadEndpoint="ci--4515.1.0--a--53d1559fda-k8s-calico--apiserver--74c7564cb4--4f4mj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--a--53d1559fda-k8s-calico--apiserver--74c7564cb4--4f4mj-eth0", GenerateName:"calico-apiserver-74c7564cb4-", Namespace:"calico-apiserver", SelfLink:"", UID:"c72f9f09-ad62-40d6-8632-b537f3032703", ResourceVersion:"816", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 21, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"74c7564cb4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-a-53d1559fda", ContainerID:"", Pod:"calico-apiserver-74c7564cb4-4f4mj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.51.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid7d8b23196e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:22:02.422059 containerd[2529]: 2025-12-12 18:22:02.401 [INFO][5383] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.51.2/32] ContainerID="6b4936f932e584d55864677ca3cc7cfb3d41ea262ff7bc18307f42a5f70b20c7" Namespace="calico-apiserver" Pod="calico-apiserver-74c7564cb4-4f4mj" WorkloadEndpoint="ci--4515.1.0--a--53d1559fda-k8s-calico--apiserver--74c7564cb4--4f4mj-eth0" Dec 12 18:22:02.422059 containerd[2529]: 2025-12-12 18:22:02.401 [INFO][5383] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid7d8b23196e ContainerID="6b4936f932e584d55864677ca3cc7cfb3d41ea262ff7bc18307f42a5f70b20c7" Namespace="calico-apiserver" Pod="calico-apiserver-74c7564cb4-4f4mj" WorkloadEndpoint="ci--4515.1.0--a--53d1559fda-k8s-calico--apiserver--74c7564cb4--4f4mj-eth0" Dec 12 18:22:02.422059 containerd[2529]: 2025-12-12 18:22:02.403 [INFO][5383] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6b4936f932e584d55864677ca3cc7cfb3d41ea262ff7bc18307f42a5f70b20c7" Namespace="calico-apiserver" Pod="calico-apiserver-74c7564cb4-4f4mj" WorkloadEndpoint="ci--4515.1.0--a--53d1559fda-k8s-calico--apiserver--74c7564cb4--4f4mj-eth0" Dec 12 18:22:02.422147 containerd[2529]: 2025-12-12 18:22:02.405 
[INFO][5383] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6b4936f932e584d55864677ca3cc7cfb3d41ea262ff7bc18307f42a5f70b20c7" Namespace="calico-apiserver" Pod="calico-apiserver-74c7564cb4-4f4mj" WorkloadEndpoint="ci--4515.1.0--a--53d1559fda-k8s-calico--apiserver--74c7564cb4--4f4mj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--a--53d1559fda-k8s-calico--apiserver--74c7564cb4--4f4mj-eth0", GenerateName:"calico-apiserver-74c7564cb4-", Namespace:"calico-apiserver", SelfLink:"", UID:"c72f9f09-ad62-40d6-8632-b537f3032703", ResourceVersion:"816", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 21, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"74c7564cb4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-a-53d1559fda", ContainerID:"6b4936f932e584d55864677ca3cc7cfb3d41ea262ff7bc18307f42a5f70b20c7", Pod:"calico-apiserver-74c7564cb4-4f4mj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.51.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid7d8b23196e", MAC:"6a:48:e5:95:a7:30", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:22:02.422220 containerd[2529]: 2025-12-12 18:22:02.418 [INFO][5383] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6b4936f932e584d55864677ca3cc7cfb3d41ea262ff7bc18307f42a5f70b20c7" Namespace="calico-apiserver" Pod="calico-apiserver-74c7564cb4-4f4mj" WorkloadEndpoint="ci--4515.1.0--a--53d1559fda-k8s-calico--apiserver--74c7564cb4--4f4mj-eth0" Dec 12 18:22:02.435000 audit[5411]: NETFILTER_CFG table=filter:128 family=2 entries=50 op=nft_register_chain pid=5411 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 12 18:22:02.436874 kernel: kauditd_printk_skb: 231 callbacks suppressed Dec 12 18:22:02.436923 kernel: audit: type=1325 audit(1765563722.435:682): table=filter:128 family=2 entries=50 op=nft_register_chain pid=5411 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 12 18:22:02.446555 kernel: audit: type=1300 audit(1765563722.435:682): arch=c000003e syscall=46 success=yes exit=28208 a0=3 a1=7ffc250bbe10 a2=0 a3=7ffc250bbdfc items=0 ppid=5096 pid=5411 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:22:02.435000 audit[5411]: SYSCALL arch=c000003e syscall=46 success=yes exit=28208 a0=3 a1=7ffc250bbe10 a2=0 a3=7ffc250bbdfc items=0 ppid=5096 pid=5411 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 
18:22:02.450562 kernel: audit: type=1327 audit(1765563722.435:682): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 12 18:22:02.435000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 12 18:22:02.473250 containerd[2529]: time="2025-12-12T18:22:02.473211151Z" level=info msg="connecting to shim 6b4936f932e584d55864677ca3cc7cfb3d41ea262ff7bc18307f42a5f70b20c7" address="unix:///run/containerd/s/9a7f4cb8e6ade802eb66db04fe8a4fc29482d957659314e126c91f952ccaa5c2" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:22:02.494706 systemd[1]: Started cri-containerd-6b4936f932e584d55864677ca3cc7cfb3d41ea262ff7bc18307f42a5f70b20c7.scope - libcontainer container 6b4936f932e584d55864677ca3cc7cfb3d41ea262ff7bc18307f42a5f70b20c7. Dec 12 18:22:02.503000 audit: BPF prog-id=235 op=LOAD Dec 12 18:22:02.506544 kernel: audit: type=1334 audit(1765563722.503:683): prog-id=235 op=LOAD Dec 12 18:22:02.503000 audit: BPF prog-id=236 op=LOAD Dec 12 18:22:02.509657 kernel: audit: type=1334 audit(1765563722.503:684): prog-id=236 op=LOAD Dec 12 18:22:02.509721 kernel: audit: type=1300 audit(1765563722.503:684): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=5420 pid=5432 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:22:02.503000 audit[5432]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=5420 pid=5432 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:22:02.503000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3662343933366639333265353834643535383634363737636133636337 Dec 12 18:22:02.521381 kernel: audit: type=1327 audit(1765563722.503:684): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3662343933366639333265353834643535383634363737636133636337 Dec 12 18:22:02.503000 audit: BPF prog-id=236 op=UNLOAD Dec 12 18:22:02.524821 kernel: audit: type=1334 audit(1765563722.503:685): prog-id=236 op=UNLOAD Dec 12 18:22:02.503000 audit[5432]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5420 pid=5432 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:22:02.531163 kernel: audit: type=1300 audit(1765563722.503:685): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5420 pid=5432 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:22:02.503000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3662343933366639333265353834643535383634363737636133636337 Dec 12 18:22:02.537093 kernel: audit: type=1327 audit(1765563722.503:685): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3662343933366639333265353834643535383634363737636133636337 Dec 12 18:22:02.503000 audit: BPF prog-id=237 op=LOAD Dec 12 18:22:02.503000 audit[5432]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=5420 pid=5432 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:22:02.503000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3662343933366639333265353834643535383634363737636133636337 Dec 12 18:22:02.503000 audit: BPF prog-id=238 op=LOAD Dec 12 18:22:02.503000 audit[5432]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=5420 pid=5432 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:22:02.503000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3662343933366639333265353834643535383634363737636133636337 Dec 12 18:22:02.503000 audit: BPF prog-id=238 op=UNLOAD Dec 12 18:22:02.503000 audit[5432]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5420 pid=5432 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:22:02.503000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3662343933366639333265353834643535383634363737636133636337 Dec 12 18:22:02.503000 audit: BPF prog-id=237 op=UNLOAD Dec 12 18:22:02.503000 audit[5432]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5420 pid=5432 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:22:02.503000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3662343933366639333265353834643535383634363737636133636337 Dec 12 18:22:02.503000 audit: BPF prog-id=239 op=LOAD Dec 12 18:22:02.503000 audit[5432]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=5420 pid=5432 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:22:02.503000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3662343933366639333265353834643535383634363737636133636337 Dec 12 18:22:02.551244 containerd[2529]: time="2025-12-12T18:22:02.551218015Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74c7564cb4-4f4mj,Uid:c72f9f09-ad62-40d6-8632-b537f3032703,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"6b4936f932e584d55864677ca3cc7cfb3d41ea262ff7bc18307f42a5f70b20c7\"" Dec 12 18:22:02.552982 containerd[2529]: time="2025-12-12T18:22:02.552915649Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 18:22:02.807762 containerd[2529]: time="2025-12-12T18:22:02.807716611Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:22:02.810560 containerd[2529]: time="2025-12-12T18:22:02.810511212Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 18:22:02.810624 containerd[2529]: time="2025-12-12T18:22:02.810611451Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 12 18:22:02.810821 kubelet[4009]: E1212 18:22:02.810789 4009 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:22:02.811235 kubelet[4009]: E1212 18:22:02.810836 4009 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:22:02.811235 kubelet[4009]: E1212 18:22:02.810990 4009 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m79zj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-74c7564cb4-4f4mj_calico-apiserver(c72f9f09-ad62-40d6-8632-b537f3032703): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 18:22:02.812190 kubelet[4009]: E1212 18:22:02.812133 4009 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74c7564cb4-4f4mj" podUID="c72f9f09-ad62-40d6-8632-b537f3032703" Dec 12 18:22:03.311497 containerd[2529]: time="2025-12-12T18:22:03.311444081Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-hkwbv,Uid:3cbfe13e-d642-4617-a3ea-e4339732c6c2,Namespace:kube-system,Attempt:0,}" Dec 12 18:22:03.311720 containerd[2529]: time="2025-12-12T18:22:03.311444059Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74c7564cb4-hnnls,Uid:588d2c87-beaf-4e83-ba6f-8e3f0d453589,Namespace:calico-apiserver,Attempt:0,}" Dec 12 18:22:03.437094 systemd-networkd[2161]: cali8034178b18e: Link UP Dec 12 18:22:03.437503 systemd-networkd[2161]: cali8034178b18e: Gained carrier Dec 12 18:22:03.448473 kubelet[4009]: E1212 18:22:03.448436 4009 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74c7564cb4-4f4mj" podUID="c72f9f09-ad62-40d6-8632-b537f3032703" Dec 12 18:22:03.455251 containerd[2529]: 2025-12-12 18:22:03.373 [INFO][5462] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4515.1.0--a--53d1559fda-k8s-calico--apiserver--74c7564cb4--hnnls-eth0 calico-apiserver-74c7564cb4- calico-apiserver 588d2c87-beaf-4e83-ba6f-8e3f0d453589 821 0 2025-12-12 18:21:33 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:74c7564cb4 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4515.1.0-a-53d1559fda calico-apiserver-74c7564cb4-hnnls eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali8034178b18e [] [] }} ContainerID="af2b3780ea009b27bae6096f27444e04f15297300c737fc432bfdd140e5c8ffa" Namespace="calico-apiserver" Pod="calico-apiserver-74c7564cb4-hnnls" WorkloadEndpoint="ci--4515.1.0--a--53d1559fda-k8s-calico--apiserver--74c7564cb4--hnnls-" Dec 12 18:22:03.455251 containerd[2529]: 2025-12-12 18:22:03.374 [INFO][5462] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="af2b3780ea009b27bae6096f27444e04f15297300c737fc432bfdd140e5c8ffa" Namespace="calico-apiserver" Pod="calico-apiserver-74c7564cb4-hnnls" WorkloadEndpoint="ci--4515.1.0--a--53d1559fda-k8s-calico--apiserver--74c7564cb4--hnnls-eth0" Dec 12 18:22:03.455251 containerd[2529]: 2025-12-12 18:22:03.401 [INFO][5488] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="af2b3780ea009b27bae6096f27444e04f15297300c737fc432bfdd140e5c8ffa" HandleID="k8s-pod-network.af2b3780ea009b27bae6096f27444e04f15297300c737fc432bfdd140e5c8ffa" Workload="ci--4515.1.0--a--53d1559fda-k8s-calico--apiserver--74c7564cb4--hnnls-eth0" Dec 12 18:22:03.455441 containerd[2529]: 2025-12-12 18:22:03.401 [INFO][5488] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="af2b3780ea009b27bae6096f27444e04f15297300c737fc432bfdd140e5c8ffa" HandleID="k8s-pod-network.af2b3780ea009b27bae6096f27444e04f15297300c737fc432bfdd140e5c8ffa" Workload="ci--4515.1.0--a--53d1559fda-k8s-calico--apiserver--74c7564cb4--hnnls-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00025b820), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4515.1.0-a-53d1559fda", "pod":"calico-apiserver-74c7564cb4-hnnls", "timestamp":"2025-12-12 18:22:03.40178136 +0000 UTC"}, Hostname:"ci-4515.1.0-a-53d1559fda", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 18:22:03.455441 containerd[2529]: 2025-12-12 18:22:03.402 [INFO][5488] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 18:22:03.455441 containerd[2529]: 2025-12-12 18:22:03.402 [INFO][5488] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 12 18:22:03.455441 containerd[2529]: 2025-12-12 18:22:03.402 [INFO][5488] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4515.1.0-a-53d1559fda' Dec 12 18:22:03.455441 containerd[2529]: 2025-12-12 18:22:03.407 [INFO][5488] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.af2b3780ea009b27bae6096f27444e04f15297300c737fc432bfdd140e5c8ffa" host="ci-4515.1.0-a-53d1559fda" Dec 12 18:22:03.455441 containerd[2529]: 2025-12-12 18:22:03.410 [INFO][5488] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4515.1.0-a-53d1559fda" Dec 12 18:22:03.455441 containerd[2529]: 2025-12-12 18:22:03.412 [INFO][5488] ipam/ipam.go 511: Trying affinity for 192.168.51.0/26 host="ci-4515.1.0-a-53d1559fda" Dec 12 18:22:03.455441 containerd[2529]: 2025-12-12 18:22:03.414 [INFO][5488] ipam/ipam.go 158: Attempting to load block cidr=192.168.51.0/26 host="ci-4515.1.0-a-53d1559fda" Dec 12 18:22:03.455441 containerd[2529]: 2025-12-12 18:22:03.415 [INFO][5488] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.51.0/26 host="ci-4515.1.0-a-53d1559fda" Dec 12 18:22:03.455709 containerd[2529]: 2025-12-12 18:22:03.415 [INFO][5488] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.51.0/26 handle="k8s-pod-network.af2b3780ea009b27bae6096f27444e04f15297300c737fc432bfdd140e5c8ffa" host="ci-4515.1.0-a-53d1559fda" Dec 12 18:22:03.455709 containerd[2529]: 2025-12-12 18:22:03.416 [INFO][5488] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.af2b3780ea009b27bae6096f27444e04f15297300c737fc432bfdd140e5c8ffa Dec 12 18:22:03.455709 containerd[2529]: 2025-12-12 18:22:03.420 [INFO][5488] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.51.0/26 handle="k8s-pod-network.af2b3780ea009b27bae6096f27444e04f15297300c737fc432bfdd140e5c8ffa" host="ci-4515.1.0-a-53d1559fda" Dec 12 18:22:03.455709 containerd[2529]: 2025-12-12 18:22:03.429 [INFO][5488] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.51.3/26] block=192.168.51.0/26 handle="k8s-pod-network.af2b3780ea009b27bae6096f27444e04f15297300c737fc432bfdd140e5c8ffa" host="ci-4515.1.0-a-53d1559fda" Dec 12 18:22:03.455709 containerd[2529]: 2025-12-12 18:22:03.429 [INFO][5488] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.51.3/26] handle="k8s-pod-network.af2b3780ea009b27bae6096f27444e04f15297300c737fc432bfdd140e5c8ffa" host="ci-4515.1.0-a-53d1559fda" Dec 12 18:22:03.455709 containerd[2529]: 2025-12-12 18:22:03.429 [INFO][5488] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
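Much of the audit stream in this window is short-lived BPF programs: bpftool and runc load a probe program, receive a prog-id, and unload it a few records later (prog-id 226 through 244 above). A small sketch, not tied to any particular tool, for pairing those "audit: BPF prog-id=N op=LOAD/UNLOAD" records from saved journal text and flagging any prog-id that was loaded but never unloaded; the sample string below is synthetic, in the same format as the records above:

    import re

    # Matches audit records like "audit: BPF prog-id=229 op=LOAD" / "op=UNLOAD".
    BPF_RE = re.compile(r"BPF prog-id=(\d+) op=(LOAD|UNLOAD)")

    def unmatched_loads(journal_text: str) -> set[str]:
        """Return prog-ids that appear with op=LOAD but have no later op=UNLOAD."""
        loaded: set[str] = set()
        for prog_id, op in BPF_RE.findall(journal_text):
            if op == "LOAD":
                loaded.add(prog_id)
            else:
                loaded.discard(prog_id)
        return loaded

    # Synthetic sample: prog-id 1 is unloaded again, prog-id 2 is not.
    sample = (
        "audit: BPF prog-id=1 op=LOAD ... audit: BPF prog-id=1 op=UNLOAD "
        "audit: BPF prog-id=2 op=LOAD"
    )
    print(unmatched_loads(sample))   # {'2'}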
Dec 12 18:22:03.455709 containerd[2529]: 2025-12-12 18:22:03.429 [INFO][5488] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.51.3/26] IPv6=[] ContainerID="af2b3780ea009b27bae6096f27444e04f15297300c737fc432bfdd140e5c8ffa" HandleID="k8s-pod-network.af2b3780ea009b27bae6096f27444e04f15297300c737fc432bfdd140e5c8ffa" Workload="ci--4515.1.0--a--53d1559fda-k8s-calico--apiserver--74c7564cb4--hnnls-eth0" Dec 12 18:22:03.455868 containerd[2529]: 2025-12-12 18:22:03.432 [INFO][5462] cni-plugin/k8s.go 418: Populated endpoint ContainerID="af2b3780ea009b27bae6096f27444e04f15297300c737fc432bfdd140e5c8ffa" Namespace="calico-apiserver" Pod="calico-apiserver-74c7564cb4-hnnls" WorkloadEndpoint="ci--4515.1.0--a--53d1559fda-k8s-calico--apiserver--74c7564cb4--hnnls-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--a--53d1559fda-k8s-calico--apiserver--74c7564cb4--hnnls-eth0", GenerateName:"calico-apiserver-74c7564cb4-", Namespace:"calico-apiserver", SelfLink:"", UID:"588d2c87-beaf-4e83-ba6f-8e3f0d453589", ResourceVersion:"821", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 21, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"74c7564cb4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-a-53d1559fda", ContainerID:"", Pod:"calico-apiserver-74c7564cb4-hnnls", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.51.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8034178b18e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:22:03.455933 containerd[2529]: 2025-12-12 18:22:03.432 [INFO][5462] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.51.3/32] ContainerID="af2b3780ea009b27bae6096f27444e04f15297300c737fc432bfdd140e5c8ffa" Namespace="calico-apiserver" Pod="calico-apiserver-74c7564cb4-hnnls" WorkloadEndpoint="ci--4515.1.0--a--53d1559fda-k8s-calico--apiserver--74c7564cb4--hnnls-eth0" Dec 12 18:22:03.455933 containerd[2529]: 2025-12-12 18:22:03.432 [INFO][5462] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8034178b18e ContainerID="af2b3780ea009b27bae6096f27444e04f15297300c737fc432bfdd140e5c8ffa" Namespace="calico-apiserver" Pod="calico-apiserver-74c7564cb4-hnnls" WorkloadEndpoint="ci--4515.1.0--a--53d1559fda-k8s-calico--apiserver--74c7564cb4--hnnls-eth0" Dec 12 18:22:03.455933 containerd[2529]: 2025-12-12 18:22:03.435 [INFO][5462] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="af2b3780ea009b27bae6096f27444e04f15297300c737fc432bfdd140e5c8ffa" Namespace="calico-apiserver" Pod="calico-apiserver-74c7564cb4-hnnls" WorkloadEndpoint="ci--4515.1.0--a--53d1559fda-k8s-calico--apiserver--74c7564cb4--hnnls-eth0" Dec 12 18:22:03.456004 containerd[2529]: 2025-12-12 18:22:03.436 
[INFO][5462] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="af2b3780ea009b27bae6096f27444e04f15297300c737fc432bfdd140e5c8ffa" Namespace="calico-apiserver" Pod="calico-apiserver-74c7564cb4-hnnls" WorkloadEndpoint="ci--4515.1.0--a--53d1559fda-k8s-calico--apiserver--74c7564cb4--hnnls-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--a--53d1559fda-k8s-calico--apiserver--74c7564cb4--hnnls-eth0", GenerateName:"calico-apiserver-74c7564cb4-", Namespace:"calico-apiserver", SelfLink:"", UID:"588d2c87-beaf-4e83-ba6f-8e3f0d453589", ResourceVersion:"821", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 21, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"74c7564cb4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-a-53d1559fda", ContainerID:"af2b3780ea009b27bae6096f27444e04f15297300c737fc432bfdd140e5c8ffa", Pod:"calico-apiserver-74c7564cb4-hnnls", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.51.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8034178b18e", MAC:"9e:01:c9:32:6c:45", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:22:03.456064 containerd[2529]: 2025-12-12 18:22:03.452 [INFO][5462] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="af2b3780ea009b27bae6096f27444e04f15297300c737fc432bfdd140e5c8ffa" Namespace="calico-apiserver" Pod="calico-apiserver-74c7564cb4-hnnls" WorkloadEndpoint="ci--4515.1.0--a--53d1559fda-k8s-calico--apiserver--74c7564cb4--hnnls-eth0" Dec 12 18:22:03.477000 audit[5507]: NETFILTER_CFG table=filter:129 family=2 entries=20 op=nft_register_rule pid=5507 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:22:03.477000 audit[5507]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffc0ad293c0 a2=0 a3=7ffc0ad293ac items=0 ppid=4115 pid=5507 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:22:03.477000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:22:03.480000 audit[5507]: NETFILTER_CFG table=nat:130 family=2 entries=14 op=nft_register_rule pid=5507 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:22:03.480000 audit[5507]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffc0ad293c0 a2=0 a3=0 items=0 ppid=4115 pid=5507 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 
18:22:03.480000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:22:03.487000 audit[5508]: NETFILTER_CFG table=filter:131 family=2 entries=41 op=nft_register_chain pid=5508 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 12 18:22:03.487000 audit[5508]: SYSCALL arch=c000003e syscall=46 success=yes exit=23076 a0=3 a1=7ffd485e6fa0 a2=0 a3=7ffd485e6f8c items=0 ppid=5096 pid=5508 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:22:03.487000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 12 18:22:03.495618 containerd[2529]: time="2025-12-12T18:22:03.495557284Z" level=info msg="connecting to shim af2b3780ea009b27bae6096f27444e04f15297300c737fc432bfdd140e5c8ffa" address="unix:///run/containerd/s/3d693b28c8de0c1651d51173679e704d4ec344909efd48e4edff3fae33f6a65e" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:22:03.524805 systemd[1]: Started cri-containerd-af2b3780ea009b27bae6096f27444e04f15297300c737fc432bfdd140e5c8ffa.scope - libcontainer container af2b3780ea009b27bae6096f27444e04f15297300c737fc432bfdd140e5c8ffa. Dec 12 18:22:03.535000 audit: BPF prog-id=240 op=LOAD Dec 12 18:22:03.536000 audit: BPF prog-id=241 op=LOAD Dec 12 18:22:03.536000 audit[5528]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=5517 pid=5528 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:22:03.536000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6166326233373830656130303962323762616536303936663237343434 Dec 12 18:22:03.536000 audit: BPF prog-id=241 op=UNLOAD Dec 12 18:22:03.536000 audit[5528]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5517 pid=5528 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:22:03.536000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6166326233373830656130303962323762616536303936663237343434 Dec 12 18:22:03.536000 audit: BPF prog-id=242 op=LOAD Dec 12 18:22:03.536000 audit[5528]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=5517 pid=5528 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:22:03.536000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6166326233373830656130303962323762616536303936663237343434 Dec 12 18:22:03.536000 audit: BPF 
prog-id=243 op=LOAD Dec 12 18:22:03.536000 audit[5528]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=5517 pid=5528 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:22:03.536000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6166326233373830656130303962323762616536303936663237343434 Dec 12 18:22:03.536000 audit: BPF prog-id=243 op=UNLOAD Dec 12 18:22:03.536000 audit[5528]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5517 pid=5528 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:22:03.536000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6166326233373830656130303962323762616536303936663237343434 Dec 12 18:22:03.536000 audit: BPF prog-id=242 op=UNLOAD Dec 12 18:22:03.536000 audit[5528]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5517 pid=5528 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:22:03.536000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6166326233373830656130303962323762616536303936663237343434 Dec 12 18:22:03.537000 audit: BPF prog-id=244 op=LOAD Dec 12 18:22:03.537000 audit[5528]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=5517 pid=5528 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:22:03.537000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6166326233373830656130303962323762616536303936663237343434 Dec 12 18:22:03.554759 systemd-networkd[2161]: cali9da0207f921: Link UP Dec 12 18:22:03.555739 systemd-networkd[2161]: cali9da0207f921: Gained carrier Dec 12 18:22:03.579554 containerd[2529]: 2025-12-12 18:22:03.369 [INFO][5458] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4515.1.0--a--53d1559fda-k8s-coredns--674b8bbfcf--hkwbv-eth0 coredns-674b8bbfcf- kube-system 3cbfe13e-d642-4617-a3ea-e4339732c6c2 819 0 2025-12-12 18:21:25 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4515.1.0-a-53d1559fda coredns-674b8bbfcf-hkwbv eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali9da0207f921 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 
0 }] [] }} ContainerID="d4c3817357d298dbf425d9d0bb94a2b30469af122a139eaad1041d54bfa502d6" Namespace="kube-system" Pod="coredns-674b8bbfcf-hkwbv" WorkloadEndpoint="ci--4515.1.0--a--53d1559fda-k8s-coredns--674b8bbfcf--hkwbv-" Dec 12 18:22:03.579554 containerd[2529]: 2025-12-12 18:22:03.369 [INFO][5458] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d4c3817357d298dbf425d9d0bb94a2b30469af122a139eaad1041d54bfa502d6" Namespace="kube-system" Pod="coredns-674b8bbfcf-hkwbv" WorkloadEndpoint="ci--4515.1.0--a--53d1559fda-k8s-coredns--674b8bbfcf--hkwbv-eth0" Dec 12 18:22:03.579554 containerd[2529]: 2025-12-12 18:22:03.402 [INFO][5483] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d4c3817357d298dbf425d9d0bb94a2b30469af122a139eaad1041d54bfa502d6" HandleID="k8s-pod-network.d4c3817357d298dbf425d9d0bb94a2b30469af122a139eaad1041d54bfa502d6" Workload="ci--4515.1.0--a--53d1559fda-k8s-coredns--674b8bbfcf--hkwbv-eth0" Dec 12 18:22:03.579728 containerd[2529]: 2025-12-12 18:22:03.402 [INFO][5483] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="d4c3817357d298dbf425d9d0bb94a2b30469af122a139eaad1041d54bfa502d6" HandleID="k8s-pod-network.d4c3817357d298dbf425d9d0bb94a2b30469af122a139eaad1041d54bfa502d6" Workload="ci--4515.1.0--a--53d1559fda-k8s-coredns--674b8bbfcf--hkwbv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d4fe0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4515.1.0-a-53d1559fda", "pod":"coredns-674b8bbfcf-hkwbv", "timestamp":"2025-12-12 18:22:03.402010123 +0000 UTC"}, Hostname:"ci-4515.1.0-a-53d1559fda", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 18:22:03.579728 containerd[2529]: 2025-12-12 18:22:03.402 [INFO][5483] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 18:22:03.579728 containerd[2529]: 2025-12-12 18:22:03.429 [INFO][5483] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
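Editorial note: the audit records above carry the invoking command line in the PROCTITLE field as a hex string with NUL-separated argv elements. A minimal Go sketch (not part of the logged system) that decodes it back into a readable command line; the constant below is the PROCTITLE value from the iptables-restore record above:

// decode_proctitle.go - editorial helper sketch for reading audit PROCTITLE fields.
// The kernel audit subsystem logs argv as one hex blob with NUL separators.
package main

import (
	"encoding/hex"
	"fmt"
	"strings"
)

func decodeProctitle(h string) (string, error) {
	raw, err := hex.DecodeString(h)
	if err != nil {
		return "", err
	}
	// argv elements are NUL-separated; join them with spaces for display.
	return strings.ReplaceAll(string(raw), "\x00", " "), nil
}

func main() {
	// PROCTITLE value from the iptables-restore audit record above.
	const p = "69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273"
	cmd, err := decodeProctitle(p)
	if err != nil {
		panic(err)
	}
	fmt.Println(cmd) // iptables-restore -w 5 -W 100000 --noflush --counters
}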
Dec 12 18:22:03.579728 containerd[2529]: 2025-12-12 18:22:03.429 [INFO][5483] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4515.1.0-a-53d1559fda' Dec 12 18:22:03.579728 containerd[2529]: 2025-12-12 18:22:03.508 [INFO][5483] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d4c3817357d298dbf425d9d0bb94a2b30469af122a139eaad1041d54bfa502d6" host="ci-4515.1.0-a-53d1559fda" Dec 12 18:22:03.579728 containerd[2529]: 2025-12-12 18:22:03.512 [INFO][5483] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4515.1.0-a-53d1559fda" Dec 12 18:22:03.579728 containerd[2529]: 2025-12-12 18:22:03.517 [INFO][5483] ipam/ipam.go 511: Trying affinity for 192.168.51.0/26 host="ci-4515.1.0-a-53d1559fda" Dec 12 18:22:03.579728 containerd[2529]: 2025-12-12 18:22:03.521 [INFO][5483] ipam/ipam.go 158: Attempting to load block cidr=192.168.51.0/26 host="ci-4515.1.0-a-53d1559fda" Dec 12 18:22:03.579728 containerd[2529]: 2025-12-12 18:22:03.523 [INFO][5483] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.51.0/26 host="ci-4515.1.0-a-53d1559fda" Dec 12 18:22:03.579952 containerd[2529]: 2025-12-12 18:22:03.523 [INFO][5483] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.51.0/26 handle="k8s-pod-network.d4c3817357d298dbf425d9d0bb94a2b30469af122a139eaad1041d54bfa502d6" host="ci-4515.1.0-a-53d1559fda" Dec 12 18:22:03.579952 containerd[2529]: 2025-12-12 18:22:03.524 [INFO][5483] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.d4c3817357d298dbf425d9d0bb94a2b30469af122a139eaad1041d54bfa502d6 Dec 12 18:22:03.579952 containerd[2529]: 2025-12-12 18:22:03.530 [INFO][5483] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.51.0/26 handle="k8s-pod-network.d4c3817357d298dbf425d9d0bb94a2b30469af122a139eaad1041d54bfa502d6" host="ci-4515.1.0-a-53d1559fda" Dec 12 18:22:03.579952 containerd[2529]: 2025-12-12 18:22:03.539 [INFO][5483] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.51.4/26] block=192.168.51.0/26 handle="k8s-pod-network.d4c3817357d298dbf425d9d0bb94a2b30469af122a139eaad1041d54bfa502d6" host="ci-4515.1.0-a-53d1559fda" Dec 12 18:22:03.579952 containerd[2529]: 2025-12-12 18:22:03.539 [INFO][5483] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.51.4/26] handle="k8s-pod-network.d4c3817357d298dbf425d9d0bb94a2b30469af122a139eaad1041d54bfa502d6" host="ci-4515.1.0-a-53d1559fda" Dec 12 18:22:03.579952 containerd[2529]: 2025-12-12 18:22:03.540 [INFO][5483] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
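Editorial note: the ipam/ipam.go entries above trace the allocation for coredns-674b8bbfcf-hkwbv: acquire the host-wide IPAM lock, confirm the block this host is affine to (192.168.51.0/26), claim the next free address for the handle, write the block back, release the lock. The Go sketch below mirrors only those steps; it is not Calico's implementation, and the seeded "already taken" addresses are an assumption chosen so the sketch reproduces the .4 claim shown in the log:

// ipam_sketch.go - editorial sketch of block-affinity assignment, not Calico code.
package main

import (
	"fmt"
	"net"
	"sync"
)

type block struct {
	cidr      *net.IPNet
	allocated map[string]string // IP -> handle, e.g. k8s-pod-network.<containerID>
}

var hostWideIPAMLock sync.Mutex // stands in for the "host-wide IPAM lock" in the log

// assignFromBlock claims the first unallocated address in the affine block.
func assignFromBlock(b *block, handle string) (net.IP, error) {
	hostWideIPAMLock.Lock()
	defer hostWideIPAMLock.Unlock() // "Released host-wide IPAM lock."

	base := b.cidr.IP.Mask(b.cidr.Mask).To4()
	for i := 0; i < 64; i++ { // a /26 block holds 64 addresses
		candidate := net.IPv4(base[0], base[1], base[2], base[3]+byte(i))
		if _, taken := b.allocated[candidate.String()]; !taken {
			b.allocated[candidate.String()] = handle // "Writing block in order to claim IPs"
			return candidate, nil
		}
	}
	return nil, fmt.Errorf("block %s is full", b.cidr)
}

func main() {
	_, cidr, _ := net.ParseCIDR("192.168.51.0/26")
	b := &block{cidr: cidr, allocated: map[string]string{
		// Assumed already reserved or handed out; the log above only shows .3
		// in use in this block before this claim.
		"192.168.51.0": "x", "192.168.51.1": "x", "192.168.51.2": "x", "192.168.51.3": "x",
	}}
	ip, _ := assignFromBlock(b, "k8s-pod-network.d4c38173")
	fmt.Println("claimed", ip) // claimed 192.168.51.4, matching the entry above
}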
Dec 12 18:22:03.579952 containerd[2529]: 2025-12-12 18:22:03.540 [INFO][5483] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.51.4/26] IPv6=[] ContainerID="d4c3817357d298dbf425d9d0bb94a2b30469af122a139eaad1041d54bfa502d6" HandleID="k8s-pod-network.d4c3817357d298dbf425d9d0bb94a2b30469af122a139eaad1041d54bfa502d6" Workload="ci--4515.1.0--a--53d1559fda-k8s-coredns--674b8bbfcf--hkwbv-eth0" Dec 12 18:22:03.580112 containerd[2529]: 2025-12-12 18:22:03.547 [INFO][5458] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d4c3817357d298dbf425d9d0bb94a2b30469af122a139eaad1041d54bfa502d6" Namespace="kube-system" Pod="coredns-674b8bbfcf-hkwbv" WorkloadEndpoint="ci--4515.1.0--a--53d1559fda-k8s-coredns--674b8bbfcf--hkwbv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--a--53d1559fda-k8s-coredns--674b8bbfcf--hkwbv-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"3cbfe13e-d642-4617-a3ea-e4339732c6c2", ResourceVersion:"819", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 21, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-a-53d1559fda", ContainerID:"", Pod:"coredns-674b8bbfcf-hkwbv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.51.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9da0207f921", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:22:03.580112 containerd[2529]: 2025-12-12 18:22:03.547 [INFO][5458] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.51.4/32] ContainerID="d4c3817357d298dbf425d9d0bb94a2b30469af122a139eaad1041d54bfa502d6" Namespace="kube-system" Pod="coredns-674b8bbfcf-hkwbv" WorkloadEndpoint="ci--4515.1.0--a--53d1559fda-k8s-coredns--674b8bbfcf--hkwbv-eth0" Dec 12 18:22:03.580112 containerd[2529]: 2025-12-12 18:22:03.547 [INFO][5458] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9da0207f921 ContainerID="d4c3817357d298dbf425d9d0bb94a2b30469af122a139eaad1041d54bfa502d6" Namespace="kube-system" Pod="coredns-674b8bbfcf-hkwbv" WorkloadEndpoint="ci--4515.1.0--a--53d1559fda-k8s-coredns--674b8bbfcf--hkwbv-eth0" Dec 12 18:22:03.580112 containerd[2529]: 2025-12-12 18:22:03.556 [INFO][5458] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d4c3817357d298dbf425d9d0bb94a2b30469af122a139eaad1041d54bfa502d6" Namespace="kube-system" 
Pod="coredns-674b8bbfcf-hkwbv" WorkloadEndpoint="ci--4515.1.0--a--53d1559fda-k8s-coredns--674b8bbfcf--hkwbv-eth0" Dec 12 18:22:03.580112 containerd[2529]: 2025-12-12 18:22:03.559 [INFO][5458] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d4c3817357d298dbf425d9d0bb94a2b30469af122a139eaad1041d54bfa502d6" Namespace="kube-system" Pod="coredns-674b8bbfcf-hkwbv" WorkloadEndpoint="ci--4515.1.0--a--53d1559fda-k8s-coredns--674b8bbfcf--hkwbv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--a--53d1559fda-k8s-coredns--674b8bbfcf--hkwbv-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"3cbfe13e-d642-4617-a3ea-e4339732c6c2", ResourceVersion:"819", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 21, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-a-53d1559fda", ContainerID:"d4c3817357d298dbf425d9d0bb94a2b30469af122a139eaad1041d54bfa502d6", Pod:"coredns-674b8bbfcf-hkwbv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.51.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9da0207f921", MAC:"32:af:58:42:31:ff", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:22:03.580112 containerd[2529]: 2025-12-12 18:22:03.574 [INFO][5458] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d4c3817357d298dbf425d9d0bb94a2b30469af122a139eaad1041d54bfa502d6" Namespace="kube-system" Pod="coredns-674b8bbfcf-hkwbv" WorkloadEndpoint="ci--4515.1.0--a--53d1559fda-k8s-coredns--674b8bbfcf--hkwbv-eth0" Dec 12 18:22:03.608000 audit[5566]: NETFILTER_CFG table=filter:132 family=2 entries=50 op=nft_register_chain pid=5566 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 12 18:22:03.608000 audit[5566]: SYSCALL arch=c000003e syscall=46 success=yes exit=24928 a0=3 a1=7fff60766b10 a2=0 a3=7fff60766afc items=0 ppid=5096 pid=5566 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:22:03.608000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 12 18:22:03.611952 containerd[2529]: time="2025-12-12T18:22:03.611867786Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74c7564cb4-hnnls,Uid:588d2c87-beaf-4e83-ba6f-8e3f0d453589,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"af2b3780ea009b27bae6096f27444e04f15297300c737fc432bfdd140e5c8ffa\"" Dec 12 18:22:03.613028 containerd[2529]: time="2025-12-12T18:22:03.612984443Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 18:22:03.640590 containerd[2529]: time="2025-12-12T18:22:03.640509840Z" level=info msg="connecting to shim d4c3817357d298dbf425d9d0bb94a2b30469af122a139eaad1041d54bfa502d6" address="unix:///run/containerd/s/79777724cfdd98deec209883f64b718d7467b1ee3c4fd2b1c4cb27387341773f" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:22:03.667697 systemd[1]: Started cri-containerd-d4c3817357d298dbf425d9d0bb94a2b30469af122a139eaad1041d54bfa502d6.scope - libcontainer container d4c3817357d298dbf425d9d0bb94a2b30469af122a139eaad1041d54bfa502d6. Dec 12 18:22:03.676000 audit: BPF prog-id=245 op=LOAD Dec 12 18:22:03.676000 audit: BPF prog-id=246 op=LOAD Dec 12 18:22:03.676000 audit[5587]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=5576 pid=5587 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:22:03.676000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6434633338313733353764323938646266343235643964306262393461 Dec 12 18:22:03.676000 audit: BPF prog-id=246 op=UNLOAD Dec 12 18:22:03.676000 audit[5587]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5576 pid=5587 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:22:03.676000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6434633338313733353764323938646266343235643964306262393461 Dec 12 18:22:03.676000 audit: BPF prog-id=247 op=LOAD Dec 12 18:22:03.676000 audit[5587]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=5576 pid=5587 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:22:03.676000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6434633338313733353764323938646266343235643964306262393461 Dec 12 18:22:03.677000 audit: BPF prog-id=248 op=LOAD Dec 12 18:22:03.677000 audit[5587]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=5576 pid=5587 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:22:03.677000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6434633338313733353764323938646266343235643964306262393461 Dec 12 18:22:03.677000 audit: BPF prog-id=248 op=UNLOAD Dec 12 18:22:03.677000 audit[5587]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5576 pid=5587 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:22:03.677000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6434633338313733353764323938646266343235643964306262393461 Dec 12 18:22:03.677000 audit: BPF prog-id=247 op=UNLOAD Dec 12 18:22:03.677000 audit[5587]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5576 pid=5587 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:22:03.677000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6434633338313733353764323938646266343235643964306262393461 Dec 12 18:22:03.677000 audit: BPF prog-id=249 op=LOAD Dec 12 18:22:03.677000 audit[5587]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=5576 pid=5587 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:22:03.677000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6434633338313733353764323938646266343235643964306262393461 Dec 12 18:22:03.708185 containerd[2529]: time="2025-12-12T18:22:03.708149790Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-hkwbv,Uid:3cbfe13e-d642-4617-a3ea-e4339732c6c2,Namespace:kube-system,Attempt:0,} returns sandbox id \"d4c3817357d298dbf425d9d0bb94a2b30469af122a139eaad1041d54bfa502d6\"" Dec 12 18:22:03.714853 containerd[2529]: time="2025-12-12T18:22:03.714752641Z" level=info msg="CreateContainer within sandbox \"d4c3817357d298dbf425d9d0bb94a2b30469af122a139eaad1041d54bfa502d6\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 12 18:22:03.737082 containerd[2529]: time="2025-12-12T18:22:03.737054355Z" level=info msg="Container 0c5b4d60d160dea4efb67628117638e146171ba84966129e262a378d79022b66: CDI devices from CRI Config.CDIDevices: []" Dec 12 18:22:03.749187 containerd[2529]: time="2025-12-12T18:22:03.749163177Z" level=info msg="CreateContainer within sandbox \"d4c3817357d298dbf425d9d0bb94a2b30469af122a139eaad1041d54bfa502d6\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0c5b4d60d160dea4efb67628117638e146171ba84966129e262a378d79022b66\"" Dec 12 18:22:03.749744 containerd[2529]: time="2025-12-12T18:22:03.749722134Z" level=info msg="StartContainer for 
\"0c5b4d60d160dea4efb67628117638e146171ba84966129e262a378d79022b66\"" Dec 12 18:22:03.750797 containerd[2529]: time="2025-12-12T18:22:03.750744630Z" level=info msg="connecting to shim 0c5b4d60d160dea4efb67628117638e146171ba84966129e262a378d79022b66" address="unix:///run/containerd/s/79777724cfdd98deec209883f64b718d7467b1ee3c4fd2b1c4cb27387341773f" protocol=ttrpc version=3 Dec 12 18:22:03.771767 systemd[1]: Started cri-containerd-0c5b4d60d160dea4efb67628117638e146171ba84966129e262a378d79022b66.scope - libcontainer container 0c5b4d60d160dea4efb67628117638e146171ba84966129e262a378d79022b66. Dec 12 18:22:03.790000 audit: BPF prog-id=250 op=LOAD Dec 12 18:22:03.791000 audit: BPF prog-id=251 op=LOAD Dec 12 18:22:03.791000 audit[5616]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178238 a2=98 a3=0 items=0 ppid=5576 pid=5616 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:22:03.791000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3063356234643630643136306465613465666236373632383131373633 Dec 12 18:22:03.791000 audit: BPF prog-id=251 op=UNLOAD Dec 12 18:22:03.791000 audit[5616]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5576 pid=5616 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:22:03.791000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3063356234643630643136306465613465666236373632383131373633 Dec 12 18:22:03.791000 audit: BPF prog-id=252 op=LOAD Dec 12 18:22:03.791000 audit[5616]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178488 a2=98 a3=0 items=0 ppid=5576 pid=5616 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:22:03.791000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3063356234643630643136306465613465666236373632383131373633 Dec 12 18:22:03.791000 audit: BPF prog-id=253 op=LOAD Dec 12 18:22:03.791000 audit[5616]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000178218 a2=98 a3=0 items=0 ppid=5576 pid=5616 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:22:03.791000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3063356234643630643136306465613465666236373632383131373633 Dec 12 18:22:03.791000 audit: BPF prog-id=253 op=UNLOAD Dec 12 18:22:03.791000 audit[5616]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 
ppid=5576 pid=5616 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:22:03.791000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3063356234643630643136306465613465666236373632383131373633 Dec 12 18:22:03.791000 audit: BPF prog-id=252 op=UNLOAD Dec 12 18:22:03.791000 audit[5616]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5576 pid=5616 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:22:03.791000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3063356234643630643136306465613465666236373632383131373633 Dec 12 18:22:03.791000 audit: BPF prog-id=254 op=LOAD Dec 12 18:22:03.791000 audit[5616]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001786e8 a2=98 a3=0 items=0 ppid=5576 pid=5616 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:22:03.791000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3063356234643630643136306465613465666236373632383131373633 Dec 12 18:22:03.818028 containerd[2529]: time="2025-12-12T18:22:03.817995023Z" level=info msg="StartContainer for \"0c5b4d60d160dea4efb67628117638e146171ba84966129e262a378d79022b66\" returns successfully" Dec 12 18:22:03.877351 containerd[2529]: time="2025-12-12T18:22:03.877083691Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:22:03.880240 containerd[2529]: time="2025-12-12T18:22:03.880154498Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 18:22:03.880436 containerd[2529]: time="2025-12-12T18:22:03.880353513Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 12 18:22:03.880712 kubelet[4009]: E1212 18:22:03.880655 4009 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:22:03.880712 kubelet[4009]: E1212 18:22:03.880697 4009 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:22:03.881688 kubelet[4009]: E1212 18:22:03.881023 4009 
kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f2w4t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-74c7564cb4-hnnls_calico-apiserver(588d2c87-beaf-4e83-ba6f-8e3f0d453589): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 18:22:03.882714 kubelet[4009]: E1212 18:22:03.882659 4009 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74c7564cb4-hnnls" podUID="588d2c87-beaf-4e83-ba6f-8e3f0d453589" Dec 12 18:22:04.235763 systemd-networkd[2161]: calid7d8b23196e: Gained IPv6LL Dec 12 18:22:04.312024 containerd[2529]: time="2025-12-12T18:22:04.311918837Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-4rvrk,Uid:985aeb6e-874b-4206-ac9b-71fb5aaf32cf,Namespace:calico-system,Attempt:0,}" Dec 12 18:22:04.312381 containerd[2529]: time="2025-12-12T18:22:04.311919283Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bq297,Uid:10291b6f-a9ca-4c45-b211-06a17f4d693f,Namespace:calico-system,Attempt:0,}" Dec 12 18:22:04.457654 kubelet[4009]: E1212 18:22:04.457611 4009 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74c7564cb4-hnnls" podUID="588d2c87-beaf-4e83-ba6f-8e3f0d453589" Dec 12 18:22:04.461904 kubelet[4009]: E1212 18:22:04.461872 4009 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74c7564cb4-4f4mj" podUID="c72f9f09-ad62-40d6-8632-b537f3032703" Dec 12 18:22:04.496000 audit[5690]: NETFILTER_CFG table=filter:133 family=2 entries=20 op=nft_register_rule pid=5690 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:22:04.500068 systemd-networkd[2161]: calib43ac761da3: Link UP Dec 12 18:22:04.502194 systemd-networkd[2161]: calib43ac761da3: Gained carrier Dec 12 18:22:04.496000 audit[5690]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffc12ba1670 a2=0 a3=7ffc12ba165c items=0 ppid=4115 pid=5690 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:22:04.496000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:22:04.505000 audit[5690]: NETFILTER_CFG table=nat:134 family=2 entries=14 op=nft_register_rule pid=5690 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:22:04.505000 audit[5690]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffc12ba1670 a2=0 a3=0 items=0 ppid=4115 pid=5690 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:22:04.505000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:22:04.524331 kubelet[4009]: I1212 18:22:04.524169 4009 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-hkwbv" podStartSLOduration=39.524152616 podStartE2EDuration="39.524152616s" podCreationTimestamp="2025-12-12 18:21:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 18:22:04.51254266 +0000 UTC m=+44.358426338" watchObservedRunningTime="2025-12-12 18:22:04.524152616 +0000 UTC m=+44.370036268" Dec 12 18:22:04.524979 containerd[2529]: 2025-12-12 18:22:04.373 [INFO][5652] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4515.1.0--a--53d1559fda-k8s-goldmane--666569f655--4rvrk-eth0 goldmane-666569f655- calico-system 985aeb6e-874b-4206-ac9b-71fb5aaf32cf 822 0 2025-12-12 18:21:35 +0000 UTC 
map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4515.1.0-a-53d1559fda goldmane-666569f655-4rvrk eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calib43ac761da3 [] [] }} ContainerID="5813abbbf4bac98ac14ac0be6f40d259692438853d9b15332dd3fcbdf8d845f0" Namespace="calico-system" Pod="goldmane-666569f655-4rvrk" WorkloadEndpoint="ci--4515.1.0--a--53d1559fda-k8s-goldmane--666569f655--4rvrk-" Dec 12 18:22:04.524979 containerd[2529]: 2025-12-12 18:22:04.373 [INFO][5652] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5813abbbf4bac98ac14ac0be6f40d259692438853d9b15332dd3fcbdf8d845f0" Namespace="calico-system" Pod="goldmane-666569f655-4rvrk" WorkloadEndpoint="ci--4515.1.0--a--53d1559fda-k8s-goldmane--666569f655--4rvrk-eth0" Dec 12 18:22:04.524979 containerd[2529]: 2025-12-12 18:22:04.433 [INFO][5677] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5813abbbf4bac98ac14ac0be6f40d259692438853d9b15332dd3fcbdf8d845f0" HandleID="k8s-pod-network.5813abbbf4bac98ac14ac0be6f40d259692438853d9b15332dd3fcbdf8d845f0" Workload="ci--4515.1.0--a--53d1559fda-k8s-goldmane--666569f655--4rvrk-eth0" Dec 12 18:22:04.524979 containerd[2529]: 2025-12-12 18:22:04.433 [INFO][5677] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="5813abbbf4bac98ac14ac0be6f40d259692438853d9b15332dd3fcbdf8d845f0" HandleID="k8s-pod-network.5813abbbf4bac98ac14ac0be6f40d259692438853d9b15332dd3fcbdf8d845f0" Workload="ci--4515.1.0--a--53d1559fda-k8s-goldmane--666569f655--4rvrk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cefe0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4515.1.0-a-53d1559fda", "pod":"goldmane-666569f655-4rvrk", "timestamp":"2025-12-12 18:22:04.433160073 +0000 UTC"}, Hostname:"ci-4515.1.0-a-53d1559fda", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 18:22:04.524979 containerd[2529]: 2025-12-12 18:22:04.433 [INFO][5677] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 18:22:04.524979 containerd[2529]: 2025-12-12 18:22:04.433 [INFO][5677] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 12 18:22:04.524979 containerd[2529]: 2025-12-12 18:22:04.433 [INFO][5677] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4515.1.0-a-53d1559fda' Dec 12 18:22:04.524979 containerd[2529]: 2025-12-12 18:22:04.439 [INFO][5677] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5813abbbf4bac98ac14ac0be6f40d259692438853d9b15332dd3fcbdf8d845f0" host="ci-4515.1.0-a-53d1559fda" Dec 12 18:22:04.524979 containerd[2529]: 2025-12-12 18:22:04.443 [INFO][5677] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4515.1.0-a-53d1559fda" Dec 12 18:22:04.524979 containerd[2529]: 2025-12-12 18:22:04.449 [INFO][5677] ipam/ipam.go 511: Trying affinity for 192.168.51.0/26 host="ci-4515.1.0-a-53d1559fda" Dec 12 18:22:04.524979 containerd[2529]: 2025-12-12 18:22:04.452 [INFO][5677] ipam/ipam.go 158: Attempting to load block cidr=192.168.51.0/26 host="ci-4515.1.0-a-53d1559fda" Dec 12 18:22:04.524979 containerd[2529]: 2025-12-12 18:22:04.457 [INFO][5677] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.51.0/26 host="ci-4515.1.0-a-53d1559fda" Dec 12 18:22:04.524979 containerd[2529]: 2025-12-12 18:22:04.457 [INFO][5677] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.51.0/26 handle="k8s-pod-network.5813abbbf4bac98ac14ac0be6f40d259692438853d9b15332dd3fcbdf8d845f0" host="ci-4515.1.0-a-53d1559fda" Dec 12 18:22:04.524979 containerd[2529]: 2025-12-12 18:22:04.464 [INFO][5677] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.5813abbbf4bac98ac14ac0be6f40d259692438853d9b15332dd3fcbdf8d845f0 Dec 12 18:22:04.524979 containerd[2529]: 2025-12-12 18:22:04.473 [INFO][5677] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.51.0/26 handle="k8s-pod-network.5813abbbf4bac98ac14ac0be6f40d259692438853d9b15332dd3fcbdf8d845f0" host="ci-4515.1.0-a-53d1559fda" Dec 12 18:22:04.524979 containerd[2529]: 2025-12-12 18:22:04.487 [INFO][5677] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.51.5/26] block=192.168.51.0/26 handle="k8s-pod-network.5813abbbf4bac98ac14ac0be6f40d259692438853d9b15332dd3fcbdf8d845f0" host="ci-4515.1.0-a-53d1559fda" Dec 12 18:22:04.524979 containerd[2529]: 2025-12-12 18:22:04.487 [INFO][5677] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.51.5/26] handle="k8s-pod-network.5813abbbf4bac98ac14ac0be6f40d259692438853d9b15332dd3fcbdf8d845f0" host="ci-4515.1.0-a-53d1559fda" Dec 12 18:22:04.524979 containerd[2529]: 2025-12-12 18:22:04.487 [INFO][5677] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
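Editorial note: the kubelet entries above move the calico-apiserver container from ErrImagePull to ImagePullBackOff because ghcr.io/flatcar/calico/apiserver:v3.30.4 cannot be resolved (404). As a rough illustration of the retry behaviour only: the sketch below computes a capped exponential back-off schedule. The 10-second initial delay and 300-second cap are the commonly cited kubelet defaults and are assumed here, not taken from this log:

// backoff_sketch.go - editorial illustration of a capped exponential back-off.
package main

import (
	"fmt"
	"time"
)

func backoffSchedule(initial, max time.Duration, attempts int) []time.Duration {
	out := make([]time.Duration, 0, attempts)
	d := initial
	for i := 0; i < attempts; i++ {
		out = append(out, d)
		d *= 2
		if d > max {
			d = max
		}
	}
	return out
}

func main() {
	// Delay before each retry of the failing pull of
	// ghcr.io/flatcar/calico/apiserver:v3.30.4 (assumed defaults: 10s base, 300s cap).
	for i, d := range backoffSchedule(10*time.Second, 300*time.Second, 7) {
		fmt.Printf("retry %d after %v\n", i+1, d)
	}
	// Prints: 10s, 20s, 40s, 1m20s, 2m40s, 5m0s, 5m0s
}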
Dec 12 18:22:04.524979 containerd[2529]: 2025-12-12 18:22:04.487 [INFO][5677] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.51.5/26] IPv6=[] ContainerID="5813abbbf4bac98ac14ac0be6f40d259692438853d9b15332dd3fcbdf8d845f0" HandleID="k8s-pod-network.5813abbbf4bac98ac14ac0be6f40d259692438853d9b15332dd3fcbdf8d845f0" Workload="ci--4515.1.0--a--53d1559fda-k8s-goldmane--666569f655--4rvrk-eth0" Dec 12 18:22:04.525594 containerd[2529]: 2025-12-12 18:22:04.492 [INFO][5652] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5813abbbf4bac98ac14ac0be6f40d259692438853d9b15332dd3fcbdf8d845f0" Namespace="calico-system" Pod="goldmane-666569f655-4rvrk" WorkloadEndpoint="ci--4515.1.0--a--53d1559fda-k8s-goldmane--666569f655--4rvrk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--a--53d1559fda-k8s-goldmane--666569f655--4rvrk-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"985aeb6e-874b-4206-ac9b-71fb5aaf32cf", ResourceVersion:"822", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 21, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-a-53d1559fda", ContainerID:"", Pod:"goldmane-666569f655-4rvrk", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.51.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calib43ac761da3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:22:04.525594 containerd[2529]: 2025-12-12 18:22:04.492 [INFO][5652] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.51.5/32] ContainerID="5813abbbf4bac98ac14ac0be6f40d259692438853d9b15332dd3fcbdf8d845f0" Namespace="calico-system" Pod="goldmane-666569f655-4rvrk" WorkloadEndpoint="ci--4515.1.0--a--53d1559fda-k8s-goldmane--666569f655--4rvrk-eth0" Dec 12 18:22:04.525594 containerd[2529]: 2025-12-12 18:22:04.492 [INFO][5652] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib43ac761da3 ContainerID="5813abbbf4bac98ac14ac0be6f40d259692438853d9b15332dd3fcbdf8d845f0" Namespace="calico-system" Pod="goldmane-666569f655-4rvrk" WorkloadEndpoint="ci--4515.1.0--a--53d1559fda-k8s-goldmane--666569f655--4rvrk-eth0" Dec 12 18:22:04.525594 containerd[2529]: 2025-12-12 18:22:04.505 [INFO][5652] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5813abbbf4bac98ac14ac0be6f40d259692438853d9b15332dd3fcbdf8d845f0" Namespace="calico-system" Pod="goldmane-666569f655-4rvrk" WorkloadEndpoint="ci--4515.1.0--a--53d1559fda-k8s-goldmane--666569f655--4rvrk-eth0" Dec 12 18:22:04.525594 containerd[2529]: 2025-12-12 18:22:04.506 [INFO][5652] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5813abbbf4bac98ac14ac0be6f40d259692438853d9b15332dd3fcbdf8d845f0" 
Namespace="calico-system" Pod="goldmane-666569f655-4rvrk" WorkloadEndpoint="ci--4515.1.0--a--53d1559fda-k8s-goldmane--666569f655--4rvrk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--a--53d1559fda-k8s-goldmane--666569f655--4rvrk-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"985aeb6e-874b-4206-ac9b-71fb5aaf32cf", ResourceVersion:"822", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 21, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-a-53d1559fda", ContainerID:"5813abbbf4bac98ac14ac0be6f40d259692438853d9b15332dd3fcbdf8d845f0", Pod:"goldmane-666569f655-4rvrk", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.51.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calib43ac761da3", MAC:"f2:7c:19:1a:08:37", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:22:04.525594 containerd[2529]: 2025-12-12 18:22:04.522 [INFO][5652] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5813abbbf4bac98ac14ac0be6f40d259692438853d9b15332dd3fcbdf8d845f0" Namespace="calico-system" Pod="goldmane-666569f655-4rvrk" WorkloadEndpoint="ci--4515.1.0--a--53d1559fda-k8s-goldmane--666569f655--4rvrk-eth0" Dec 12 18:22:04.536000 audit[5699]: NETFILTER_CFG table=filter:135 family=2 entries=17 op=nft_register_rule pid=5699 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:22:04.536000 audit[5699]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffd36f6e760 a2=0 a3=7ffd36f6e74c items=0 ppid=4115 pid=5699 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:22:04.536000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:22:04.539000 audit[5699]: NETFILTER_CFG table=nat:136 family=2 entries=35 op=nft_register_chain pid=5699 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:22:04.539000 audit[5699]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffd36f6e760 a2=0 a3=7ffd36f6e74c items=0 ppid=4115 pid=5699 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:22:04.539000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:22:04.576000 audit[5703]: NETFILTER_CFG table=filter:137 family=2 entries=56 op=nft_register_chain pid=5703 subj=system_u:system_r:kernel_t:s0 
comm="iptables-nft-re" Dec 12 18:22:04.576000 audit[5703]: SYSCALL arch=c000003e syscall=46 success=yes exit=28744 a0=3 a1=7ffc33440070 a2=0 a3=7ffc3344005c items=0 ppid=5096 pid=5703 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:22:04.576000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 12 18:22:04.583886 containerd[2529]: time="2025-12-12T18:22:04.583851723Z" level=info msg="connecting to shim 5813abbbf4bac98ac14ac0be6f40d259692438853d9b15332dd3fcbdf8d845f0" address="unix:///run/containerd/s/0f21cf24b4c759089c656f87e3e060aa3fe3c5d0dde4c94f6fbf1e1609f1ac59" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:22:04.590884 systemd-networkd[2161]: cali05bb3c40a4f: Link UP Dec 12 18:22:04.592797 systemd-networkd[2161]: cali05bb3c40a4f: Gained carrier Dec 12 18:22:04.619353 containerd[2529]: 2025-12-12 18:22:04.383 [INFO][5656] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4515.1.0--a--53d1559fda-k8s-csi--node--driver--bq297-eth0 csi-node-driver- calico-system 10291b6f-a9ca-4c45-b211-06a17f4d693f 703 0 2025-12-12 18:21:37 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4515.1.0-a-53d1559fda csi-node-driver-bq297 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali05bb3c40a4f [] [] }} ContainerID="2343f6261093697a686ff7383f37cd07bf2f59f553b679a0c95cf3501495bbcd" Namespace="calico-system" Pod="csi-node-driver-bq297" WorkloadEndpoint="ci--4515.1.0--a--53d1559fda-k8s-csi--node--driver--bq297-" Dec 12 18:22:04.619353 containerd[2529]: 2025-12-12 18:22:04.383 [INFO][5656] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2343f6261093697a686ff7383f37cd07bf2f59f553b679a0c95cf3501495bbcd" Namespace="calico-system" Pod="csi-node-driver-bq297" WorkloadEndpoint="ci--4515.1.0--a--53d1559fda-k8s-csi--node--driver--bq297-eth0" Dec 12 18:22:04.619353 containerd[2529]: 2025-12-12 18:22:04.435 [INFO][5682] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2343f6261093697a686ff7383f37cd07bf2f59f553b679a0c95cf3501495bbcd" HandleID="k8s-pod-network.2343f6261093697a686ff7383f37cd07bf2f59f553b679a0c95cf3501495bbcd" Workload="ci--4515.1.0--a--53d1559fda-k8s-csi--node--driver--bq297-eth0" Dec 12 18:22:04.619353 containerd[2529]: 2025-12-12 18:22:04.436 [INFO][5682] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="2343f6261093697a686ff7383f37cd07bf2f59f553b679a0c95cf3501495bbcd" HandleID="k8s-pod-network.2343f6261093697a686ff7383f37cd07bf2f59f553b679a0c95cf3501495bbcd" Workload="ci--4515.1.0--a--53d1559fda-k8s-csi--node--driver--bq297-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5660), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4515.1.0-a-53d1559fda", "pod":"csi-node-driver-bq297", "timestamp":"2025-12-12 18:22:04.435977736 +0000 UTC"}, Hostname:"ci-4515.1.0-a-53d1559fda", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 18:22:04.619353 containerd[2529]: 2025-12-12 18:22:04.436 [INFO][5682] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 18:22:04.619353 containerd[2529]: 2025-12-12 18:22:04.487 [INFO][5682] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 12 18:22:04.619353 containerd[2529]: 2025-12-12 18:22:04.487 [INFO][5682] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4515.1.0-a-53d1559fda' Dec 12 18:22:04.619353 containerd[2529]: 2025-12-12 18:22:04.539 [INFO][5682] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2343f6261093697a686ff7383f37cd07bf2f59f553b679a0c95cf3501495bbcd" host="ci-4515.1.0-a-53d1559fda" Dec 12 18:22:04.619353 containerd[2529]: 2025-12-12 18:22:04.545 [INFO][5682] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4515.1.0-a-53d1559fda" Dec 12 18:22:04.619353 containerd[2529]: 2025-12-12 18:22:04.549 [INFO][5682] ipam/ipam.go 511: Trying affinity for 192.168.51.0/26 host="ci-4515.1.0-a-53d1559fda" Dec 12 18:22:04.619353 containerd[2529]: 2025-12-12 18:22:04.551 [INFO][5682] ipam/ipam.go 158: Attempting to load block cidr=192.168.51.0/26 host="ci-4515.1.0-a-53d1559fda" Dec 12 18:22:04.619353 containerd[2529]: 2025-12-12 18:22:04.553 [INFO][5682] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.51.0/26 host="ci-4515.1.0-a-53d1559fda" Dec 12 18:22:04.619353 containerd[2529]: 2025-12-12 18:22:04.554 [INFO][5682] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.51.0/26 handle="k8s-pod-network.2343f6261093697a686ff7383f37cd07bf2f59f553b679a0c95cf3501495bbcd" host="ci-4515.1.0-a-53d1559fda" Dec 12 18:22:04.619353 containerd[2529]: 2025-12-12 18:22:04.556 [INFO][5682] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.2343f6261093697a686ff7383f37cd07bf2f59f553b679a0c95cf3501495bbcd Dec 12 18:22:04.619353 containerd[2529]: 2025-12-12 18:22:04.565 [INFO][5682] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.51.0/26 handle="k8s-pod-network.2343f6261093697a686ff7383f37cd07bf2f59f553b679a0c95cf3501495bbcd" host="ci-4515.1.0-a-53d1559fda" Dec 12 18:22:04.619353 containerd[2529]: 2025-12-12 18:22:04.579 [INFO][5682] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.51.6/26] block=192.168.51.0/26 handle="k8s-pod-network.2343f6261093697a686ff7383f37cd07bf2f59f553b679a0c95cf3501495bbcd" host="ci-4515.1.0-a-53d1559fda" Dec 12 18:22:04.619353 containerd[2529]: 2025-12-12 18:22:04.579 [INFO][5682] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.51.6/26] handle="k8s-pod-network.2343f6261093697a686ff7383f37cd07bf2f59f553b679a0c95cf3501495bbcd" host="ci-4515.1.0-a-53d1559fda" Dec 12 18:22:04.619353 containerd[2529]: 2025-12-12 18:22:04.579 [INFO][5682] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
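Editorial note: the NETFILTER_CFG audit records interleaved above describe each ruleset reload: `table` carries the nft table name plus a generation counter, `family=2` is AF_INET (IPv4), `entries` is the number of objects in the batch, and `op` distinguishes chain from rule registration. A small editorial sketch that summarises such records (the sample strings are fragments copied from the entries above):

// netfilter_cfg_sketch.go - editorial summary of NETFILTER_CFG audit records.
// Address families follow the kernel constants: 2 = AF_INET, 10 = AF_INET6.
package main

import (
	"fmt"
	"regexp"
)

var netfilterCfg = regexp.MustCompile(`table=(\S+) family=(\d+) entries=(\d+) op=(\S+)`)

func main() {
	records := []string{
		"table=filter:131 family=2 entries=41 op=nft_register_chain",
		"table=filter:132 family=2 entries=50 op=nft_register_chain",
		"table=nat:134 family=2 entries=14 op=nft_register_rule",
	}
	family := map[string]string{"2": "ipv4", "10": "ipv6"}
	for _, r := range records {
		m := netfilterCfg.FindStringSubmatch(r)
		if m == nil {
			continue
		}
		fmt.Printf("%-12s %-4s %3s objects via %s\n", m[1], family[m[2]], m[3], m[4])
	}
}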
Dec 12 18:22:04.619353 containerd[2529]: 2025-12-12 18:22:04.579 [INFO][5682] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.51.6/26] IPv6=[] ContainerID="2343f6261093697a686ff7383f37cd07bf2f59f553b679a0c95cf3501495bbcd" HandleID="k8s-pod-network.2343f6261093697a686ff7383f37cd07bf2f59f553b679a0c95cf3501495bbcd" Workload="ci--4515.1.0--a--53d1559fda-k8s-csi--node--driver--bq297-eth0" Dec 12 18:22:04.627895 containerd[2529]: 2025-12-12 18:22:04.585 [INFO][5656] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2343f6261093697a686ff7383f37cd07bf2f59f553b679a0c95cf3501495bbcd" Namespace="calico-system" Pod="csi-node-driver-bq297" WorkloadEndpoint="ci--4515.1.0--a--53d1559fda-k8s-csi--node--driver--bq297-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--a--53d1559fda-k8s-csi--node--driver--bq297-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"10291b6f-a9ca-4c45-b211-06a17f4d693f", ResourceVersion:"703", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 21, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-a-53d1559fda", ContainerID:"", Pod:"csi-node-driver-bq297", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.51.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali05bb3c40a4f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:22:04.627895 containerd[2529]: 2025-12-12 18:22:04.585 [INFO][5656] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.51.6/32] ContainerID="2343f6261093697a686ff7383f37cd07bf2f59f553b679a0c95cf3501495bbcd" Namespace="calico-system" Pod="csi-node-driver-bq297" WorkloadEndpoint="ci--4515.1.0--a--53d1559fda-k8s-csi--node--driver--bq297-eth0" Dec 12 18:22:04.627895 containerd[2529]: 2025-12-12 18:22:04.585 [INFO][5656] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali05bb3c40a4f ContainerID="2343f6261093697a686ff7383f37cd07bf2f59f553b679a0c95cf3501495bbcd" Namespace="calico-system" Pod="csi-node-driver-bq297" WorkloadEndpoint="ci--4515.1.0--a--53d1559fda-k8s-csi--node--driver--bq297-eth0" Dec 12 18:22:04.627895 containerd[2529]: 2025-12-12 18:22:04.593 [INFO][5656] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2343f6261093697a686ff7383f37cd07bf2f59f553b679a0c95cf3501495bbcd" Namespace="calico-system" Pod="csi-node-driver-bq297" WorkloadEndpoint="ci--4515.1.0--a--53d1559fda-k8s-csi--node--driver--bq297-eth0" Dec 12 18:22:04.627895 containerd[2529]: 2025-12-12 18:22:04.596 [INFO][5656] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="2343f6261093697a686ff7383f37cd07bf2f59f553b679a0c95cf3501495bbcd" Namespace="calico-system" Pod="csi-node-driver-bq297" WorkloadEndpoint="ci--4515.1.0--a--53d1559fda-k8s-csi--node--driver--bq297-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--a--53d1559fda-k8s-csi--node--driver--bq297-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"10291b6f-a9ca-4c45-b211-06a17f4d693f", ResourceVersion:"703", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 21, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-a-53d1559fda", ContainerID:"2343f6261093697a686ff7383f37cd07bf2f59f553b679a0c95cf3501495bbcd", Pod:"csi-node-driver-bq297", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.51.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali05bb3c40a4f", MAC:"c2:bd:fd:bc:66:03", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:22:04.627895 containerd[2529]: 2025-12-12 18:22:04.616 [INFO][5656] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2343f6261093697a686ff7383f37cd07bf2f59f553b679a0c95cf3501495bbcd" Namespace="calico-system" Pod="csi-node-driver-bq297" WorkloadEndpoint="ci--4515.1.0--a--53d1559fda-k8s-csi--node--driver--bq297-eth0" Dec 12 18:22:04.630906 systemd[1]: Started cri-containerd-5813abbbf4bac98ac14ac0be6f40d259692438853d9b15332dd3fcbdf8d845f0.scope - libcontainer container 5813abbbf4bac98ac14ac0be6f40d259692438853d9b15332dd3fcbdf8d845f0. 
Dec 12 18:22:04.647000 audit: BPF prog-id=255 op=LOAD Dec 12 18:22:04.648000 audit: BPF prog-id=256 op=LOAD Dec 12 18:22:04.648000 audit[5724]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=5713 pid=5724 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:22:04.648000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3538313361626262663462616339386163313461633062653666343064 Dec 12 18:22:04.648000 audit: BPF prog-id=256 op=UNLOAD Dec 12 18:22:04.648000 audit[5724]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5713 pid=5724 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:22:04.648000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3538313361626262663462616339386163313461633062653666343064 Dec 12 18:22:04.648000 audit: BPF prog-id=257 op=LOAD Dec 12 18:22:04.648000 audit[5724]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=5713 pid=5724 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:22:04.648000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3538313361626262663462616339386163313461633062653666343064 Dec 12 18:22:04.648000 audit: BPF prog-id=258 op=LOAD Dec 12 18:22:04.648000 audit[5724]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=5713 pid=5724 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:22:04.648000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3538313361626262663462616339386163313461633062653666343064 Dec 12 18:22:04.648000 audit: BPF prog-id=258 op=UNLOAD Dec 12 18:22:04.648000 audit[5724]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5713 pid=5724 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:22:04.648000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3538313361626262663462616339386163313461633062653666343064 Dec 12 18:22:04.648000 audit: BPF prog-id=257 op=UNLOAD Dec 12 18:22:04.648000 audit[5724]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5713 pid=5724 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:22:04.648000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3538313361626262663462616339386163313461633062653666343064 Dec 12 18:22:04.648000 audit: BPF prog-id=259 op=LOAD Dec 12 18:22:04.648000 audit[5724]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=5713 pid=5724 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:22:04.648000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3538313361626262663462616339386163313461633062653666343064 Dec 12 18:22:04.650000 audit[5749]: NETFILTER_CFG table=filter:138 family=2 entries=52 op=nft_register_chain pid=5749 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 12 18:22:04.650000 audit[5749]: SYSCALL arch=c000003e syscall=46 success=yes exit=24328 a0=3 a1=7fff6671ea00 a2=0 a3=7fff6671e9ec items=0 ppid=5096 pid=5749 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:22:04.650000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 12 18:22:04.659268 containerd[2529]: time="2025-12-12T18:22:04.659219390Z" level=info msg="connecting to shim 2343f6261093697a686ff7383f37cd07bf2f59f553b679a0c95cf3501495bbcd" address="unix:///run/containerd/s/cf7728370ab6eccfd3fbe9c210c61263d291aee401cd94926972b2ab31598c96" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:22:04.684816 systemd[1]: Started cri-containerd-2343f6261093697a686ff7383f37cd07bf2f59f553b679a0c95cf3501495bbcd.scope - libcontainer container 2343f6261093697a686ff7383f37cd07bf2f59f553b679a0c95cf3501495bbcd. 
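Each runc invocation above is logged by auditd as SYSCALL/BPF events plus a PROCTITLE record whose value is the process's argv, hex-encoded because the arguments are separated by NUL bytes. A small standard-library Go decoder recovers the readable command line; the constant below is copied from one of the records above (the kernel truncates proctitle, so the trailing container ID is cut short).

package main

import (
	"encoding/hex"
	"fmt"
	"strings"
)

// decodeProctitle turns the hex blob from an audit PROCTITLE record back into
// the argv it encodes; arguments are separated by NUL bytes.
func decodeProctitle(h string) (string, error) {
	raw, err := hex.DecodeString(h)
	if err != nil {
		return "", err
	}
	return strings.ReplaceAll(strings.TrimRight(string(raw), "\x00"), "\x00", " "), nil
}

func main() {
	const p = "72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3538313361626262663462616339386163313461633062653666343064"
	s, err := decodeProctitle(p)
	if err != nil {
		panic(err)
	}
	// Prints roughly: runc --root /run/containerd/runc/k8s.io --log
	// /run/containerd/io.containerd.runtime.v2.task/k8s.io/5813abbbf4bac... (truncated)
	fmt.Println(s)
}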
Dec 12 18:22:04.702769 containerd[2529]: time="2025-12-12T18:22:04.702737677Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-4rvrk,Uid:985aeb6e-874b-4206-ac9b-71fb5aaf32cf,Namespace:calico-system,Attempt:0,} returns sandbox id \"5813abbbf4bac98ac14ac0be6f40d259692438853d9b15332dd3fcbdf8d845f0\"" Dec 12 18:22:04.705140 containerd[2529]: time="2025-12-12T18:22:04.705111361Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 12 18:22:04.707000 audit: BPF prog-id=260 op=LOAD Dec 12 18:22:04.708000 audit: BPF prog-id=261 op=LOAD Dec 12 18:22:04.708000 audit[5769]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=5758 pid=5769 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:22:04.708000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3233343366363236313039333639376136383666663733383366333763 Dec 12 18:22:04.708000 audit: BPF prog-id=261 op=UNLOAD Dec 12 18:22:04.708000 audit[5769]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5758 pid=5769 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:22:04.708000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3233343366363236313039333639376136383666663733383366333763 Dec 12 18:22:04.708000 audit: BPF prog-id=262 op=LOAD Dec 12 18:22:04.708000 audit[5769]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=5758 pid=5769 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:22:04.708000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3233343366363236313039333639376136383666663733383366333763 Dec 12 18:22:04.709000 audit: BPF prog-id=263 op=LOAD Dec 12 18:22:04.709000 audit[5769]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=5758 pid=5769 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:22:04.709000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3233343366363236313039333639376136383666663733383366333763 Dec 12 18:22:04.709000 audit: BPF prog-id=263 op=UNLOAD Dec 12 18:22:04.709000 audit[5769]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5758 pid=5769 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:22:04.709000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3233343366363236313039333639376136383666663733383366333763 Dec 12 18:22:04.709000 audit: BPF prog-id=262 op=UNLOAD Dec 12 18:22:04.709000 audit[5769]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5758 pid=5769 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:22:04.709000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3233343366363236313039333639376136383666663733383366333763 Dec 12 18:22:04.709000 audit: BPF prog-id=264 op=LOAD Dec 12 18:22:04.709000 audit[5769]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=5758 pid=5769 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:22:04.709000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3233343366363236313039333639376136383666663733383366333763 Dec 12 18:22:04.723849 containerd[2529]: time="2025-12-12T18:22:04.723825993Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bq297,Uid:10291b6f-a9ca-4c45-b211-06a17f4d693f,Namespace:calico-system,Attempt:0,} returns sandbox id \"2343f6261093697a686ff7383f37cd07bf2f59f553b679a0c95cf3501495bbcd\"" Dec 12 18:22:04.875661 systemd-networkd[2161]: cali8034178b18e: Gained IPv6LL Dec 12 18:22:04.939656 systemd-networkd[2161]: cali9da0207f921: Gained IPv6LL Dec 12 18:22:04.944023 containerd[2529]: time="2025-12-12T18:22:04.943976860Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:22:04.948002 containerd[2529]: time="2025-12-12T18:22:04.947886797Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 12 18:22:04.948002 containerd[2529]: time="2025-12-12T18:22:04.947978677Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 12 18:22:04.948313 kubelet[4009]: E1212 18:22:04.948273 4009 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 18:22:04.948634 kubelet[4009]: E1212 18:22:04.948325 4009 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 18:22:04.948759 containerd[2529]: time="2025-12-12T18:22:04.948723784Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 12 18:22:04.949062 kubelet[4009]: E1212 18:22:04.949012 4009 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cs47m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-4rvrk_calico-system(985aeb6e-874b-4206-ac9b-71fb5aaf32cf): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 12 18:22:04.950341 kubelet[4009]: E1212 18:22:04.950285 4009 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-4rvrk" podUID="985aeb6e-874b-4206-ac9b-71fb5aaf32cf" Dec 12 18:22:05.199815 containerd[2529]: time="2025-12-12T18:22:05.199773002Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:22:05.202329 containerd[2529]: time="2025-12-12T18:22:05.202303743Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 12 18:22:05.202425 containerd[2529]: time="2025-12-12T18:22:05.202388760Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 12 18:22:05.202601 kubelet[4009]: E1212 18:22:05.202564 4009 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 18:22:05.202670 kubelet[4009]: E1212 18:22:05.202615 4009 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 18:22:05.202827 kubelet[4009]: E1212 18:22:05.202779 4009 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rf7fn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
csi-node-driver-bq297_calico-system(10291b6f-a9ca-4c45-b211-06a17f4d693f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 12 18:22:05.205068 containerd[2529]: time="2025-12-12T18:22:05.205039856Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 12 18:22:05.311828 containerd[2529]: time="2025-12-12T18:22:05.311764312Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5c595dfbb8-snvpt,Uid:cdef70f5-08e1-4939-ba2b-d4a667c25459,Namespace:calico-system,Attempt:0,}" Dec 12 18:22:05.312007 containerd[2529]: time="2025-12-12T18:22:05.311764338Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-qszww,Uid:793be58b-966e-4ae5-98e0-dcfd87576ade,Namespace:kube-system,Attempt:0,}" Dec 12 18:22:05.454380 containerd[2529]: time="2025-12-12T18:22:05.454191519Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:22:05.457932 containerd[2529]: time="2025-12-12T18:22:05.457753337Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 12 18:22:05.457932 containerd[2529]: time="2025-12-12T18:22:05.457848799Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 12 18:22:05.458216 kubelet[4009]: E1212 18:22:05.458180 4009 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 18:22:05.459209 kubelet[4009]: E1212 18:22:05.458952 4009 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 18:22:05.459209 kubelet[4009]: E1212 18:22:05.459159 4009 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rf7fn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-bq297_calico-system(10291b6f-a9ca-4c45-b211-06a17f4d693f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 12 18:22:05.461093 kubelet[4009]: E1212 18:22:05.460896 4009 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-bq297" podUID="10291b6f-a9ca-4c45-b211-06a17f4d693f" Dec 12 18:22:05.463698 systemd-networkd[2161]: cali7067e81a900: Link UP Dec 12 18:22:05.465338 systemd-networkd[2161]: cali7067e81a900: Gained carrier Dec 12 18:22:05.470859 kubelet[4009]: E1212 18:22:05.470832 4009 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-4rvrk" podUID="985aeb6e-874b-4206-ac9b-71fb5aaf32cf" Dec 12 18:22:05.472561 
kubelet[4009]: E1212 18:22:05.472403 4009 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74c7564cb4-hnnls" podUID="588d2c87-beaf-4e83-ba6f-8e3f0d453589" Dec 12 18:22:05.474030 kubelet[4009]: E1212 18:22:05.473995 4009 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-bq297" podUID="10291b6f-a9ca-4c45-b211-06a17f4d693f" Dec 12 18:22:05.492330 containerd[2529]: 2025-12-12 18:22:05.396 [INFO][5815] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4515.1.0--a--53d1559fda-k8s-coredns--674b8bbfcf--qszww-eth0 coredns-674b8bbfcf- kube-system 793be58b-966e-4ae5-98e0-dcfd87576ade 811 0 2025-12-12 18:21:25 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4515.1.0-a-53d1559fda coredns-674b8bbfcf-qszww eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali7067e81a900 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="c0be7e8b46d51974c660eff70d268640b71bbf04a897965e07976f073eae9621" Namespace="kube-system" Pod="coredns-674b8bbfcf-qszww" WorkloadEndpoint="ci--4515.1.0--a--53d1559fda-k8s-coredns--674b8bbfcf--qszww-" Dec 12 18:22:05.492330 containerd[2529]: 2025-12-12 18:22:05.396 [INFO][5815] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c0be7e8b46d51974c660eff70d268640b71bbf04a897965e07976f073eae9621" Namespace="kube-system" Pod="coredns-674b8bbfcf-qszww" WorkloadEndpoint="ci--4515.1.0--a--53d1559fda-k8s-coredns--674b8bbfcf--qszww-eth0" Dec 12 18:22:05.492330 containerd[2529]: 2025-12-12 18:22:05.425 [INFO][5830] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c0be7e8b46d51974c660eff70d268640b71bbf04a897965e07976f073eae9621" HandleID="k8s-pod-network.c0be7e8b46d51974c660eff70d268640b71bbf04a897965e07976f073eae9621" Workload="ci--4515.1.0--a--53d1559fda-k8s-coredns--674b8bbfcf--qszww-eth0" Dec 12 18:22:05.492330 containerd[2529]: 2025-12-12 18:22:05.425 [INFO][5830] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="c0be7e8b46d51974c660eff70d268640b71bbf04a897965e07976f073eae9621" HandleID="k8s-pod-network.c0be7e8b46d51974c660eff70d268640b71bbf04a897965e07976f073eae9621" 
Workload="ci--4515.1.0--a--53d1559fda-k8s-coredns--674b8bbfcf--qszww-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f590), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4515.1.0-a-53d1559fda", "pod":"coredns-674b8bbfcf-qszww", "timestamp":"2025-12-12 18:22:05.425213348 +0000 UTC"}, Hostname:"ci-4515.1.0-a-53d1559fda", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 18:22:05.492330 containerd[2529]: 2025-12-12 18:22:05.425 [INFO][5830] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 18:22:05.492330 containerd[2529]: 2025-12-12 18:22:05.425 [INFO][5830] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 12 18:22:05.492330 containerd[2529]: 2025-12-12 18:22:05.425 [INFO][5830] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4515.1.0-a-53d1559fda' Dec 12 18:22:05.492330 containerd[2529]: 2025-12-12 18:22:05.430 [INFO][5830] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c0be7e8b46d51974c660eff70d268640b71bbf04a897965e07976f073eae9621" host="ci-4515.1.0-a-53d1559fda" Dec 12 18:22:05.492330 containerd[2529]: 2025-12-12 18:22:05.433 [INFO][5830] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4515.1.0-a-53d1559fda" Dec 12 18:22:05.492330 containerd[2529]: 2025-12-12 18:22:05.437 [INFO][5830] ipam/ipam.go 511: Trying affinity for 192.168.51.0/26 host="ci-4515.1.0-a-53d1559fda" Dec 12 18:22:05.492330 containerd[2529]: 2025-12-12 18:22:05.438 [INFO][5830] ipam/ipam.go 158: Attempting to load block cidr=192.168.51.0/26 host="ci-4515.1.0-a-53d1559fda" Dec 12 18:22:05.492330 containerd[2529]: 2025-12-12 18:22:05.440 [INFO][5830] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.51.0/26 host="ci-4515.1.0-a-53d1559fda" Dec 12 18:22:05.492330 containerd[2529]: 2025-12-12 18:22:05.440 [INFO][5830] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.51.0/26 handle="k8s-pod-network.c0be7e8b46d51974c660eff70d268640b71bbf04a897965e07976f073eae9621" host="ci-4515.1.0-a-53d1559fda" Dec 12 18:22:05.492330 containerd[2529]: 2025-12-12 18:22:05.441 [INFO][5830] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.c0be7e8b46d51974c660eff70d268640b71bbf04a897965e07976f073eae9621 Dec 12 18:22:05.492330 containerd[2529]: 2025-12-12 18:22:05.444 [INFO][5830] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.51.0/26 handle="k8s-pod-network.c0be7e8b46d51974c660eff70d268640b71bbf04a897965e07976f073eae9621" host="ci-4515.1.0-a-53d1559fda" Dec 12 18:22:05.492330 containerd[2529]: 2025-12-12 18:22:05.452 [INFO][5830] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.51.7/26] block=192.168.51.0/26 handle="k8s-pod-network.c0be7e8b46d51974c660eff70d268640b71bbf04a897965e07976f073eae9621" host="ci-4515.1.0-a-53d1559fda" Dec 12 18:22:05.492330 containerd[2529]: 2025-12-12 18:22:05.453 [INFO][5830] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.51.7/26] handle="k8s-pod-network.c0be7e8b46d51974c660eff70d268640b71bbf04a897965e07976f073eae9621" host="ci-4515.1.0-a-53d1559fda" Dec 12 18:22:05.492330 containerd[2529]: 2025-12-12 18:22:05.453 [INFO][5830] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 12 18:22:05.492330 containerd[2529]: 2025-12-12 18:22:05.453 [INFO][5830] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.51.7/26] IPv6=[] ContainerID="c0be7e8b46d51974c660eff70d268640b71bbf04a897965e07976f073eae9621" HandleID="k8s-pod-network.c0be7e8b46d51974c660eff70d268640b71bbf04a897965e07976f073eae9621" Workload="ci--4515.1.0--a--53d1559fda-k8s-coredns--674b8bbfcf--qszww-eth0" Dec 12 18:22:05.493407 containerd[2529]: 2025-12-12 18:22:05.456 [INFO][5815] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c0be7e8b46d51974c660eff70d268640b71bbf04a897965e07976f073eae9621" Namespace="kube-system" Pod="coredns-674b8bbfcf-qszww" WorkloadEndpoint="ci--4515.1.0--a--53d1559fda-k8s-coredns--674b8bbfcf--qszww-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--a--53d1559fda-k8s-coredns--674b8bbfcf--qszww-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"793be58b-966e-4ae5-98e0-dcfd87576ade", ResourceVersion:"811", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 21, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-a-53d1559fda", ContainerID:"", Pod:"coredns-674b8bbfcf-qszww", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.51.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7067e81a900", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:22:05.493407 containerd[2529]: 2025-12-12 18:22:05.456 [INFO][5815] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.51.7/32] ContainerID="c0be7e8b46d51974c660eff70d268640b71bbf04a897965e07976f073eae9621" Namespace="kube-system" Pod="coredns-674b8bbfcf-qszww" WorkloadEndpoint="ci--4515.1.0--a--53d1559fda-k8s-coredns--674b8bbfcf--qszww-eth0" Dec 12 18:22:05.493407 containerd[2529]: 2025-12-12 18:22:05.456 [INFO][5815] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7067e81a900 ContainerID="c0be7e8b46d51974c660eff70d268640b71bbf04a897965e07976f073eae9621" Namespace="kube-system" Pod="coredns-674b8bbfcf-qszww" WorkloadEndpoint="ci--4515.1.0--a--53d1559fda-k8s-coredns--674b8bbfcf--qszww-eth0" Dec 12 18:22:05.493407 containerd[2529]: 2025-12-12 18:22:05.469 [INFO][5815] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c0be7e8b46d51974c660eff70d268640b71bbf04a897965e07976f073eae9621" Namespace="kube-system" 
Pod="coredns-674b8bbfcf-qszww" WorkloadEndpoint="ci--4515.1.0--a--53d1559fda-k8s-coredns--674b8bbfcf--qszww-eth0" Dec 12 18:22:05.493407 containerd[2529]: 2025-12-12 18:22:05.472 [INFO][5815] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c0be7e8b46d51974c660eff70d268640b71bbf04a897965e07976f073eae9621" Namespace="kube-system" Pod="coredns-674b8bbfcf-qszww" WorkloadEndpoint="ci--4515.1.0--a--53d1559fda-k8s-coredns--674b8bbfcf--qszww-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--a--53d1559fda-k8s-coredns--674b8bbfcf--qszww-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"793be58b-966e-4ae5-98e0-dcfd87576ade", ResourceVersion:"811", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 21, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-a-53d1559fda", ContainerID:"c0be7e8b46d51974c660eff70d268640b71bbf04a897965e07976f073eae9621", Pod:"coredns-674b8bbfcf-qszww", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.51.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7067e81a900", MAC:"62:06:b9:19:66:b3", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:22:05.493407 containerd[2529]: 2025-12-12 18:22:05.486 [INFO][5815] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c0be7e8b46d51974c660eff70d268640b71bbf04a897965e07976f073eae9621" Namespace="kube-system" Pod="coredns-674b8bbfcf-qszww" WorkloadEndpoint="ci--4515.1.0--a--53d1559fda-k8s-coredns--674b8bbfcf--qszww-eth0" Dec 12 18:22:05.521000 audit[5850]: NETFILTER_CFG table=filter:139 family=2 entries=14 op=nft_register_rule pid=5850 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:22:05.521000 audit[5850]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffca5089b60 a2=0 a3=7ffca5089b4c items=0 ppid=4115 pid=5850 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:22:05.521000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:22:05.527000 audit[5850]: NETFILTER_CFG table=nat:140 family=2 entries=20 op=nft_register_rule pid=5850 
subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:22:05.527000 audit[5850]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffca5089b60 a2=0 a3=7ffca5089b4c items=0 ppid=4115 pid=5850 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:22:05.527000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:22:05.548477 containerd[2529]: time="2025-12-12T18:22:05.548437101Z" level=info msg="connecting to shim c0be7e8b46d51974c660eff70d268640b71bbf04a897965e07976f073eae9621" address="unix:///run/containerd/s/e041e278557e9b93cd2f3b2d036d7432760681dda2645dcf98e6d5985634ab9b" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:22:05.586848 systemd[1]: Started cri-containerd-c0be7e8b46d51974c660eff70d268640b71bbf04a897965e07976f073eae9621.scope - libcontainer container c0be7e8b46d51974c660eff70d268640b71bbf04a897965e07976f073eae9621. Dec 12 18:22:05.600877 systemd-networkd[2161]: cali3f91071e616: Link UP Dec 12 18:22:05.601569 systemd-networkd[2161]: cali3f91071e616: Gained carrier Dec 12 18:22:05.602000 audit[5884]: NETFILTER_CFG table=filter:141 family=2 entries=58 op=nft_register_chain pid=5884 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 12 18:22:05.602000 audit[5884]: SYSCALL arch=c000003e syscall=46 success=yes exit=26760 a0=3 a1=7ffc8f9ad3d0 a2=0 a3=7ffc8f9ad3bc items=0 ppid=5096 pid=5884 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:22:05.602000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 12 18:22:05.609000 audit: BPF prog-id=265 op=LOAD Dec 12 18:22:05.610000 audit: BPF prog-id=266 op=LOAD Dec 12 18:22:05.610000 audit[5872]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=5861 pid=5872 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:22:05.610000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6330626537653862343664353139373463363630656666373064323638 Dec 12 18:22:05.611000 audit: BPF prog-id=266 op=UNLOAD Dec 12 18:22:05.611000 audit[5872]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=5861 pid=5872 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:22:05.611000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6330626537653862343664353139373463363630656666373064323638 Dec 12 18:22:05.611000 audit: BPF prog-id=267 op=LOAD Dec 12 18:22:05.611000 audit[5872]: SYSCALL arch=c000003e syscall=321 success=yes 
exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=5861 pid=5872 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:22:05.611000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6330626537653862343664353139373463363630656666373064323638 Dec 12 18:22:05.612000 audit: BPF prog-id=268 op=LOAD Dec 12 18:22:05.612000 audit[5872]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=5861 pid=5872 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:22:05.612000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6330626537653862343664353139373463363630656666373064323638 Dec 12 18:22:05.612000 audit: BPF prog-id=268 op=UNLOAD Dec 12 18:22:05.612000 audit[5872]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=5861 pid=5872 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:22:05.612000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6330626537653862343664353139373463363630656666373064323638 Dec 12 18:22:05.612000 audit: BPF prog-id=267 op=UNLOAD Dec 12 18:22:05.612000 audit[5872]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=5861 pid=5872 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:22:05.612000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6330626537653862343664353139373463363630656666373064323638 Dec 12 18:22:05.612000 audit: BPF prog-id=269 op=LOAD Dec 12 18:22:05.612000 audit[5872]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=5861 pid=5872 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:22:05.612000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6330626537653862343664353139373463363630656666373064323638 Dec 12 18:22:05.626316 containerd[2529]: 2025-12-12 18:22:05.394 [INFO][5804] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4515.1.0--a--53d1559fda-k8s-calico--kube--controllers--5c595dfbb8--snvpt-eth0 
calico-kube-controllers-5c595dfbb8- calico-system cdef70f5-08e1-4939-ba2b-d4a667c25459 812 0 2025-12-12 18:21:37 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5c595dfbb8 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4515.1.0-a-53d1559fda calico-kube-controllers-5c595dfbb8-snvpt eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali3f91071e616 [] [] }} ContainerID="88856206d43af9ed3f59140081d41965cc399fd3b96dae59f0da991cdfc286a9" Namespace="calico-system" Pod="calico-kube-controllers-5c595dfbb8-snvpt" WorkloadEndpoint="ci--4515.1.0--a--53d1559fda-k8s-calico--kube--controllers--5c595dfbb8--snvpt-" Dec 12 18:22:05.626316 containerd[2529]: 2025-12-12 18:22:05.395 [INFO][5804] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="88856206d43af9ed3f59140081d41965cc399fd3b96dae59f0da991cdfc286a9" Namespace="calico-system" Pod="calico-kube-controllers-5c595dfbb8-snvpt" WorkloadEndpoint="ci--4515.1.0--a--53d1559fda-k8s-calico--kube--controllers--5c595dfbb8--snvpt-eth0" Dec 12 18:22:05.626316 containerd[2529]: 2025-12-12 18:22:05.426 [INFO][5828] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="88856206d43af9ed3f59140081d41965cc399fd3b96dae59f0da991cdfc286a9" HandleID="k8s-pod-network.88856206d43af9ed3f59140081d41965cc399fd3b96dae59f0da991cdfc286a9" Workload="ci--4515.1.0--a--53d1559fda-k8s-calico--kube--controllers--5c595dfbb8--snvpt-eth0" Dec 12 18:22:05.626316 containerd[2529]: 2025-12-12 18:22:05.426 [INFO][5828] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="88856206d43af9ed3f59140081d41965cc399fd3b96dae59f0da991cdfc286a9" HandleID="k8s-pod-network.88856206d43af9ed3f59140081d41965cc399fd3b96dae59f0da991cdfc286a9" Workload="ci--4515.1.0--a--53d1559fda-k8s-calico--kube--controllers--5c595dfbb8--snvpt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5600), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4515.1.0-a-53d1559fda", "pod":"calico-kube-controllers-5c595dfbb8-snvpt", "timestamp":"2025-12-12 18:22:05.426158546 +0000 UTC"}, Hostname:"ci-4515.1.0-a-53d1559fda", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 18:22:05.626316 containerd[2529]: 2025-12-12 18:22:05.426 [INFO][5828] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 18:22:05.626316 containerd[2529]: 2025-12-12 18:22:05.453 [INFO][5828] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 12 18:22:05.626316 containerd[2529]: 2025-12-12 18:22:05.453 [INFO][5828] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4515.1.0-a-53d1559fda' Dec 12 18:22:05.626316 containerd[2529]: 2025-12-12 18:22:05.533 [INFO][5828] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.88856206d43af9ed3f59140081d41965cc399fd3b96dae59f0da991cdfc286a9" host="ci-4515.1.0-a-53d1559fda" Dec 12 18:22:05.626316 containerd[2529]: 2025-12-12 18:22:05.543 [INFO][5828] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4515.1.0-a-53d1559fda" Dec 12 18:22:05.626316 containerd[2529]: 2025-12-12 18:22:05.551 [INFO][5828] ipam/ipam.go 511: Trying affinity for 192.168.51.0/26 host="ci-4515.1.0-a-53d1559fda" Dec 12 18:22:05.626316 containerd[2529]: 2025-12-12 18:22:05.554 [INFO][5828] ipam/ipam.go 158: Attempting to load block cidr=192.168.51.0/26 host="ci-4515.1.0-a-53d1559fda" Dec 12 18:22:05.626316 containerd[2529]: 2025-12-12 18:22:05.556 [INFO][5828] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.51.0/26 host="ci-4515.1.0-a-53d1559fda" Dec 12 18:22:05.626316 containerd[2529]: 2025-12-12 18:22:05.556 [INFO][5828] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.51.0/26 handle="k8s-pod-network.88856206d43af9ed3f59140081d41965cc399fd3b96dae59f0da991cdfc286a9" host="ci-4515.1.0-a-53d1559fda" Dec 12 18:22:05.626316 containerd[2529]: 2025-12-12 18:22:05.572 [INFO][5828] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.88856206d43af9ed3f59140081d41965cc399fd3b96dae59f0da991cdfc286a9 Dec 12 18:22:05.626316 containerd[2529]: 2025-12-12 18:22:05.578 [INFO][5828] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.51.0/26 handle="k8s-pod-network.88856206d43af9ed3f59140081d41965cc399fd3b96dae59f0da991cdfc286a9" host="ci-4515.1.0-a-53d1559fda" Dec 12 18:22:05.626316 containerd[2529]: 2025-12-12 18:22:05.591 [INFO][5828] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.51.8/26] block=192.168.51.0/26 handle="k8s-pod-network.88856206d43af9ed3f59140081d41965cc399fd3b96dae59f0da991cdfc286a9" host="ci-4515.1.0-a-53d1559fda" Dec 12 18:22:05.626316 containerd[2529]: 2025-12-12 18:22:05.591 [INFO][5828] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.51.8/26] handle="k8s-pod-network.88856206d43af9ed3f59140081d41965cc399fd3b96dae59f0da991cdfc286a9" host="ci-4515.1.0-a-53d1559fda" Dec 12 18:22:05.626316 containerd[2529]: 2025-12-12 18:22:05.591 [INFO][5828] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
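For calico-kube-controllers the same sequence repeats and yields the next address in the block, 192.168.51.8; once the endpoint is written, the plugin's answer back to the runtime is a CNI ADD result. A rough, hand-written sketch of that result's shape is below; field names follow the CNI spec as generally documented, the structs are local to the example rather than imported from the CNI libraries, and the sandbox path is a placeholder.

package main

import (
	"encoding/json"
	"fmt"
)

// Local, illustrative types matching the general shape of a CNI ADD result.
type cniInterface struct {
	Name    string `json:"name"`
	Mac     string `json:"mac,omitempty"`
	Sandbox string `json:"sandbox,omitempty"`
}

type cniIP struct {
	Address   string `json:"address"`
	Interface *int   `json:"interface,omitempty"`
}

type cniResult struct {
	CNIVersion string         `json:"cniVersion"`
	Interfaces []cniInterface `json:"interfaces"`
	IPs        []cniIP        `json:"ips"`
}

func main() {
	idx := 0
	res := cniResult{
		CNIVersion: "1.0.0",
		Interfaces: []cniInterface{
			// Container side; the host side is the cali3f91071e616 veth logged above.
			{Name: "eth0", Sandbox: "/var/run/netns/<sandbox>"},
		},
		IPs: []cniIP{
			{Address: "192.168.51.8/32", Interface: &idx}, // the address claimed above
		},
	}
	out, _ := json.MarshalIndent(res, "", "  ")
	fmt.Println(string(out))
}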
Dec 12 18:22:05.626316 containerd[2529]: 2025-12-12 18:22:05.591 [INFO][5828] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.51.8/26] IPv6=[] ContainerID="88856206d43af9ed3f59140081d41965cc399fd3b96dae59f0da991cdfc286a9" HandleID="k8s-pod-network.88856206d43af9ed3f59140081d41965cc399fd3b96dae59f0da991cdfc286a9" Workload="ci--4515.1.0--a--53d1559fda-k8s-calico--kube--controllers--5c595dfbb8--snvpt-eth0" Dec 12 18:22:05.626981 containerd[2529]: 2025-12-12 18:22:05.595 [INFO][5804] cni-plugin/k8s.go 418: Populated endpoint ContainerID="88856206d43af9ed3f59140081d41965cc399fd3b96dae59f0da991cdfc286a9" Namespace="calico-system" Pod="calico-kube-controllers-5c595dfbb8-snvpt" WorkloadEndpoint="ci--4515.1.0--a--53d1559fda-k8s-calico--kube--controllers--5c595dfbb8--snvpt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--a--53d1559fda-k8s-calico--kube--controllers--5c595dfbb8--snvpt-eth0", GenerateName:"calico-kube-controllers-5c595dfbb8-", Namespace:"calico-system", SelfLink:"", UID:"cdef70f5-08e1-4939-ba2b-d4a667c25459", ResourceVersion:"812", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 21, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5c595dfbb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-a-53d1559fda", ContainerID:"", Pod:"calico-kube-controllers-5c595dfbb8-snvpt", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.51.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali3f91071e616", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:22:05.626981 containerd[2529]: 2025-12-12 18:22:05.596 [INFO][5804] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.51.8/32] ContainerID="88856206d43af9ed3f59140081d41965cc399fd3b96dae59f0da991cdfc286a9" Namespace="calico-system" Pod="calico-kube-controllers-5c595dfbb8-snvpt" WorkloadEndpoint="ci--4515.1.0--a--53d1559fda-k8s-calico--kube--controllers--5c595dfbb8--snvpt-eth0" Dec 12 18:22:05.626981 containerd[2529]: 2025-12-12 18:22:05.597 [INFO][5804] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3f91071e616 ContainerID="88856206d43af9ed3f59140081d41965cc399fd3b96dae59f0da991cdfc286a9" Namespace="calico-system" Pod="calico-kube-controllers-5c595dfbb8-snvpt" WorkloadEndpoint="ci--4515.1.0--a--53d1559fda-k8s-calico--kube--controllers--5c595dfbb8--snvpt-eth0" Dec 12 18:22:05.626981 containerd[2529]: 2025-12-12 18:22:05.605 [INFO][5804] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="88856206d43af9ed3f59140081d41965cc399fd3b96dae59f0da991cdfc286a9" Namespace="calico-system" Pod="calico-kube-controllers-5c595dfbb8-snvpt" 
WorkloadEndpoint="ci--4515.1.0--a--53d1559fda-k8s-calico--kube--controllers--5c595dfbb8--snvpt-eth0" Dec 12 18:22:05.626981 containerd[2529]: 2025-12-12 18:22:05.606 [INFO][5804] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="88856206d43af9ed3f59140081d41965cc399fd3b96dae59f0da991cdfc286a9" Namespace="calico-system" Pod="calico-kube-controllers-5c595dfbb8-snvpt" WorkloadEndpoint="ci--4515.1.0--a--53d1559fda-k8s-calico--kube--controllers--5c595dfbb8--snvpt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--a--53d1559fda-k8s-calico--kube--controllers--5c595dfbb8--snvpt-eth0", GenerateName:"calico-kube-controllers-5c595dfbb8-", Namespace:"calico-system", SelfLink:"", UID:"cdef70f5-08e1-4939-ba2b-d4a667c25459", ResourceVersion:"812", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 21, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5c595dfbb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-a-53d1559fda", ContainerID:"88856206d43af9ed3f59140081d41965cc399fd3b96dae59f0da991cdfc286a9", Pod:"calico-kube-controllers-5c595dfbb8-snvpt", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.51.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali3f91071e616", MAC:"7a:59:3c:4d:6c:23", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:22:05.626981 containerd[2529]: 2025-12-12 18:22:05.622 [INFO][5804] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="88856206d43af9ed3f59140081d41965cc399fd3b96dae59f0da991cdfc286a9" Namespace="calico-system" Pod="calico-kube-controllers-5c595dfbb8-snvpt" WorkloadEndpoint="ci--4515.1.0--a--53d1559fda-k8s-calico--kube--controllers--5c595dfbb8--snvpt-eth0" Dec 12 18:22:05.656000 audit[5905]: NETFILTER_CFG table=filter:142 family=2 entries=56 op=nft_register_chain pid=5905 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 12 18:22:05.656000 audit[5905]: SYSCALL arch=c000003e syscall=46 success=yes exit=25500 a0=3 a1=7fff73880e00 a2=0 a3=7fff73880dec items=0 ppid=5096 pid=5905 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:22:05.656000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 12 18:22:05.666972 containerd[2529]: time="2025-12-12T18:22:05.666840746Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-qszww,Uid:793be58b-966e-4ae5-98e0-dcfd87576ade,Namespace:kube-system,Attempt:0,} returns sandbox id 
\"c0be7e8b46d51974c660eff70d268640b71bbf04a897965e07976f073eae9621\"" Dec 12 18:22:05.669939 containerd[2529]: time="2025-12-12T18:22:05.669900619Z" level=info msg="connecting to shim 88856206d43af9ed3f59140081d41965cc399fd3b96dae59f0da991cdfc286a9" address="unix:///run/containerd/s/85555cc19d010ef33381fc47c1136975e6c64e93849d71617e10f185ad5a68bf" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:22:05.683834 containerd[2529]: time="2025-12-12T18:22:05.683738645Z" level=info msg="CreateContainer within sandbox \"c0be7e8b46d51974c660eff70d268640b71bbf04a897965e07976f073eae9621\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 12 18:22:05.695874 systemd[1]: Started cri-containerd-88856206d43af9ed3f59140081d41965cc399fd3b96dae59f0da991cdfc286a9.scope - libcontainer container 88856206d43af9ed3f59140081d41965cc399fd3b96dae59f0da991cdfc286a9. Dec 12 18:22:05.707000 audit: BPF prog-id=270 op=LOAD Dec 12 18:22:05.708743 containerd[2529]: time="2025-12-12T18:22:05.708689921Z" level=info msg="Container 0c1f94488704a7b6cd6d579be03aff468dac75565d1d4e58b0bf46283902b352: CDI devices from CRI Config.CDIDevices: []" Dec 12 18:22:05.709000 audit: BPF prog-id=271 op=LOAD Dec 12 18:22:05.709000 audit[5927]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=5915 pid=5927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:22:05.709000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3838383536323036643433616639656433663539313430303831643431 Dec 12 18:22:05.709000 audit: BPF prog-id=271 op=UNLOAD Dec 12 18:22:05.709000 audit[5927]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5915 pid=5927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:22:05.709000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3838383536323036643433616639656433663539313430303831643431 Dec 12 18:22:05.709000 audit: BPF prog-id=272 op=LOAD Dec 12 18:22:05.709000 audit[5927]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=5915 pid=5927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:22:05.709000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3838383536323036643433616639656433663539313430303831643431 Dec 12 18:22:05.709000 audit: BPF prog-id=273 op=LOAD Dec 12 18:22:05.709000 audit[5927]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=5915 pid=5927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) 
Dec 12 18:22:05.709000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3838383536323036643433616639656433663539313430303831643431 Dec 12 18:22:05.709000 audit: BPF prog-id=273 op=UNLOAD Dec 12 18:22:05.709000 audit[5927]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5915 pid=5927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:22:05.709000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3838383536323036643433616639656433663539313430303831643431 Dec 12 18:22:05.709000 audit: BPF prog-id=272 op=UNLOAD Dec 12 18:22:05.709000 audit[5927]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5915 pid=5927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:22:05.709000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3838383536323036643433616639656433663539313430303831643431 Dec 12 18:22:05.710000 audit: BPF prog-id=274 op=LOAD Dec 12 18:22:05.710000 audit[5927]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=5915 pid=5927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:22:05.710000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3838383536323036643433616639656433663539313430303831643431 Dec 12 18:22:05.733847 containerd[2529]: time="2025-12-12T18:22:05.733796482Z" level=info msg="CreateContainer within sandbox \"c0be7e8b46d51974c660eff70d268640b71bbf04a897965e07976f073eae9621\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0c1f94488704a7b6cd6d579be03aff468dac75565d1d4e58b0bf46283902b352\"" Dec 12 18:22:05.735155 containerd[2529]: time="2025-12-12T18:22:05.735130077Z" level=info msg="StartContainer for \"0c1f94488704a7b6cd6d579be03aff468dac75565d1d4e58b0bf46283902b352\"" Dec 12 18:22:05.738699 containerd[2529]: time="2025-12-12T18:22:05.738660536Z" level=info msg="connecting to shim 0c1f94488704a7b6cd6d579be03aff468dac75565d1d4e58b0bf46283902b352" address="unix:///run/containerd/s/e041e278557e9b93cd2f3b2d036d7432760681dda2645dcf98e6d5985634ab9b" protocol=ttrpc version=3 Dec 12 18:22:05.749172 containerd[2529]: time="2025-12-12T18:22:05.749131692Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5c595dfbb8-snvpt,Uid:cdef70f5-08e1-4939-ba2b-d4a667c25459,Namespace:calico-system,Attempt:0,} returns sandbox id \"88856206d43af9ed3f59140081d41965cc399fd3b96dae59f0da991cdfc286a9\"" Dec 12 18:22:05.751030 containerd[2529]: 
time="2025-12-12T18:22:05.750957467Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 12 18:22:05.762698 systemd[1]: Started cri-containerd-0c1f94488704a7b6cd6d579be03aff468dac75565d1d4e58b0bf46283902b352.scope - libcontainer container 0c1f94488704a7b6cd6d579be03aff468dac75565d1d4e58b0bf46283902b352. Dec 12 18:22:05.772000 audit: BPF prog-id=275 op=LOAD Dec 12 18:22:05.772000 audit: BPF prog-id=276 op=LOAD Dec 12 18:22:05.772000 audit[5952]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178238 a2=98 a3=0 items=0 ppid=5861 pid=5952 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:22:05.772000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3063316639343438383730346137623663643664353739626530336166 Dec 12 18:22:05.772000 audit: BPF prog-id=276 op=UNLOAD Dec 12 18:22:05.772000 audit[5952]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5861 pid=5952 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:22:05.772000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3063316639343438383730346137623663643664353739626530336166 Dec 12 18:22:05.772000 audit: BPF prog-id=277 op=LOAD Dec 12 18:22:05.772000 audit[5952]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178488 a2=98 a3=0 items=0 ppid=5861 pid=5952 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:22:05.772000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3063316639343438383730346137623663643664353739626530336166 Dec 12 18:22:05.772000 audit: BPF prog-id=278 op=LOAD Dec 12 18:22:05.772000 audit[5952]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000178218 a2=98 a3=0 items=0 ppid=5861 pid=5952 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:22:05.772000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3063316639343438383730346137623663643664353739626530336166 Dec 12 18:22:05.772000 audit: BPF prog-id=278 op=UNLOAD Dec 12 18:22:05.772000 audit[5952]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5861 pid=5952 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:22:05.772000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3063316639343438383730346137623663643664353739626530336166 Dec 12 18:22:05.772000 audit: BPF prog-id=277 op=UNLOAD Dec 12 18:22:05.772000 audit[5952]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5861 pid=5952 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:22:05.772000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3063316639343438383730346137623663643664353739626530336166 Dec 12 18:22:05.773000 audit: BPF prog-id=279 op=LOAD Dec 12 18:22:05.773000 audit[5952]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001786e8 a2=98 a3=0 items=0 ppid=5861 pid=5952 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:22:05.773000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3063316639343438383730346137623663643664353739626530336166 Dec 12 18:22:05.789678 containerd[2529]: time="2025-12-12T18:22:05.789652184Z" level=info msg="StartContainer for \"0c1f94488704a7b6cd6d579be03aff468dac75565d1d4e58b0bf46283902b352\" returns successfully" Dec 12 18:22:05.963899 systemd-networkd[2161]: calib43ac761da3: Gained IPv6LL Dec 12 18:22:05.992785 containerd[2529]: time="2025-12-12T18:22:05.992732625Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:22:05.995912 containerd[2529]: time="2025-12-12T18:22:05.995878037Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 12 18:22:05.995985 containerd[2529]: time="2025-12-12T18:22:05.995958830Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 12 18:22:05.996214 kubelet[4009]: E1212 18:22:05.996177 4009 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 18:22:05.996505 kubelet[4009]: E1212 18:22:05.996228 4009 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 18:22:05.996505 kubelet[4009]: E1212 18:22:05.996387 4009 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pbzj2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-5c595dfbb8-snvpt_calico-system(cdef70f5-08e1-4939-ba2b-d4a667c25459): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 12 18:22:05.997810 kubelet[4009]: E1212 18:22:05.997767 4009 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5c595dfbb8-snvpt" podUID="cdef70f5-08e1-4939-ba2b-d4a667c25459" Dec 12 18:22:06.219768 systemd-networkd[2161]: cali05bb3c40a4f: Gained IPv6LL Dec 12 18:22:06.474681 kubelet[4009]: E1212 18:22:06.474568 4009 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5c595dfbb8-snvpt" podUID="cdef70f5-08e1-4939-ba2b-d4a667c25459" Dec 12 18:22:06.479697 kubelet[4009]: E1212 18:22:06.479652 4009 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-bq297" podUID="10291b6f-a9ca-4c45-b211-06a17f4d693f" Dec 12 18:22:06.480129 kubelet[4009]: E1212 18:22:06.480102 4009 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-4rvrk" podUID="985aeb6e-874b-4206-ac9b-71fb5aaf32cf" Dec 12 18:22:06.559000 audit[5986]: NETFILTER_CFG table=filter:143 family=2 entries=14 op=nft_register_rule pid=5986 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:22:06.559000 audit[5986]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fffdf1e2b30 a2=0 a3=7fffdf1e2b1c items=0 ppid=4115 pid=5986 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:22:06.559000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:22:06.585000 audit[5986]: NETFILTER_CFG table=nat:144 family=2 entries=56 op=nft_register_chain pid=5986 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:22:06.585000 audit[5986]: SYSCALL arch=c000003e syscall=46 success=yes exit=19860 a0=3 a1=7fffdf1e2b30 a2=0 a3=7fffdf1e2b1c items=0 ppid=4115 pid=5986 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:22:06.585000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:22:07.179739 systemd-networkd[2161]: cali7067e81a900: Gained IPv6LL Dec 12 18:22:07.243675 systemd-networkd[2161]: cali3f91071e616: Gained IPv6LL Dec 12 
18:22:07.481683 kubelet[4009]: E1212 18:22:07.481409 4009 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5c595dfbb8-snvpt" podUID="cdef70f5-08e1-4939-ba2b-d4a667c25459" Dec 12 18:22:07.498971 kubelet[4009]: I1212 18:22:07.497869 4009 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-qszww" podStartSLOduration=42.497849947 podStartE2EDuration="42.497849947s" podCreationTimestamp="2025-12-12 18:21:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 18:22:06.542911387 +0000 UTC m=+46.388795035" watchObservedRunningTime="2025-12-12 18:22:07.497849947 +0000 UTC m=+47.343733601" Dec 12 18:22:12.313079 containerd[2529]: time="2025-12-12T18:22:12.312953369Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 12 18:22:12.554389 containerd[2529]: time="2025-12-12T18:22:12.554321579Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:22:12.559438 containerd[2529]: time="2025-12-12T18:22:12.559405766Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 12 18:22:12.559483 containerd[2529]: time="2025-12-12T18:22:12.559472283Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 12 18:22:12.559661 kubelet[4009]: E1212 18:22:12.559628 4009 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 18:22:12.559980 kubelet[4009]: E1212 18:22:12.559670 4009 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 18:22:12.559980 kubelet[4009]: E1212 18:22:12.559802 4009 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:ceb61861a70e4d05aad200164b4c6cd2,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xmxlz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-66465f8f84-gfntv_calico-system(129d48cc-df60-49a9-8eb1-5bf2a56866a1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 12 18:22:12.562893 containerd[2529]: time="2025-12-12T18:22:12.562812706Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 12 18:22:12.800779 containerd[2529]: time="2025-12-12T18:22:12.800730653Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:22:12.803943 containerd[2529]: time="2025-12-12T18:22:12.803911243Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 12 18:22:12.804002 containerd[2529]: time="2025-12-12T18:22:12.803984376Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 12 18:22:12.804167 kubelet[4009]: E1212 18:22:12.804133 4009 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 18:22:12.804228 kubelet[4009]: E1212 18:22:12.804180 4009 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 18:22:12.804332 kubelet[4009]: E1212 18:22:12.804303 4009 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xmxlz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-66465f8f84-gfntv_calico-system(129d48cc-df60-49a9-8eb1-5bf2a56866a1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 12 18:22:12.805834 kubelet[4009]: E1212 18:22:12.805796 4009 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-66465f8f84-gfntv" podUID="129d48cc-df60-49a9-8eb1-5bf2a56866a1" Dec 12 18:22:15.312068 containerd[2529]: time="2025-12-12T18:22:15.311763916Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 18:22:15.568688 containerd[2529]: time="2025-12-12T18:22:15.568564039Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:22:15.571873 containerd[2529]: time="2025-12-12T18:22:15.571835636Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 
12 18:22:15.571979 containerd[2529]: time="2025-12-12T18:22:15.571844780Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 12 18:22:15.572097 kubelet[4009]: E1212 18:22:15.572064 4009 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:22:15.572405 kubelet[4009]: E1212 18:22:15.572105 4009 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:22:15.572405 kubelet[4009]: E1212 18:22:15.572246 4009 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m79zj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-74c7564cb4-4f4mj_calico-apiserver(c72f9f09-ad62-40d6-8632-b537f3032703): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 18:22:15.573496 kubelet[4009]: E1212 18:22:15.573458 4009 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with 
ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74c7564cb4-4f4mj" podUID="c72f9f09-ad62-40d6-8632-b537f3032703" Dec 12 18:22:17.312494 containerd[2529]: time="2025-12-12T18:22:17.311751481Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 18:22:17.556048 containerd[2529]: time="2025-12-12T18:22:17.556006829Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:22:17.559740 containerd[2529]: time="2025-12-12T18:22:17.559712886Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 18:22:17.559800 containerd[2529]: time="2025-12-12T18:22:17.559780841Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 12 18:22:17.559925 kubelet[4009]: E1212 18:22:17.559886 4009 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:22:17.560238 kubelet[4009]: E1212 18:22:17.559933 4009 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:22:17.560238 kubelet[4009]: E1212 18:22:17.560078 4009 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f2w4t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-74c7564cb4-hnnls_calico-apiserver(588d2c87-beaf-4e83-ba6f-8e3f0d453589): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 18:22:17.561355 kubelet[4009]: E1212 18:22:17.561318 4009 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74c7564cb4-hnnls" podUID="588d2c87-beaf-4e83-ba6f-8e3f0d453589" Dec 12 18:22:18.313545 containerd[2529]: time="2025-12-12T18:22:18.313036050Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 12 18:22:18.571770 containerd[2529]: time="2025-12-12T18:22:18.571641392Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:22:18.575505 containerd[2529]: time="2025-12-12T18:22:18.575456556Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 12 18:22:18.575659 containerd[2529]: time="2025-12-12T18:22:18.575472206Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 12 18:22:18.575688 kubelet[4009]: E1212 18:22:18.575659 4009 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 18:22:18.575968 kubelet[4009]: E1212 18:22:18.575701 4009 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 18:22:18.575994 kubelet[4009]: E1212 18:22:18.575937 4009 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cs47m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-4rvrk_calico-system(985aeb6e-874b-4206-ac9b-71fb5aaf32cf): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 12 18:22:18.576495 containerd[2529]: time="2025-12-12T18:22:18.576451834Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 12 18:22:18.577787 kubelet[4009]: E1212 18:22:18.577760 4009 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-4rvrk" podUID="985aeb6e-874b-4206-ac9b-71fb5aaf32cf" Dec 12 
18:22:18.835098 containerd[2529]: time="2025-12-12T18:22:18.834625215Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:22:18.838907 containerd[2529]: time="2025-12-12T18:22:18.838328342Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 12 18:22:18.839101 containerd[2529]: time="2025-12-12T18:22:18.838893269Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 12 18:22:18.839346 kubelet[4009]: E1212 18:22:18.839313 4009 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 18:22:18.839415 kubelet[4009]: E1212 18:22:18.839356 4009 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 18:22:18.839540 kubelet[4009]: E1212 18:22:18.839492 4009 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rf7fn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-bq297_calico-system(10291b6f-a9ca-4c45-b211-06a17f4d693f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 12 18:22:18.842399 containerd[2529]: time="2025-12-12T18:22:18.842229241Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 12 18:22:19.105276 containerd[2529]: time="2025-12-12T18:22:19.105152443Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:22:19.109475 containerd[2529]: time="2025-12-12T18:22:19.109420110Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 12 18:22:19.109475 containerd[2529]: time="2025-12-12T18:22:19.109455669Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 12 18:22:19.109687 kubelet[4009]: E1212 18:22:19.109650 4009 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 18:22:19.109739 kubelet[4009]: E1212 18:22:19.109699 4009 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 18:22:19.109871 kubelet[4009]: E1212 18:22:19.109817 4009 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rf7fn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-bq297_calico-system(10291b6f-a9ca-4c45-b211-06a17f4d693f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 12 18:22:19.111307 kubelet[4009]: E1212 18:22:19.111266 4009 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-bq297" podUID="10291b6f-a9ca-4c45-b211-06a17f4d693f" Dec 12 18:22:19.312761 containerd[2529]: time="2025-12-12T18:22:19.311981589Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 12 18:22:19.553883 containerd[2529]: time="2025-12-12T18:22:19.553830827Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:22:19.559121 containerd[2529]: time="2025-12-12T18:22:19.559089951Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 12 18:22:19.559182 containerd[2529]: 
time="2025-12-12T18:22:19.559171420Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 12 18:22:19.559340 kubelet[4009]: E1212 18:22:19.559307 4009 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 18:22:19.559406 kubelet[4009]: E1212 18:22:19.559351 4009 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 18:22:19.559561 kubelet[4009]: E1212 18:22:19.559498 4009 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pbzj2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
calico-kube-controllers-5c595dfbb8-snvpt_calico-system(cdef70f5-08e1-4939-ba2b-d4a667c25459): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 12 18:22:19.560791 kubelet[4009]: E1212 18:22:19.560756 4009 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5c595dfbb8-snvpt" podUID="cdef70f5-08e1-4939-ba2b-d4a667c25459" Dec 12 18:22:23.313110 kubelet[4009]: E1212 18:22:23.313059 4009 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-66465f8f84-gfntv" podUID="129d48cc-df60-49a9-8eb1-5bf2a56866a1" Dec 12 18:22:27.312121 kubelet[4009]: E1212 18:22:27.312057 4009 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74c7564cb4-4f4mj" podUID="c72f9f09-ad62-40d6-8632-b537f3032703" Dec 12 18:22:28.316685 kubelet[4009]: E1212 18:22:28.316318 4009 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74c7564cb4-hnnls" podUID="588d2c87-beaf-4e83-ba6f-8e3f0d453589" Dec 12 18:22:29.312283 kubelet[4009]: E1212 18:22:29.311915 4009 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-4rvrk" podUID="985aeb6e-874b-4206-ac9b-71fb5aaf32cf" Dec 12 18:22:31.312903 
kubelet[4009]: E1212 18:22:31.312789 4009 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-bq297" podUID="10291b6f-a9ca-4c45-b211-06a17f4d693f" Dec 12 18:22:34.316541 kubelet[4009]: E1212 18:22:34.315887 4009 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5c595dfbb8-snvpt" podUID="cdef70f5-08e1-4939-ba2b-d4a667c25459" Dec 12 18:22:37.312785 containerd[2529]: time="2025-12-12T18:22:37.312190087Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 12 18:22:37.629615 containerd[2529]: time="2025-12-12T18:22:37.627665153Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:22:37.633357 containerd[2529]: time="2025-12-12T18:22:37.633311117Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 12 18:22:37.633467 containerd[2529]: time="2025-12-12T18:22:37.633408301Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 12 18:22:37.633602 kubelet[4009]: E1212 18:22:37.633566 4009 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 18:22:37.633893 kubelet[4009]: E1212 18:22:37.633643 4009 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 18:22:37.633893 kubelet[4009]: E1212 18:22:37.633820 4009 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:ceb61861a70e4d05aad200164b4c6cd2,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xmxlz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-66465f8f84-gfntv_calico-system(129d48cc-df60-49a9-8eb1-5bf2a56866a1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 12 18:22:37.636205 containerd[2529]: time="2025-12-12T18:22:37.636162799Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 12 18:22:37.886602 containerd[2529]: time="2025-12-12T18:22:37.886396066Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:22:37.889563 containerd[2529]: time="2025-12-12T18:22:37.889482853Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 12 18:22:37.889804 containerd[2529]: time="2025-12-12T18:22:37.889534070Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 12 18:22:37.889984 kubelet[4009]: E1212 18:22:37.889935 4009 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 18:22:37.890611 kubelet[4009]: E1212 18:22:37.890560 4009 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 18:22:37.891164 kubelet[4009]: E1212 18:22:37.891116 4009 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xmxlz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-66465f8f84-gfntv_calico-system(129d48cc-df60-49a9-8eb1-5bf2a56866a1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 12 18:22:37.892475 kubelet[4009]: E1212 18:22:37.892423 4009 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-66465f8f84-gfntv" podUID="129d48cc-df60-49a9-8eb1-5bf2a56866a1" Dec 12 18:22:39.312562 containerd[2529]: time="2025-12-12T18:22:39.312498579Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 18:22:39.555835 containerd[2529]: time="2025-12-12T18:22:39.555790611Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:22:39.559018 containerd[2529]: time="2025-12-12T18:22:39.558893698Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 
12 18:22:39.559018 containerd[2529]: time="2025-12-12T18:22:39.558987223Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 12 18:22:39.559536 kubelet[4009]: E1212 18:22:39.559300 4009 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:22:39.559536 kubelet[4009]: E1212 18:22:39.559362 4009 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:22:39.560367 kubelet[4009]: E1212 18:22:39.559953 4009 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m79zj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-74c7564cb4-4f4mj_calico-apiserver(c72f9f09-ad62-40d6-8632-b537f3032703): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 18:22:39.562265 kubelet[4009]: E1212 18:22:39.561636 4009 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with 
ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74c7564cb4-4f4mj" podUID="c72f9f09-ad62-40d6-8632-b537f3032703" Dec 12 18:22:40.313929 containerd[2529]: time="2025-12-12T18:22:40.313224188Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 18:22:40.569383 containerd[2529]: time="2025-12-12T18:22:40.568735873Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:22:40.571616 containerd[2529]: time="2025-12-12T18:22:40.571550590Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 18:22:40.571941 containerd[2529]: time="2025-12-12T18:22:40.571823576Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 12 18:22:40.572418 kubelet[4009]: E1212 18:22:40.572363 4009 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:22:40.572836 kubelet[4009]: E1212 18:22:40.572800 4009 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:22:40.573289 kubelet[4009]: E1212 18:22:40.573079 4009 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f2w4t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-74c7564cb4-hnnls_calico-apiserver(588d2c87-beaf-4e83-ba6f-8e3f0d453589): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 18:22:40.574601 kubelet[4009]: E1212 18:22:40.574569 4009 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74c7564cb4-hnnls" podUID="588d2c87-beaf-4e83-ba6f-8e3f0d453589" Dec 12 18:22:42.314922 containerd[2529]: time="2025-12-12T18:22:42.314879651Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 12 18:22:42.561787 containerd[2529]: time="2025-12-12T18:22:42.561712760Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:22:42.564606 containerd[2529]: time="2025-12-12T18:22:42.564502473Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 12 18:22:42.564853 containerd[2529]: time="2025-12-12T18:22:42.564742430Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 12 18:22:42.565480 kubelet[4009]: E1212 18:22:42.565022 4009 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 18:22:42.565480 kubelet[4009]: E1212 18:22:42.565439 4009 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 18:22:42.566701 kubelet[4009]: E1212 18:22:42.566617 4009 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cs47m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-4rvrk_calico-system(985aeb6e-874b-4206-ac9b-71fb5aaf32cf): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 12 18:22:42.568419 kubelet[4009]: E1212 18:22:42.568367 4009 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-4rvrk" podUID="985aeb6e-874b-4206-ac9b-71fb5aaf32cf" Dec 12 18:22:43.312500 containerd[2529]: time="2025-12-12T18:22:43.312459843Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 12 
18:22:43.555951 containerd[2529]: time="2025-12-12T18:22:43.555797036Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:22:43.558770 containerd[2529]: time="2025-12-12T18:22:43.558675789Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 12 18:22:43.558770 containerd[2529]: time="2025-12-12T18:22:43.558717945Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 12 18:22:43.558911 kubelet[4009]: E1212 18:22:43.558858 4009 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 18:22:43.558911 kubelet[4009]: E1212 18:22:43.558897 4009 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 18:22:43.559073 kubelet[4009]: E1212 18:22:43.559039 4009 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rf7fn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-bq297_calico-system(10291b6f-a9ca-4c45-b211-06a17f4d693f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 12 18:22:43.561685 containerd[2529]: time="2025-12-12T18:22:43.561656535Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 12 18:22:43.808308 containerd[2529]: time="2025-12-12T18:22:43.808258732Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:22:43.810739 containerd[2529]: time="2025-12-12T18:22:43.810712924Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 12 18:22:43.810810 containerd[2529]: time="2025-12-12T18:22:43.810787696Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 12 18:22:43.810957 kubelet[4009]: E1212 18:22:43.810923 4009 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 18:22:43.811269 kubelet[4009]: E1212 18:22:43.810971 4009 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 18:22:43.811269 kubelet[4009]: E1212 18:22:43.811105 4009 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rf7fn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-bq297_calico-system(10291b6f-a9ca-4c45-b211-06a17f4d693f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 12 18:22:43.812371 kubelet[4009]: E1212 18:22:43.812319 4009 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-bq297" podUID="10291b6f-a9ca-4c45-b211-06a17f4d693f" Dec 12 18:22:49.313546 containerd[2529]: time="2025-12-12T18:22:49.313307910Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 12 18:22:49.315063 kubelet[4009]: E1212 18:22:49.314956 4009 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-66465f8f84-gfntv" podUID="129d48cc-df60-49a9-8eb1-5bf2a56866a1" Dec 12 18:22:49.566759 containerd[2529]: time="2025-12-12T18:22:49.564987132Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:22:49.567902 containerd[2529]: time="2025-12-12T18:22:49.567869425Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 12 18:22:49.568021 containerd[2529]: time="2025-12-12T18:22:49.567942130Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 12 18:22:49.568146 kubelet[4009]: E1212 18:22:49.568104 4009 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 18:22:49.568196 kubelet[4009]: E1212 18:22:49.568157 4009 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 18:22:49.568660 kubelet[4009]: E1212 18:22:49.568314 4009 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pbzj2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-5c595dfbb8-snvpt_calico-system(cdef70f5-08e1-4939-ba2b-d4a667c25459): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 12 18:22:49.569632 kubelet[4009]: E1212 18:22:49.569534 4009 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5c595dfbb8-snvpt" podUID="cdef70f5-08e1-4939-ba2b-d4a667c25459" Dec 12 18:22:52.313974 kubelet[4009]: E1212 18:22:52.313931 4009 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74c7564cb4-4f4mj" podUID="c72f9f09-ad62-40d6-8632-b537f3032703" Dec 12 18:22:52.314428 kubelet[4009]: E1212 18:22:52.314300 4009 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74c7564cb4-hnnls" podUID="588d2c87-beaf-4e83-ba6f-8e3f0d453589" Dec 12 18:22:55.314763 kubelet[4009]: E1212 18:22:55.314338 4009 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-bq297" podUID="10291b6f-a9ca-4c45-b211-06a17f4d693f" Dec 12 18:22:57.312796 kubelet[4009]: E1212 18:22:57.312701 4009 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-4rvrk" podUID="985aeb6e-874b-4206-ac9b-71fb5aaf32cf" Dec 12 18:23:01.313672 kubelet[4009]: E1212 18:23:01.313277 4009 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-66465f8f84-gfntv" podUID="129d48cc-df60-49a9-8eb1-5bf2a56866a1" Dec 12 18:23:02.178918 kernel: kauditd_printk_skb: 239 callbacks suppressed Dec 12 18:23:02.179030 kernel: audit: type=1130 audit(1765563782.167:771): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.200.8.12:22-10.200.16.10:52820 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:23:02.167000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.200.8.12:22-10.200.16.10:52820 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:23:02.168797 systemd[1]: Started sshd@7-10.200.8.12:22-10.200.16.10:52820.service - OpenSSH per-connection server daemon (10.200.16.10:52820). 
Dec 12 18:23:02.314398 kubelet[4009]: E1212 18:23:02.314352 4009 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5c595dfbb8-snvpt" podUID="cdef70f5-08e1-4939-ba2b-d4a667c25459" Dec 12 18:23:02.730000 audit[6079]: USER_ACCT pid=6079 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 18:23:02.733716 sshd-session[6079]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:23:02.735010 sshd[6079]: Accepted publickey for core from 10.200.16.10 port 52820 ssh2: RSA SHA256:MDSGev8JoWWrhohyx7j99tYqdskhx1insLzl+tdEp00 Dec 12 18:23:02.738686 kernel: audit: type=1101 audit(1765563782.730:772): pid=6079 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 18:23:02.738767 kernel: audit: type=1103 audit(1765563782.731:773): pid=6079 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 18:23:02.731000 audit[6079]: CRED_ACQ pid=6079 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 18:23:02.749488 kernel: audit: type=1006 audit(1765563782.731:774): pid=6079 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1 Dec 12 18:23:02.746249 systemd-logind[2512]: New session 10 of user core. Dec 12 18:23:02.731000 audit[6079]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc28b85ce0 a2=3 a3=0 items=0 ppid=1 pid=6079 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:23:02.753720 kernel: audit: type=1300 audit(1765563782.731:774): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc28b85ce0 a2=3 a3=0 items=0 ppid=1 pid=6079 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:23:02.731000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 18:23:02.757035 kernel: audit: type=1327 audit(1765563782.731:774): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 18:23:02.758771 systemd[1]: Started session-10.scope - Session 10 of User core. 
Dec 12 18:23:02.761000 audit[6079]: USER_START pid=6079 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 18:23:02.770550 kernel: audit: type=1105 audit(1765563782.761:775): pid=6079 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 18:23:02.771000 audit[6082]: CRED_ACQ pid=6082 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 18:23:02.783661 kernel: audit: type=1103 audit(1765563782.771:776): pid=6082 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 18:23:03.127975 sshd[6082]: Connection closed by 10.200.16.10 port 52820 Dec 12 18:23:03.128915 sshd-session[6079]: pam_unix(sshd:session): session closed for user core Dec 12 18:23:03.128000 audit[6079]: USER_END pid=6079 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 18:23:03.135876 systemd[1]: sshd@7-10.200.8.12:22-10.200.16.10:52820.service: Deactivated successfully. Dec 12 18:23:03.140995 kernel: audit: type=1106 audit(1765563783.128:777): pid=6079 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 18:23:03.141071 kernel: audit: type=1104 audit(1765563783.128:778): pid=6079 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 18:23:03.128000 audit[6079]: CRED_DISP pid=6079 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 18:23:03.140230 systemd[1]: session-10.scope: Deactivated successfully. Dec 12 18:23:03.135000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.200.8.12:22-10.200.16.10:52820 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:23:03.142115 systemd-logind[2512]: Session 10 logged out. Waiting for processes to exit. Dec 12 18:23:03.143219 systemd-logind[2512]: Removed session 10. 
Dec 12 18:23:03.311733 kubelet[4009]: E1212 18:23:03.311693 4009 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74c7564cb4-4f4mj" podUID="c72f9f09-ad62-40d6-8632-b537f3032703" Dec 12 18:23:04.313449 kubelet[4009]: E1212 18:23:04.313360 4009 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74c7564cb4-hnnls" podUID="588d2c87-beaf-4e83-ba6f-8e3f0d453589" Dec 12 18:23:08.241831 systemd[1]: Started sshd@8-10.200.8.12:22-10.200.16.10:52836.service - OpenSSH per-connection server daemon (10.200.16.10:52836). Dec 12 18:23:08.248594 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 12 18:23:08.248684 kernel: audit: type=1130 audit(1765563788.241:780): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.200.8.12:22-10.200.16.10:52836 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:23:08.241000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.200.8.12:22-10.200.16.10:52836 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:23:08.788000 audit[6096]: USER_ACCT pid=6096 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 18:23:08.789616 sshd[6096]: Accepted publickey for core from 10.200.16.10 port 52836 ssh2: RSA SHA256:MDSGev8JoWWrhohyx7j99tYqdskhx1insLzl+tdEp00 Dec 12 18:23:08.791393 sshd-session[6096]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:23:08.790000 audit[6096]: CRED_ACQ pid=6096 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 18:23:08.796508 systemd-logind[2512]: New session 11 of user core. 
Dec 12 18:23:08.802017 kernel: audit: type=1101 audit(1765563788.788:781): pid=6096 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 18:23:08.802081 kernel: audit: type=1103 audit(1765563788.790:782): pid=6096 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 18:23:08.807330 kernel: audit: type=1006 audit(1765563788.790:783): pid=6096 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1 Dec 12 18:23:08.790000 audit[6096]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff301df0d0 a2=3 a3=0 items=0 ppid=1 pid=6096 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:23:08.790000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 18:23:08.815896 systemd[1]: Started session-11.scope - Session 11 of User core. Dec 12 18:23:08.818255 kernel: audit: type=1300 audit(1765563788.790:783): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff301df0d0 a2=3 a3=0 items=0 ppid=1 pid=6096 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:23:08.818320 kernel: audit: type=1327 audit(1765563788.790:783): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 18:23:08.818000 audit[6096]: USER_START pid=6096 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 18:23:08.820000 audit[6099]: CRED_ACQ pid=6099 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 18:23:08.829937 kernel: audit: type=1105 audit(1765563788.818:784): pid=6096 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 18:23:08.829986 kernel: audit: type=1103 audit(1765563788.820:785): pid=6099 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 18:23:09.155615 sshd[6099]: Connection closed by 10.200.16.10 port 52836 Dec 12 18:23:09.156360 sshd-session[6096]: pam_unix(sshd:session): session closed for user core Dec 12 18:23:09.157000 audit[6096]: USER_END pid=6096 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close 
grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 18:23:09.160574 systemd[1]: sshd@8-10.200.8.12:22-10.200.16.10:52836.service: Deactivated successfully. Dec 12 18:23:09.163483 systemd[1]: session-11.scope: Deactivated successfully. Dec 12 18:23:09.165607 systemd-logind[2512]: Session 11 logged out. Waiting for processes to exit. Dec 12 18:23:09.166613 kernel: audit: type=1106 audit(1765563789.157:786): pid=6096 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 18:23:09.167797 systemd-logind[2512]: Removed session 11. Dec 12 18:23:09.175918 kernel: audit: type=1104 audit(1765563789.157:787): pid=6096 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 18:23:09.157000 audit[6096]: CRED_DISP pid=6096 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 18:23:09.160000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.200.8.12:22-10.200.16.10:52836 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 18:23:09.312802 kubelet[4009]: E1212 18:23:09.312110 4009 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-4rvrk" podUID="985aeb6e-874b-4206-ac9b-71fb5aaf32cf" Dec 12 18:23:09.313224 kubelet[4009]: E1212 18:23:09.312911 4009 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-bq297" podUID="10291b6f-a9ca-4c45-b211-06a17f4d693f" Dec 12 18:23:13.314241 kubelet[4009]: E1212 18:23:13.314197 4009 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-66465f8f84-gfntv" podUID="129d48cc-df60-49a9-8eb1-5bf2a56866a1" Dec 12 18:23:14.269467 systemd[1]: Started sshd@9-10.200.8.12:22-10.200.16.10:48172.service - OpenSSH per-connection server daemon (10.200.16.10:48172). Dec 12 18:23:14.276718 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 12 18:23:14.276805 kernel: audit: type=1130 audit(1765563794.269:789): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.200.8.12:22-10.200.16.10:48172 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:23:14.269000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.200.8.12:22-10.200.16.10:48172 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 18:23:14.808000 audit[6112]: USER_ACCT pid=6112 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 18:23:14.818073 kernel: audit: type=1101 audit(1765563794.808:790): pid=6112 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 18:23:14.818215 sshd[6112]: Accepted publickey for core from 10.200.16.10 port 48172 ssh2: RSA SHA256:MDSGev8JoWWrhohyx7j99tYqdskhx1insLzl+tdEp00 Dec 12 18:23:14.819086 sshd-session[6112]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:23:14.818000 audit[6112]: CRED_ACQ pid=6112 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 18:23:14.827539 kernel: audit: type=1103 audit(1765563794.818:791): pid=6112 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 18:23:14.833538 kernel: audit: type=1006 audit(1765563794.818:792): pid=6112 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=12 res=1 Dec 12 18:23:14.818000 audit[6112]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd38bc9aa0 a2=3 a3=0 items=0 ppid=1 pid=6112 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:23:14.842963 systemd-logind[2512]: New session 12 of user core. Dec 12 18:23:14.843535 kernel: audit: type=1300 audit(1765563794.818:792): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd38bc9aa0 a2=3 a3=0 items=0 ppid=1 pid=6112 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:23:14.818000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 18:23:14.847538 kernel: audit: type=1327 audit(1765563794.818:792): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 18:23:14.850713 systemd[1]: Started session-12.scope - Session 12 of User core. 
Dec 12 18:23:14.853000 audit[6112]: USER_START pid=6112 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 18:23:14.856000 audit[6115]: CRED_ACQ pid=6115 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 18:23:14.868249 kernel: audit: type=1105 audit(1765563794.853:793): pid=6112 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 18:23:14.868316 kernel: audit: type=1103 audit(1765563794.856:794): pid=6115 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 18:23:15.200811 sshd[6115]: Connection closed by 10.200.16.10 port 48172 Dec 12 18:23:15.201381 sshd-session[6112]: pam_unix(sshd:session): session closed for user core Dec 12 18:23:15.202000 audit[6112]: USER_END pid=6112 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 18:23:15.206861 systemd[1]: sshd@9-10.200.8.12:22-10.200.16.10:48172.service: Deactivated successfully. Dec 12 18:23:15.211553 kernel: audit: type=1106 audit(1765563795.202:795): pid=6112 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 18:23:15.202000 audit[6112]: CRED_DISP pid=6112 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 18:23:15.213161 systemd[1]: session-12.scope: Deactivated successfully. Dec 12 18:23:15.216465 systemd-logind[2512]: Session 12 logged out. Waiting for processes to exit. Dec 12 18:23:15.218602 systemd-logind[2512]: Removed session 12. Dec 12 18:23:15.206000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.200.8.12:22-10.200.16.10:48172 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 18:23:15.219585 kernel: audit: type=1104 audit(1765563795.202:796): pid=6112 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 18:23:15.311344 systemd[1]: Started sshd@10-10.200.8.12:22-10.200.16.10:48174.service - OpenSSH per-connection server daemon (10.200.16.10:48174). Dec 12 18:23:15.311000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.200.8.12:22-10.200.16.10:48174 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:23:15.314277 kubelet[4009]: E1212 18:23:15.314202 4009 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5c595dfbb8-snvpt" podUID="cdef70f5-08e1-4939-ba2b-d4a667c25459" Dec 12 18:23:15.852000 audit[6128]: USER_ACCT pid=6128 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 18:23:15.853476 sshd[6128]: Accepted publickey for core from 10.200.16.10 port 48174 ssh2: RSA SHA256:MDSGev8JoWWrhohyx7j99tYqdskhx1insLzl+tdEp00 Dec 12 18:23:15.854000 audit[6128]: CRED_ACQ pid=6128 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 18:23:15.854000 audit[6128]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff22034700 a2=3 a3=0 items=0 ppid=1 pid=6128 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:23:15.854000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 18:23:15.855304 sshd-session[6128]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:23:15.861952 systemd-logind[2512]: New session 13 of user core. Dec 12 18:23:15.866687 systemd[1]: Started session-13.scope - Session 13 of User core. 
Dec 12 18:23:15.868000 audit[6128]: USER_START pid=6128 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 18:23:15.870000 audit[6131]: CRED_ACQ pid=6131 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 18:23:16.267088 sshd[6131]: Connection closed by 10.200.16.10 port 48174 Dec 12 18:23:16.268710 sshd-session[6128]: pam_unix(sshd:session): session closed for user core Dec 12 18:23:16.270000 audit[6128]: USER_END pid=6128 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 18:23:16.270000 audit[6128]: CRED_DISP pid=6128 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 18:23:16.273170 systemd-logind[2512]: Session 13 logged out. Waiting for processes to exit. Dec 12 18:23:16.275224 systemd[1]: sshd@10-10.200.8.12:22-10.200.16.10:48174.service: Deactivated successfully. Dec 12 18:23:16.274000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.200.8.12:22-10.200.16.10:48174 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:23:16.278487 systemd[1]: session-13.scope: Deactivated successfully. Dec 12 18:23:16.282679 systemd-logind[2512]: Removed session 13. Dec 12 18:23:16.381121 systemd[1]: Started sshd@11-10.200.8.12:22-10.200.16.10:48188.service - OpenSSH per-connection server daemon (10.200.16.10:48188). Dec 12 18:23:16.380000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.200.8.12:22-10.200.16.10:48188 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 18:23:16.941000 audit[6141]: USER_ACCT pid=6141 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 18:23:16.942454 sshd[6141]: Accepted publickey for core from 10.200.16.10 port 48188 ssh2: RSA SHA256:MDSGev8JoWWrhohyx7j99tYqdskhx1insLzl+tdEp00 Dec 12 18:23:16.942000 audit[6141]: CRED_ACQ pid=6141 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 18:23:16.942000 audit[6141]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc90bb5730 a2=3 a3=0 items=0 ppid=1 pid=6141 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:23:16.942000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 18:23:16.943632 sshd-session[6141]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:23:16.948970 systemd-logind[2512]: New session 14 of user core. Dec 12 18:23:16.953721 systemd[1]: Started session-14.scope - Session 14 of User core. Dec 12 18:23:16.957000 audit[6141]: USER_START pid=6141 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 18:23:16.959000 audit[6144]: CRED_ACQ pid=6144 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 18:23:17.311126 kubelet[4009]: E1212 18:23:17.311008 4009 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74c7564cb4-4f4mj" podUID="c72f9f09-ad62-40d6-8632-b537f3032703" Dec 12 18:23:17.312081 kubelet[4009]: E1212 18:23:17.312039 4009 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74c7564cb4-hnnls" podUID="588d2c87-beaf-4e83-ba6f-8e3f0d453589" Dec 12 18:23:17.312458 sshd[6144]: Connection closed by 10.200.16.10 port 48188 Dec 12 18:23:17.313142 sshd-session[6141]: pam_unix(sshd:session): session closed for user core Dec 12 18:23:17.314000 audit[6141]: USER_END pid=6141 uid=0 auid=500 ses=14 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 18:23:17.315000 audit[6141]: CRED_DISP pid=6141 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 18:23:17.317929 systemd[1]: sshd@11-10.200.8.12:22-10.200.16.10:48188.service: Deactivated successfully. Dec 12 18:23:17.317000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.200.8.12:22-10.200.16.10:48188 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:23:17.321240 systemd[1]: session-14.scope: Deactivated successfully. Dec 12 18:23:17.322065 systemd-logind[2512]: Session 14 logged out. Waiting for processes to exit. Dec 12 18:23:17.324638 systemd-logind[2512]: Removed session 14. Dec 12 18:23:22.315299 kubelet[4009]: E1212 18:23:22.314918 4009 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-bq297" podUID="10291b6f-a9ca-4c45-b211-06a17f4d693f" Dec 12 18:23:22.425056 systemd[1]: Started sshd@12-10.200.8.12:22-10.200.16.10:56688.service - OpenSSH per-connection server daemon (10.200.16.10:56688). Dec 12 18:23:22.428059 kernel: kauditd_printk_skb: 23 callbacks suppressed Dec 12 18:23:22.428105 kernel: audit: type=1130 audit(1765563802.423:816): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.200.8.12:22-10.200.16.10:56688 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:23:22.423000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.200.8.12:22-10.200.16.10:56688 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 18:23:22.984000 audit[6169]: USER_ACCT pid=6169 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 18:23:22.991234 sshd[6169]: Accepted publickey for core from 10.200.16.10 port 56688 ssh2: RSA SHA256:MDSGev8JoWWrhohyx7j99tYqdskhx1insLzl+tdEp00 Dec 12 18:23:22.993541 kernel: audit: type=1101 audit(1765563802.984:817): pid=6169 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 18:23:22.993617 kernel: audit: type=1103 audit(1765563802.991:818): pid=6169 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 18:23:22.991000 audit[6169]: CRED_ACQ pid=6169 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 18:23:22.993103 sshd-session[6169]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:23:23.000085 systemd-logind[2512]: New session 15 of user core. Dec 12 18:23:23.001975 kernel: audit: type=1006 audit(1765563802.991:819): pid=6169 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=15 res=1 Dec 12 18:23:22.991000 audit[6169]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc5f3933d0 a2=3 a3=0 items=0 ppid=1 pid=6169 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:23:23.009599 kernel: audit: type=1300 audit(1765563802.991:819): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc5f3933d0 a2=3 a3=0 items=0 ppid=1 pid=6169 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:23:23.013699 kernel: audit: type=1327 audit(1765563802.991:819): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 18:23:22.991000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 18:23:23.013984 systemd[1]: Started session-15.scope - Session 15 of User core. 
Dec 12 18:23:23.015000 audit[6169]: USER_START pid=6169 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 18:23:23.016000 audit[6172]: CRED_ACQ pid=6172 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 18:23:23.032413 kernel: audit: type=1105 audit(1765563803.015:820): pid=6169 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 18:23:23.032486 kernel: audit: type=1103 audit(1765563803.016:821): pid=6172 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 18:23:23.346600 sshd[6172]: Connection closed by 10.200.16.10 port 56688 Dec 12 18:23:23.347634 sshd-session[6169]: pam_unix(sshd:session): session closed for user core Dec 12 18:23:23.347000 audit[6169]: USER_END pid=6169 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 18:23:23.358942 systemd[1]: sshd@12-10.200.8.12:22-10.200.16.10:56688.service: Deactivated successfully. Dec 12 18:23:23.364547 kernel: audit: type=1106 audit(1765563803.347:822): pid=6169 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 18:23:23.347000 audit[6169]: CRED_DISP pid=6169 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 18:23:23.367693 systemd[1]: session-15.scope: Deactivated successfully. Dec 12 18:23:23.369566 kernel: audit: type=1104 audit(1765563803.347:823): pid=6169 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 18:23:23.357000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.200.8.12:22-10.200.16.10:56688 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:23:23.371576 systemd-logind[2512]: Session 15 logged out. Waiting for processes to exit. Dec 12 18:23:23.372756 systemd-logind[2512]: Removed session 15. 
Dec 12 18:23:24.315360 containerd[2529]: time="2025-12-12T18:23:24.314639958Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 12 18:23:24.583793 containerd[2529]: time="2025-12-12T18:23:24.583661389Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:23:24.586946 containerd[2529]: time="2025-12-12T18:23:24.586809822Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 12 18:23:24.586946 containerd[2529]: time="2025-12-12T18:23:24.586912827Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 12 18:23:24.587122 kubelet[4009]: E1212 18:23:24.587073 4009 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 18:23:24.587689 kubelet[4009]: E1212 18:23:24.587138 4009 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 18:23:24.587689 kubelet[4009]: E1212 18:23:24.587305 4009 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cs47m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-4rvrk_calico-system(985aeb6e-874b-4206-ac9b-71fb5aaf32cf): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 12 18:23:24.588558 kubelet[4009]: E1212 18:23:24.588510 4009 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-4rvrk" podUID="985aeb6e-874b-4206-ac9b-71fb5aaf32cf" Dec 12 18:23:28.312806 containerd[2529]: time="2025-12-12T18:23:28.312476552Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 12 18:23:28.313368 kubelet[4009]: E1212 18:23:28.313097 4009 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5c595dfbb8-snvpt" podUID="cdef70f5-08e1-4939-ba2b-d4a667c25459" Dec 12 18:23:28.467545 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 12 18:23:28.467657 kernel: audit: type=1130 audit(1765563808.464:825): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.200.8.12:22-10.200.16.10:56692 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:23:28.464000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.200.8.12:22-10.200.16.10:56692 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:23:28.465849 systemd[1]: Started sshd@13-10.200.8.12:22-10.200.16.10:56692.service - OpenSSH per-connection server daemon (10.200.16.10:56692). 
Dec 12 18:23:28.554336 containerd[2529]: time="2025-12-12T18:23:28.554292854Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:23:28.557254 containerd[2529]: time="2025-12-12T18:23:28.557189324Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 12 18:23:28.558573 containerd[2529]: time="2025-12-12T18:23:28.557215449Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 12 18:23:28.558797 kubelet[4009]: E1212 18:23:28.558759 4009 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 18:23:28.558857 kubelet[4009]: E1212 18:23:28.558824 4009 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 18:23:28.558988 kubelet[4009]: E1212 18:23:28.558960 4009 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:ceb61861a70e4d05aad200164b4c6cd2,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xmxlz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-66465f8f84-gfntv_calico-system(129d48cc-df60-49a9-8eb1-5bf2a56866a1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 12 18:23:28.563386 containerd[2529]: time="2025-12-12T18:23:28.563080960Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 12 18:23:28.811800 containerd[2529]: time="2025-12-12T18:23:28.811753321Z" level=info msg="fetch 
failed after status: 404 Not Found" host=ghcr.io Dec 12 18:23:28.814776 containerd[2529]: time="2025-12-12T18:23:28.814667242Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 12 18:23:28.814874 containerd[2529]: time="2025-12-12T18:23:28.814776590Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 12 18:23:28.815685 kubelet[4009]: E1212 18:23:28.815628 4009 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 18:23:28.815775 kubelet[4009]: E1212 18:23:28.815700 4009 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 18:23:28.817764 kubelet[4009]: E1212 18:23:28.817705 4009 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xmxlz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-66465f8f84-gfntv_calico-system(129d48cc-df60-49a9-8eb1-5bf2a56866a1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 12 18:23:28.818927 kubelet[4009]: E1212 18:23:28.818884 4009 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-66465f8f84-gfntv" podUID="129d48cc-df60-49a9-8eb1-5bf2a56866a1" Dec 12 18:23:29.024000 audit[6189]: USER_ACCT pid=6189 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 18:23:29.032612 kernel: audit: type=1101 audit(1765563809.024:826): pid=6189 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 18:23:29.034544 sshd[6189]: Accepted publickey for core from 10.200.16.10 port 56692 ssh2: RSA SHA256:MDSGev8JoWWrhohyx7j99tYqdskhx1insLzl+tdEp00 Dec 12 18:23:29.035772 sshd-session[6189]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:23:29.034000 audit[6189]: CRED_ACQ pid=6189 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 18:23:29.049553 kernel: audit: type=1103 audit(1765563809.034:827): pid=6189 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 18:23:29.058345 systemd-logind[2512]: New session 16 of user core. Dec 12 18:23:29.059025 kernel: audit: type=1006 audit(1765563809.034:828): pid=6189 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Dec 12 18:23:29.034000 audit[6189]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc8b8c5f70 a2=3 a3=0 items=0 ppid=1 pid=6189 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:23:29.067696 systemd[1]: Started session-16.scope - Session 16 of User core. 
Dec 12 18:23:29.068583 kernel: audit: type=1300 audit(1765563809.034:828): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc8b8c5f70 a2=3 a3=0 items=0 ppid=1 pid=6189 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:23:29.034000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 18:23:29.076550 kernel: audit: type=1327 audit(1765563809.034:828): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 18:23:29.074000 audit[6189]: USER_START pid=6189 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 18:23:29.086585 kernel: audit: type=1105 audit(1765563809.074:829): pid=6189 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 18:23:29.076000 audit[6192]: CRED_ACQ pid=6192 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 18:23:29.096536 kernel: audit: type=1103 audit(1765563809.076:830): pid=6192 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 18:23:29.413662 sshd[6192]: Connection closed by 10.200.16.10 port 56692 Dec 12 18:23:29.412030 sshd-session[6189]: pam_unix(sshd:session): session closed for user core Dec 12 18:23:29.413000 audit[6189]: USER_END pid=6189 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 18:23:29.416046 systemd-logind[2512]: Session 16 logged out. Waiting for processes to exit. Dec 12 18:23:29.418013 systemd[1]: sshd@13-10.200.8.12:22-10.200.16.10:56692.service: Deactivated successfully. Dec 12 18:23:29.420362 systemd[1]: session-16.scope: Deactivated successfully. Dec 12 18:23:29.423365 kernel: audit: type=1106 audit(1765563809.413:831): pid=6189 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 18:23:29.422803 systemd-logind[2512]: Removed session 16. 
Dec 12 18:23:29.413000 audit[6189]: CRED_DISP pid=6189 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 18:23:29.413000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.200.8.12:22-10.200.16.10:56692 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:23:29.431025 kernel: audit: type=1104 audit(1765563809.413:832): pid=6189 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 18:23:30.315836 containerd[2529]: time="2025-12-12T18:23:30.315567026Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 18:23:30.577911 containerd[2529]: time="2025-12-12T18:23:30.577777651Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:23:30.587440 containerd[2529]: time="2025-12-12T18:23:30.587386666Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 18:23:30.587575 containerd[2529]: time="2025-12-12T18:23:30.587499891Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 12 18:23:30.587755 kubelet[4009]: E1212 18:23:30.587716 4009 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:23:30.588049 kubelet[4009]: E1212 18:23:30.587772 4009 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:23:30.588354 kubelet[4009]: E1212 18:23:30.588209 4009 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m79zj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-74c7564cb4-4f4mj_calico-apiserver(c72f9f09-ad62-40d6-8632-b537f3032703): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 18:23:30.589707 kubelet[4009]: E1212 18:23:30.589644 4009 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74c7564cb4-4f4mj" podUID="c72f9f09-ad62-40d6-8632-b537f3032703" Dec 12 18:23:32.317335 containerd[2529]: time="2025-12-12T18:23:32.317288861Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 18:23:32.574393 containerd[2529]: time="2025-12-12T18:23:32.573918705Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:23:32.577141 containerd[2529]: time="2025-12-12T18:23:32.577088637Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 18:23:32.577251 containerd[2529]: time="2025-12-12T18:23:32.577200734Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 12 18:23:32.577429 
kubelet[4009]: E1212 18:23:32.577389 4009 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:23:32.578119 kubelet[4009]: E1212 18:23:32.577447 4009 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:23:32.578119 kubelet[4009]: E1212 18:23:32.577614 4009 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f2w4t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-74c7564cb4-hnnls_calico-apiserver(588d2c87-beaf-4e83-ba6f-8e3f0d453589): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 18:23:32.579127 kubelet[4009]: E1212 18:23:32.579074 4009 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not 
found\"" pod="calico-apiserver/calico-apiserver-74c7564cb4-hnnls" podUID="588d2c87-beaf-4e83-ba6f-8e3f0d453589" Dec 12 18:23:34.316169 containerd[2529]: time="2025-12-12T18:23:34.316114017Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 12 18:23:34.527344 systemd[1]: Started sshd@14-10.200.8.12:22-10.200.16.10:46768.service - OpenSSH per-connection server daemon (10.200.16.10:46768). Dec 12 18:23:34.534818 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 12 18:23:34.534843 kernel: audit: type=1130 audit(1765563814.527:834): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.200.8.12:22-10.200.16.10:46768 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:23:34.527000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.200.8.12:22-10.200.16.10:46768 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:23:34.557121 containerd[2529]: time="2025-12-12T18:23:34.557083530Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:23:34.561565 containerd[2529]: time="2025-12-12T18:23:34.560510892Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 12 18:23:34.561565 containerd[2529]: time="2025-12-12T18:23:34.560643102Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 12 18:23:34.561669 kubelet[4009]: E1212 18:23:34.560769 4009 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 18:23:34.561669 kubelet[4009]: E1212 18:23:34.560838 4009 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 18:23:34.561669 kubelet[4009]: E1212 18:23:34.560977 4009 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rf7fn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-bq297_calico-system(10291b6f-a9ca-4c45-b211-06a17f4d693f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 12 18:23:34.563742 containerd[2529]: time="2025-12-12T18:23:34.563510560Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 12 18:23:34.809513 containerd[2529]: time="2025-12-12T18:23:34.809462587Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:23:34.812397 containerd[2529]: time="2025-12-12T18:23:34.812348127Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 12 18:23:34.812487 containerd[2529]: time="2025-12-12T18:23:34.812456019Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 12 18:23:34.812885 kubelet[4009]: E1212 18:23:34.812634 4009 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 18:23:34.812885 kubelet[4009]: E1212 18:23:34.812684 4009 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": 
failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 18:23:34.812885 kubelet[4009]: E1212 18:23:34.812816 4009 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rf7fn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-bq297_calico-system(10291b6f-a9ca-4c45-b211-06a17f4d693f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 12 18:23:34.814234 kubelet[4009]: E1212 18:23:34.814189 4009 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-bq297" podUID="10291b6f-a9ca-4c45-b211-06a17f4d693f" Dec 12 18:23:35.078000 audit[6234]: USER_ACCT pid=6234 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh 
res=success' Dec 12 18:23:35.080718 sshd[6234]: Accepted publickey for core from 10.200.16.10 port 46768 ssh2: RSA SHA256:MDSGev8JoWWrhohyx7j99tYqdskhx1insLzl+tdEp00 Dec 12 18:23:35.083490 sshd-session[6234]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:23:35.082000 audit[6234]: CRED_ACQ pid=6234 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 18:23:35.086652 kernel: audit: type=1101 audit(1765563815.078:835): pid=6234 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 18:23:35.086766 kernel: audit: type=1103 audit(1765563815.082:836): pid=6234 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 18:23:35.090289 kernel: audit: type=1006 audit(1765563815.082:837): pid=6234 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=17 res=1 Dec 12 18:23:35.091546 kernel: audit: type=1300 audit(1765563815.082:837): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffb8992630 a2=3 a3=0 items=0 ppid=1 pid=6234 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:23:35.082000 audit[6234]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffb8992630 a2=3 a3=0 items=0 ppid=1 pid=6234 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:23:35.096075 systemd-logind[2512]: New session 17 of user core. Dec 12 18:23:35.082000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 18:23:35.099029 kernel: audit: type=1327 audit(1765563815.082:837): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 18:23:35.107866 systemd[1]: Started session-17.scope - Session 17 of User core. 
Dec 12 18:23:35.118540 kernel: audit: type=1105 audit(1765563815.110:838): pid=6234 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 18:23:35.110000 audit[6234]: USER_START pid=6234 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 18:23:35.118000 audit[6237]: CRED_ACQ pid=6237 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 18:23:35.124545 kernel: audit: type=1103 audit(1765563815.118:839): pid=6237 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 18:23:35.441626 sshd[6237]: Connection closed by 10.200.16.10 port 46768 Dec 12 18:23:35.442188 sshd-session[6234]: pam_unix(sshd:session): session closed for user core Dec 12 18:23:35.443000 audit[6234]: USER_END pid=6234 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 18:23:35.449098 systemd[1]: sshd@14-10.200.8.12:22-10.200.16.10:46768.service: Deactivated successfully. Dec 12 18:23:35.453387 systemd[1]: session-17.scope: Deactivated successfully. Dec 12 18:23:35.455491 kernel: audit: type=1106 audit(1765563815.443:840): pid=6234 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 18:23:35.443000 audit[6234]: CRED_DISP pid=6234 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 18:23:35.460027 systemd-logind[2512]: Session 17 logged out. Waiting for processes to exit. Dec 12 18:23:35.462432 systemd-logind[2512]: Removed session 17. Dec 12 18:23:35.464617 kernel: audit: type=1104 audit(1765563815.443:841): pid=6234 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 18:23:35.450000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.200.8.12:22-10.200.16.10:46768 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 18:23:35.560000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.200.8.12:22-10.200.16.10:46772 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:23:35.560871 systemd[1]: Started sshd@15-10.200.8.12:22-10.200.16.10:46772.service - OpenSSH per-connection server daemon (10.200.16.10:46772). Dec 12 18:23:36.123000 audit[6249]: USER_ACCT pid=6249 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 18:23:36.124077 sshd[6249]: Accepted publickey for core from 10.200.16.10 port 46772 ssh2: RSA SHA256:MDSGev8JoWWrhohyx7j99tYqdskhx1insLzl+tdEp00 Dec 12 18:23:36.124000 audit[6249]: CRED_ACQ pid=6249 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 18:23:36.124000 audit[6249]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffab3c2120 a2=3 a3=0 items=0 ppid=1 pid=6249 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:23:36.124000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 18:23:36.125259 sshd-session[6249]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:23:36.129926 systemd-logind[2512]: New session 18 of user core. Dec 12 18:23:36.134677 systemd[1]: Started session-18.scope - Session 18 of User core. Dec 12 18:23:36.136000 audit[6249]: USER_START pid=6249 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 18:23:36.137000 audit[6252]: CRED_ACQ pid=6252 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 18:23:36.604826 sshd[6252]: Connection closed by 10.200.16.10 port 46772 Dec 12 18:23:36.606685 sshd-session[6249]: pam_unix(sshd:session): session closed for user core Dec 12 18:23:36.607000 audit[6249]: USER_END pid=6249 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 18:23:36.607000 audit[6249]: CRED_DISP pid=6249 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 18:23:36.609486 systemd[1]: sshd@15-10.200.8.12:22-10.200.16.10:46772.service: Deactivated successfully. 
Dec 12 18:23:36.609000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.200.8.12:22-10.200.16.10:46772 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:23:36.611605 systemd-logind[2512]: Session 18 logged out. Waiting for processes to exit. Dec 12 18:23:36.611715 systemd[1]: session-18.scope: Deactivated successfully. Dec 12 18:23:36.614254 systemd-logind[2512]: Removed session 18. Dec 12 18:23:36.717669 systemd[1]: Started sshd@16-10.200.8.12:22-10.200.16.10:46784.service - OpenSSH per-connection server daemon (10.200.16.10:46784). Dec 12 18:23:36.717000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.200.8.12:22-10.200.16.10:46784 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:23:37.265000 audit[6262]: USER_ACCT pid=6262 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 18:23:37.266588 sshd[6262]: Accepted publickey for core from 10.200.16.10 port 46784 ssh2: RSA SHA256:MDSGev8JoWWrhohyx7j99tYqdskhx1insLzl+tdEp00 Dec 12 18:23:37.268000 audit[6262]: CRED_ACQ pid=6262 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 18:23:37.268000 audit[6262]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcde74c030 a2=3 a3=0 items=0 ppid=1 pid=6262 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:23:37.268000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 18:23:37.269043 sshd-session[6262]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:23:37.274227 systemd-logind[2512]: New session 19 of user core. Dec 12 18:23:37.281890 systemd[1]: Started session-19.scope - Session 19 of User core. 
Dec 12 18:23:37.285000 audit[6262]: USER_START pid=6262 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 18:23:37.287000 audit[6279]: CRED_ACQ pid=6279 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 18:23:38.119000 audit[6289]: NETFILTER_CFG table=filter:145 family=2 entries=26 op=nft_register_rule pid=6289 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:23:38.119000 audit[6289]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7fff7278ad60 a2=0 a3=7fff7278ad4c items=0 ppid=4115 pid=6289 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:23:38.119000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:23:38.127000 audit[6289]: NETFILTER_CFG table=nat:146 family=2 entries=20 op=nft_register_rule pid=6289 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:23:38.127000 audit[6289]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7fff7278ad60 a2=0 a3=0 items=0 ppid=4115 pid=6289 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:23:38.127000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:23:38.144000 audit[6291]: NETFILTER_CFG table=filter:147 family=2 entries=38 op=nft_register_rule pid=6291 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:23:38.144000 audit[6291]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffd1fbfad90 a2=0 a3=7ffd1fbfad7c items=0 ppid=4115 pid=6291 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:23:38.144000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:23:38.148000 audit[6291]: NETFILTER_CFG table=nat:148 family=2 entries=20 op=nft_register_rule pid=6291 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:23:38.148000 audit[6291]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffd1fbfad90 a2=0 a3=0 items=0 ppid=4115 pid=6291 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:23:38.148000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:23:38.215438 sshd[6279]: Connection closed by 10.200.16.10 port 46784 Dec 12 18:23:38.215955 sshd-session[6262]: pam_unix(sshd:session): session closed for user core Dec 
12 18:23:38.216000 audit[6262]: USER_END pid=6262 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 18:23:38.216000 audit[6262]: CRED_DISP pid=6262 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 18:23:38.219631 systemd-logind[2512]: Session 19 logged out. Waiting for processes to exit. Dec 12 18:23:38.220137 systemd[1]: sshd@16-10.200.8.12:22-10.200.16.10:46784.service: Deactivated successfully. Dec 12 18:23:38.219000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.200.8.12:22-10.200.16.10:46784 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:23:38.221954 systemd[1]: session-19.scope: Deactivated successfully. Dec 12 18:23:38.224020 systemd-logind[2512]: Removed session 19. Dec 12 18:23:38.330369 systemd[1]: Started sshd@17-10.200.8.12:22-10.200.16.10:46786.service - OpenSSH per-connection server daemon (10.200.16.10:46786). Dec 12 18:23:38.330000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.200.8.12:22-10.200.16.10:46786 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:23:38.869079 sshd[6296]: Accepted publickey for core from 10.200.16.10 port 46786 ssh2: RSA SHA256:MDSGev8JoWWrhohyx7j99tYqdskhx1insLzl+tdEp00 Dec 12 18:23:38.868000 audit[6296]: USER_ACCT pid=6296 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 18:23:38.869000 audit[6296]: CRED_ACQ pid=6296 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 18:23:38.871950 sshd-session[6296]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:23:38.871000 audit[6296]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc79c31830 a2=3 a3=0 items=0 ppid=1 pid=6296 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:23:38.871000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 18:23:38.881175 systemd-logind[2512]: New session 20 of user core. Dec 12 18:23:38.885085 systemd[1]: Started session-20.scope - Session 20 of User core. 
Dec 12 18:23:38.888000 audit[6296]: USER_START pid=6296 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 18:23:38.892000 audit[6299]: CRED_ACQ pid=6299 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 18:23:39.312183 kubelet[4009]: E1212 18:23:39.312126 4009 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-4rvrk" podUID="985aeb6e-874b-4206-ac9b-71fb5aaf32cf" Dec 12 18:23:39.320912 sshd[6299]: Connection closed by 10.200.16.10 port 46786 Dec 12 18:23:39.322693 sshd-session[6296]: pam_unix(sshd:session): session closed for user core Dec 12 18:23:39.324000 audit[6296]: USER_END pid=6296 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 18:23:39.325000 audit[6296]: CRED_DISP pid=6296 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 18:23:39.329577 systemd-logind[2512]: Session 20 logged out. Waiting for processes to exit. Dec 12 18:23:39.329815 systemd[1]: sshd@17-10.200.8.12:22-10.200.16.10:46786.service: Deactivated successfully. Dec 12 18:23:39.329000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.200.8.12:22-10.200.16.10:46786 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:23:39.332069 systemd[1]: session-20.scope: Deactivated successfully. Dec 12 18:23:39.333798 systemd-logind[2512]: Removed session 20. Dec 12 18:23:39.435000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.200.8.12:22-10.200.16.10:46802 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:23:39.435844 systemd[1]: Started sshd@18-10.200.8.12:22-10.200.16.10:46802.service - OpenSSH per-connection server daemon (10.200.16.10:46802). 
Dec 12 18:23:39.977000 audit[6309]: USER_ACCT pid=6309 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 18:23:39.979852 kernel: kauditd_printk_skb: 47 callbacks suppressed Dec 12 18:23:39.979928 kernel: audit: type=1101 audit(1765563819.977:875): pid=6309 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 18:23:39.979952 sshd[6309]: Accepted publickey for core from 10.200.16.10 port 46802 ssh2: RSA SHA256:MDSGev8JoWWrhohyx7j99tYqdskhx1insLzl+tdEp00 Dec 12 18:23:39.982102 sshd-session[6309]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:23:39.979000 audit[6309]: CRED_ACQ pid=6309 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 18:23:39.991782 systemd-logind[2512]: New session 21 of user core. Dec 12 18:23:39.998577 kernel: audit: type=1103 audit(1765563819.979:876): pid=6309 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 18:23:40.001732 systemd[1]: Started session-21.scope - Session 21 of User core. 
Dec 12 18:23:40.007068 kernel: audit: type=1006 audit(1765563819.979:877): pid=6309 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=21 res=1 Dec 12 18:23:39.979000 audit[6309]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc2d3e1dd0 a2=3 a3=0 items=0 ppid=1 pid=6309 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:23:40.015544 kernel: audit: type=1300 audit(1765563819.979:877): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc2d3e1dd0 a2=3 a3=0 items=0 ppid=1 pid=6309 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:23:39.979000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 18:23:40.024084 kernel: audit: type=1327 audit(1765563819.979:877): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 18:23:40.024165 kernel: audit: type=1105 audit(1765563820.008:878): pid=6309 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 18:23:40.008000 audit[6309]: USER_START pid=6309 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 18:23:40.029100 kernel: audit: type=1103 audit(1765563820.015:879): pid=6312 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 18:23:40.015000 audit[6312]: CRED_ACQ pid=6312 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 18:23:40.355620 sshd[6312]: Connection closed by 10.200.16.10 port 46802 Dec 12 18:23:40.356201 sshd-session[6309]: pam_unix(sshd:session): session closed for user core Dec 12 18:23:40.357000 audit[6309]: USER_END pid=6309 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 18:23:40.367576 kernel: audit: type=1106 audit(1765563820.357:880): pid=6309 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 18:23:40.368309 systemd-logind[2512]: Session 21 logged out. Waiting for processes to exit. 
Dec 12 18:23:40.369099 systemd[1]: sshd@18-10.200.8.12:22-10.200.16.10:46802.service: Deactivated successfully. Dec 12 18:23:40.371898 systemd[1]: session-21.scope: Deactivated successfully. Dec 12 18:23:40.374381 systemd-logind[2512]: Removed session 21. Dec 12 18:23:40.358000 audit[6309]: CRED_DISP pid=6309 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 18:23:40.381775 kernel: audit: type=1104 audit(1765563820.358:881): pid=6309 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 18:23:40.368000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.200.8.12:22-10.200.16.10:46802 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:23:40.387538 kernel: audit: type=1131 audit(1765563820.368:882): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.200.8.12:22-10.200.16.10:46802 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:23:42.318764 containerd[2529]: time="2025-12-12T18:23:42.318719066Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 12 18:23:42.319283 kubelet[4009]: E1212 18:23:42.319245 4009 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-66465f8f84-gfntv" podUID="129d48cc-df60-49a9-8eb1-5bf2a56866a1" Dec 12 18:23:42.567466 containerd[2529]: time="2025-12-12T18:23:42.567413031Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:23:42.572468 containerd[2529]: time="2025-12-12T18:23:42.572382485Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 12 18:23:42.572746 containerd[2529]: time="2025-12-12T18:23:42.572466947Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 12 18:23:42.572815 kubelet[4009]: E1212 18:23:42.572646 4009 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 18:23:42.572815 kubelet[4009]: E1212 18:23:42.572801 4009 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 18:23:42.573283 kubelet[4009]: E1212 18:23:42.572990 4009 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pbzj2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-5c595dfbb8-snvpt_calico-system(cdef70f5-08e1-4939-ba2b-d4a667c25459): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 12 18:23:42.574297 kubelet[4009]: E1212 18:23:42.574170 4009 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" 
with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5c595dfbb8-snvpt" podUID="cdef70f5-08e1-4939-ba2b-d4a667c25459" Dec 12 18:23:43.706000 audit[6324]: NETFILTER_CFG table=filter:149 family=2 entries=26 op=nft_register_rule pid=6324 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:23:43.706000 audit[6324]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffee49b9e60 a2=0 a3=7ffee49b9e4c items=0 ppid=4115 pid=6324 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:23:43.706000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:23:43.712000 audit[6324]: NETFILTER_CFG table=nat:150 family=2 entries=104 op=nft_register_chain pid=6324 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:23:43.712000 audit[6324]: SYSCALL arch=c000003e syscall=46 success=yes exit=48684 a0=3 a1=7ffee49b9e60 a2=0 a3=7ffee49b9e4c items=0 ppid=4115 pid=6324 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:23:43.712000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:23:45.312273 kubelet[4009]: E1212 18:23:45.312167 4009 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74c7564cb4-4f4mj" podUID="c72f9f09-ad62-40d6-8632-b537f3032703" Dec 12 18:23:45.465000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.200.8.12:22-10.200.16.10:42864 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:23:45.466999 systemd[1]: Started sshd@19-10.200.8.12:22-10.200.16.10:42864.service - OpenSSH per-connection server daemon (10.200.16.10:42864). Dec 12 18:23:45.475212 kernel: kauditd_printk_skb: 6 callbacks suppressed Dec 12 18:23:45.475304 kernel: audit: type=1130 audit(1765563825.465:885): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.200.8.12:22-10.200.16.10:42864 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 18:23:46.005000 audit[6326]: USER_ACCT pid=6326 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 18:23:46.015362 kernel: audit: type=1101 audit(1765563826.005:886): pid=6326 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 18:23:46.015570 sshd[6326]: Accepted publickey for core from 10.200.16.10 port 42864 ssh2: RSA SHA256:MDSGev8JoWWrhohyx7j99tYqdskhx1insLzl+tdEp00 Dec 12 18:23:46.016470 sshd-session[6326]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:23:46.014000 audit[6326]: CRED_ACQ pid=6326 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 18:23:46.023983 kernel: audit: type=1103 audit(1765563826.014:887): pid=6326 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 18:23:46.026830 systemd-logind[2512]: New session 22 of user core. Dec 12 18:23:46.037143 kernel: audit: type=1006 audit(1765563826.014:888): pid=6326 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=22 res=1 Dec 12 18:23:46.037211 kernel: audit: type=1300 audit(1765563826.014:888): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd591665d0 a2=3 a3=0 items=0 ppid=1 pid=6326 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:23:46.014000 audit[6326]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd591665d0 a2=3 a3=0 items=0 ppid=1 pid=6326 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:23:46.037489 systemd[1]: Started session-22.scope - Session 22 of User core. 
Dec 12 18:23:46.014000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 18:23:46.043694 kernel: audit: type=1327 audit(1765563826.014:888): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 18:23:46.039000 audit[6326]: USER_START pid=6326 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 18:23:46.042000 audit[6329]: CRED_ACQ pid=6329 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 18:23:46.050397 kernel: audit: type=1105 audit(1765563826.039:889): pid=6326 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 18:23:46.050451 kernel: audit: type=1103 audit(1765563826.042:890): pid=6329 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 18:23:46.371688 sshd[6329]: Connection closed by 10.200.16.10 port 42864 Dec 12 18:23:46.372207 sshd-session[6326]: pam_unix(sshd:session): session closed for user core Dec 12 18:23:46.372000 audit[6326]: USER_END pid=6326 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 18:23:46.379494 systemd[1]: sshd@19-10.200.8.12:22-10.200.16.10:42864.service: Deactivated successfully. Dec 12 18:23:46.380540 kernel: audit: type=1106 audit(1765563826.372:891): pid=6326 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 18:23:46.372000 audit[6326]: CRED_DISP pid=6326 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 18:23:46.385969 systemd[1]: session-22.scope: Deactivated successfully. Dec 12 18:23:46.377000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.200.8.12:22-10.200.16.10:42864 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 18:23:46.387540 kernel: audit: type=1104 audit(1765563826.372:892): pid=6326 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 18:23:46.388143 systemd-logind[2512]: Session 22 logged out. Waiting for processes to exit. Dec 12 18:23:46.389566 systemd-logind[2512]: Removed session 22. Dec 12 18:23:47.312303 kubelet[4009]: E1212 18:23:47.312264 4009 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74c7564cb4-hnnls" podUID="588d2c87-beaf-4e83-ba6f-8e3f0d453589" Dec 12 18:23:50.317550 kubelet[4009]: E1212 18:23:50.317028 4009 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-bq297" podUID="10291b6f-a9ca-4c45-b211-06a17f4d693f" Dec 12 18:23:51.311960 kubelet[4009]: E1212 18:23:51.311916 4009 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-4rvrk" podUID="985aeb6e-874b-4206-ac9b-71fb5aaf32cf" Dec 12 18:23:51.483284 systemd[1]: Started sshd@20-10.200.8.12:22-10.200.16.10:54774.service - OpenSSH per-connection server daemon (10.200.16.10:54774). Dec 12 18:23:51.481000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.200.8.12:22-10.200.16.10:54774 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:23:51.485951 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 12 18:23:51.486209 kernel: audit: type=1130 audit(1765563831.481:894): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.200.8.12:22-10.200.16.10:54774 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 18:23:52.051000 audit[6341]: USER_ACCT pid=6341 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 18:23:52.056286 sshd[6341]: Accepted publickey for core from 10.200.16.10 port 54774 ssh2: RSA SHA256:MDSGev8JoWWrhohyx7j99tYqdskhx1insLzl+tdEp00 Dec 12 18:23:52.055000 audit[6341]: CRED_ACQ pid=6341 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 18:23:52.060725 kernel: audit: type=1101 audit(1765563832.051:895): pid=6341 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 18:23:52.060764 kernel: audit: type=1103 audit(1765563832.055:896): pid=6341 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 18:23:52.060314 sshd-session[6341]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:23:52.065546 kernel: audit: type=1006 audit(1765563832.055:897): pid=6341 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1 Dec 12 18:23:52.055000 audit[6341]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffef44b6b60 a2=3 a3=0 items=0 ppid=1 pid=6341 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:23:52.055000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 18:23:52.072255 kernel: audit: type=1300 audit(1765563832.055:897): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffef44b6b60 a2=3 a3=0 items=0 ppid=1 pid=6341 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:23:52.072364 kernel: audit: type=1327 audit(1765563832.055:897): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 18:23:52.073570 systemd-logind[2512]: New session 23 of user core. Dec 12 18:23:52.078729 systemd[1]: Started session-23.scope - Session 23 of User core. 
Dec 12 18:23:52.079000 audit[6341]: USER_START pid=6341 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 18:23:52.086564 kernel: audit: type=1105 audit(1765563832.079:898): pid=6341 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 18:23:52.085000 audit[6344]: CRED_ACQ pid=6344 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 18:23:52.091557 kernel: audit: type=1103 audit(1765563832.085:899): pid=6344 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 18:23:52.417142 sshd[6344]: Connection closed by 10.200.16.10 port 54774 Dec 12 18:23:52.418045 sshd-session[6341]: pam_unix(sshd:session): session closed for user core Dec 12 18:23:52.418000 audit[6341]: USER_END pid=6341 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 18:23:52.425580 kernel: audit: type=1106 audit(1765563832.418:900): pid=6341 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 18:23:52.428204 systemd[1]: sshd@20-10.200.8.12:22-10.200.16.10:54774.service: Deactivated successfully. Dec 12 18:23:52.418000 audit[6341]: CRED_DISP pid=6341 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 18:23:52.434555 kernel: audit: type=1104 audit(1765563832.418:901): pid=6341 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 18:23:52.427000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.200.8.12:22-10.200.16.10:54774 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:23:52.435420 systemd[1]: session-23.scope: Deactivated successfully. Dec 12 18:23:52.437489 systemd-logind[2512]: Session 23 logged out. Waiting for processes to exit. Dec 12 18:23:52.438309 systemd-logind[2512]: Removed session 23. 
Dec 12 18:23:54.315540 kubelet[4009]: E1212 18:23:54.315199 4009 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-66465f8f84-gfntv" podUID="129d48cc-df60-49a9-8eb1-5bf2a56866a1" Dec 12 18:23:57.312684 kubelet[4009]: E1212 18:23:57.312139 4009 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5c595dfbb8-snvpt" podUID="cdef70f5-08e1-4939-ba2b-d4a667c25459" Dec 12 18:23:57.312684 kubelet[4009]: E1212 18:23:57.312378 4009 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74c7564cb4-4f4mj" podUID="c72f9f09-ad62-40d6-8632-b537f3032703" Dec 12 18:23:57.532069 systemd[1]: Started sshd@21-10.200.8.12:22-10.200.16.10:54780.service - OpenSSH per-connection server daemon (10.200.16.10:54780). Dec 12 18:23:57.538553 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 12 18:23:57.538647 kernel: audit: type=1130 audit(1765563837.531:903): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.200.8.12:22-10.200.16.10:54780 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:23:57.531000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.200.8.12:22-10.200.16.10:54780 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 18:23:58.083000 audit[6358]: USER_ACCT pid=6358 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 18:23:58.089690 sshd[6358]: Accepted publickey for core from 10.200.16.10 port 54780 ssh2: RSA SHA256:MDSGev8JoWWrhohyx7j99tYqdskhx1insLzl+tdEp00 Dec 12 18:23:58.090553 kernel: audit: type=1101 audit(1765563838.083:904): pid=6358 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 18:23:58.091282 sshd-session[6358]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:23:58.090000 audit[6358]: CRED_ACQ pid=6358 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 18:23:58.102548 kernel: audit: type=1103 audit(1765563838.090:905): pid=6358 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 18:23:58.106321 systemd-logind[2512]: New session 24 of user core. Dec 12 18:23:58.112046 systemd[1]: Started session-24.scope - Session 24 of User core. Dec 12 18:23:58.112593 kernel: audit: type=1006 audit(1765563838.090:906): pid=6358 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1 Dec 12 18:23:58.090000 audit[6358]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffb5f865f0 a2=3 a3=0 items=0 ppid=1 pid=6358 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:23:58.119535 kernel: audit: type=1300 audit(1765563838.090:906): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffb5f865f0 a2=3 a3=0 items=0 ppid=1 pid=6358 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:23:58.090000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 18:23:58.120000 audit[6358]: USER_START pid=6358 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 18:23:58.126868 kernel: audit: type=1327 audit(1765563838.090:906): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 18:23:58.126923 kernel: audit: type=1105 audit(1765563838.120:907): pid=6358 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 
addr=10.200.16.10 terminal=ssh res=success' Dec 12 18:23:58.129000 audit[6361]: CRED_ACQ pid=6361 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 18:23:58.137994 kernel: audit: type=1103 audit(1765563838.129:908): pid=6361 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 18:23:58.466536 sshd[6361]: Connection closed by 10.200.16.10 port 54780 Dec 12 18:23:58.467075 sshd-session[6358]: pam_unix(sshd:session): session closed for user core Dec 12 18:23:58.467000 audit[6358]: USER_END pid=6358 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 18:23:58.471082 systemd[1]: sshd@21-10.200.8.12:22-10.200.16.10:54780.service: Deactivated successfully. Dec 12 18:23:58.474118 systemd[1]: session-24.scope: Deactivated successfully. Dec 12 18:23:58.475602 systemd-logind[2512]: Session 24 logged out. Waiting for processes to exit. Dec 12 18:23:58.468000 audit[6358]: CRED_DISP pid=6358 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 18:23:58.478450 systemd-logind[2512]: Removed session 24. Dec 12 18:23:58.481223 kernel: audit: type=1106 audit(1765563838.467:909): pid=6358 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 18:23:58.481348 kernel: audit: type=1104 audit(1765563838.468:910): pid=6358 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 18:23:58.470000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.200.8.12:22-10.200.16.10:54780 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 18:23:59.312356 kubelet[4009]: E1212 18:23:59.312310 4009 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74c7564cb4-hnnls" podUID="588d2c87-beaf-4e83-ba6f-8e3f0d453589" Dec 12 18:24:03.312982 kubelet[4009]: E1212 18:24:03.311822 4009 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-4rvrk" podUID="985aeb6e-874b-4206-ac9b-71fb5aaf32cf" Dec 12 18:24:03.313460 kubelet[4009]: E1212 18:24:03.313146 4009 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-bq297" podUID="10291b6f-a9ca-4c45-b211-06a17f4d693f" Dec 12 18:24:03.599963 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 12 18:24:03.600070 kernel: audit: type=1130 audit(1765563843.592:912): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.200.8.12:22-10.200.16.10:44536 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:24:03.592000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.200.8.12:22-10.200.16.10:44536 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:24:03.592996 systemd[1]: Started sshd@22-10.200.8.12:22-10.200.16.10:44536.service - OpenSSH per-connection server daemon (10.200.16.10:44536). 
Dec 12 18:24:04.140000 audit[6399]: USER_ACCT pid=6399 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 18:24:04.142812 sshd-session[6399]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:24:04.143775 sshd[6399]: Accepted publickey for core from 10.200.16.10 port 44536 ssh2: RSA SHA256:MDSGev8JoWWrhohyx7j99tYqdskhx1insLzl+tdEp00 Dec 12 18:24:04.148542 kernel: audit: type=1101 audit(1765563844.140:913): pid=6399 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 18:24:04.141000 audit[6399]: CRED_ACQ pid=6399 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 18:24:04.150796 systemd-logind[2512]: New session 25 of user core. Dec 12 18:24:04.155539 kernel: audit: type=1103 audit(1765563844.141:914): pid=6399 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 18:24:04.160535 kernel: audit: type=1006 audit(1765563844.141:915): pid=6399 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=25 res=1 Dec 12 18:24:04.161731 systemd[1]: Started session-25.scope - Session 25 of User core. 
Dec 12 18:24:04.141000 audit[6399]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd41278180 a2=3 a3=0 items=0 ppid=1 pid=6399 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:04.170542 kernel: audit: type=1300 audit(1765563844.141:915): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd41278180 a2=3 a3=0 items=0 ppid=1 pid=6399 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:04.141000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 18:24:04.173552 kernel: audit: type=1327 audit(1765563844.141:915): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 18:24:04.165000 audit[6399]: USER_START pid=6399 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 18:24:04.187544 kernel: audit: type=1105 audit(1765563844.165:916): pid=6399 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 18:24:04.171000 audit[6402]: CRED_ACQ pid=6402 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 18:24:04.200551 kernel: audit: type=1103 audit(1765563844.171:917): pid=6402 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 18:24:04.563337 sshd[6402]: Connection closed by 10.200.16.10 port 44536 Dec 12 18:24:04.565113 sshd-session[6399]: pam_unix(sshd:session): session closed for user core Dec 12 18:24:04.565000 audit[6399]: USER_END pid=6399 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 18:24:04.576547 kernel: audit: type=1106 audit(1765563844.565:918): pid=6399 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 18:24:04.579032 systemd[1]: sshd@22-10.200.8.12:22-10.200.16.10:44536.service: Deactivated successfully. 
Dec 12 18:24:04.565000 audit[6399]: CRED_DISP pid=6399 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 18:24:04.583624 systemd[1]: session-25.scope: Deactivated successfully. Dec 12 18:24:04.584777 systemd-logind[2512]: Session 25 logged out. Waiting for processes to exit. Dec 12 18:24:04.588838 systemd-logind[2512]: Removed session 25. Dec 12 18:24:04.589950 kernel: audit: type=1104 audit(1765563844.565:919): pid=6399 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 18:24:04.578000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.200.8.12:22-10.200.16.10:44536 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:24:08.313198 kubelet[4009]: E1212 18:24:08.312504 4009 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-66465f8f84-gfntv" podUID="129d48cc-df60-49a9-8eb1-5bf2a56866a1" Dec 12 18:24:09.311144 kubelet[4009]: E1212 18:24:09.311100 4009 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74c7564cb4-4f4mj" podUID="c72f9f09-ad62-40d6-8632-b537f3032703" Dec 12 18:24:09.684568 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 12 18:24:09.684680 kernel: audit: type=1130 audit(1765563849.676:921): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.200.8.12:22-10.200.16.10:44546 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:24:09.676000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.200.8.12:22-10.200.16.10:44546 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:24:09.677417 systemd[1]: Started sshd@23-10.200.8.12:22-10.200.16.10:44546.service - OpenSSH per-connection server daemon (10.200.16.10:44546). 
Dec 12 18:24:10.220000 audit[6414]: USER_ACCT pid=6414 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 18:24:10.221964 sshd[6414]: Accepted publickey for core from 10.200.16.10 port 44546 ssh2: RSA SHA256:MDSGev8JoWWrhohyx7j99tYqdskhx1insLzl+tdEp00 Dec 12 18:24:10.225584 sshd-session[6414]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:24:10.223000 audit[6414]: CRED_ACQ pid=6414 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 18:24:10.229406 kernel: audit: type=1101 audit(1765563850.220:922): pid=6414 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 18:24:10.229476 kernel: audit: type=1103 audit(1765563850.223:923): pid=6414 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 18:24:10.234105 kernel: audit: type=1006 audit(1765563850.223:924): pid=6414 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=26 res=1 Dec 12 18:24:10.234399 kernel: audit: type=1300 audit(1765563850.223:924): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd18cee9d0 a2=3 a3=0 items=0 ppid=1 pid=6414 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:10.223000 audit[6414]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd18cee9d0 a2=3 a3=0 items=0 ppid=1 pid=6414 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:10.237659 systemd-logind[2512]: New session 26 of user core. Dec 12 18:24:10.223000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 18:24:10.242553 kernel: audit: type=1327 audit(1765563850.223:924): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 18:24:10.245735 systemd[1]: Started session-26.scope - Session 26 of User core. 
Dec 12 18:24:10.246000 audit[6414]: USER_START pid=6414 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 18:24:10.254734 kernel: audit: type=1105 audit(1765563850.246:925): pid=6414 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 18:24:10.253000 audit[6417]: CRED_ACQ pid=6417 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 18:24:10.261582 kernel: audit: type=1103 audit(1765563850.253:926): pid=6417 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 18:24:10.647023 sshd[6417]: Connection closed by 10.200.16.10 port 44546 Dec 12 18:24:10.646791 sshd-session[6414]: pam_unix(sshd:session): session closed for user core Dec 12 18:24:10.647000 audit[6414]: USER_END pid=6414 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 18:24:10.652340 systemd[1]: sshd@23-10.200.8.12:22-10.200.16.10:44546.service: Deactivated successfully. Dec 12 18:24:10.655310 systemd[1]: session-26.scope: Deactivated successfully. Dec 12 18:24:10.659700 kernel: audit: type=1106 audit(1765563850.647:927): pid=6414 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 18:24:10.659765 kernel: audit: type=1104 audit(1765563850.647:928): pid=6414 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 18:24:10.647000 audit[6414]: CRED_DISP pid=6414 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 18:24:10.668426 systemd-logind[2512]: Session 26 logged out. Waiting for processes to exit. Dec 12 18:24:10.649000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.200.8.12:22-10.200.16.10:44546 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:24:10.669398 systemd-logind[2512]: Removed session 26. 
Dec 12 18:24:11.312306 kubelet[4009]: E1212 18:24:11.312263 4009 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5c595dfbb8-snvpt" podUID="cdef70f5-08e1-4939-ba2b-d4a667c25459" Dec 12 18:24:12.313095 kubelet[4009]: E1212 18:24:12.312700 4009 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74c7564cb4-hnnls" podUID="588d2c87-beaf-4e83-ba6f-8e3f0d453589"