Dec 16 13:03:26.785259 kernel: Linux version 6.12.61-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Fri Dec 12 15:17:57 -00 2025 Dec 16 13:03:26.785285 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=4dd8de2ff094d97322e7371b16ddee5fc8348868bcdd9ec7bcd11ea9d3933fee Dec 16 13:03:26.785295 kernel: BIOS-provided physical RAM map: Dec 16 13:03:26.785300 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Dec 16 13:03:26.785305 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved Dec 16 13:03:26.785310 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000044fdfff] usable Dec 16 13:03:26.785316 kernel: BIOS-e820: [mem 0x00000000044fe000-0x00000000048fdfff] reserved Dec 16 13:03:26.785321 kernel: BIOS-e820: [mem 0x00000000048fe000-0x000000003ff1efff] usable Dec 16 13:03:26.785325 kernel: BIOS-e820: [mem 0x000000003ff1f000-0x000000003ffc8fff] reserved Dec 16 13:03:26.785332 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data Dec 16 13:03:26.785377 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS Dec 16 13:03:26.785385 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable Dec 16 13:03:26.785392 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable Dec 16 13:03:26.785398 kernel: printk: legacy bootconsole [earlyser0] enabled Dec 16 13:03:26.785409 kernel: NX (Execute Disable) protection: active Dec 16 13:03:26.785417 kernel: APIC: Static calls initialized Dec 16 13:03:26.785425 kernel: efi: EFI v2.7 by Microsoft Dec 16 13:03:26.785433 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff88000 SMBIOS 3.0=0x3ff86000 MEMATTR=0x3eaa1018 RNG=0x3ffd2018 Dec 16 13:03:26.785442 kernel: random: crng init done Dec 16 13:03:26.785449 kernel: secureboot: Secure boot disabled Dec 16 13:03:26.785457 kernel: SMBIOS 3.1.0 present. 
Dec 16 13:03:26.785466 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 07/25/2025 Dec 16 13:03:26.785474 kernel: DMI: Memory slots populated: 2/2 Dec 16 13:03:26.785482 kernel: Hypervisor detected: Microsoft Hyper-V Dec 16 13:03:26.785492 kernel: Hyper-V: privilege flags low 0xae7f, high 0x3b8030, hints 0x9e4e24, misc 0xe0bed7b2 Dec 16 13:03:26.785499 kernel: Hyper-V: Nested features: 0x3e0101 Dec 16 13:03:26.785506 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40 Dec 16 13:03:26.785514 kernel: Hyper-V: Using hypercall for remote TLB flush Dec 16 13:03:26.785521 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Dec 16 13:03:26.785547 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Dec 16 13:03:26.785556 kernel: tsc: Detected 2300.000 MHz processor Dec 16 13:03:26.785575 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Dec 16 13:03:26.785585 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Dec 16 13:03:26.785595 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x10000000000 Dec 16 13:03:26.785606 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Dec 16 13:03:26.785615 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Dec 16 13:03:26.785625 kernel: e820: update [mem 0x48000000-0xffffffff] usable ==> reserved Dec 16 13:03:26.785633 kernel: last_pfn = 0x40000 max_arch_pfn = 0x10000000000 Dec 16 13:03:26.785641 kernel: Using GB pages for direct mapping Dec 16 13:03:26.785650 kernel: ACPI: Early table checksum verification disabled Dec 16 13:03:26.785662 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL) Dec 16 13:03:26.785671 kernel: ACPI: XSDT 0x000000003FFF90E8 00005C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 16 13:03:26.785680 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 16 13:03:26.785689 kernel: ACPI: DSDT 0x000000003FFD6000 01E22B (v02 MSFTVM DSDT01 00000001 INTL 20230628) Dec 16 13:03:26.785698 kernel: ACPI: FACS 0x000000003FFFE000 000040 Dec 16 13:03:26.785729 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 16 13:03:26.785741 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 16 13:03:26.785750 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 16 13:03:26.785760 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v05 HVLITE HVLITETB 00000000 MSHV 00000000) Dec 16 13:03:26.785769 kernel: ACPI: SRAT 0x000000003FFD4000 0000A0 (v03 HVLITE HVLITETB 00000000 MSHV 00000000) Dec 16 13:03:26.785778 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 16 13:03:26.785787 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113] Dec 16 13:03:26.785799 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff422a] Dec 16 13:03:26.785808 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f] Dec 16 13:03:26.785817 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063] Dec 16 13:03:26.785826 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f] Dec 16 13:03:26.785835 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027] Dec 16 13:03:26.785844 kernel: ACPI: Reserving APIC table memory at [mem 
0x3ffd5000-0x3ffd5057] Dec 16 13:03:26.785853 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd409f] Dec 16 13:03:26.785865 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037] Dec 16 13:03:26.785874 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] Dec 16 13:03:26.785883 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] Dec 16 13:03:26.785892 kernel: NUMA: Node 0 [mem 0x00001000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00001000-0x2bfffffff] Dec 16 13:03:26.785902 kernel: NODE_DATA(0) allocated [mem 0x2bfff6dc0-0x2bfffdfff] Dec 16 13:03:26.785911 kernel: Zone ranges: Dec 16 13:03:26.785920 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Dec 16 13:03:26.785931 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Dec 16 13:03:26.785940 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff] Dec 16 13:03:26.785949 kernel: Device empty Dec 16 13:03:26.785959 kernel: Movable zone start for each node Dec 16 13:03:26.785968 kernel: Early memory node ranges Dec 16 13:03:26.785977 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Dec 16 13:03:26.785986 kernel: node 0: [mem 0x0000000000100000-0x00000000044fdfff] Dec 16 13:03:26.785997 kernel: node 0: [mem 0x00000000048fe000-0x000000003ff1efff] Dec 16 13:03:26.786006 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff] Dec 16 13:03:26.786016 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff] Dec 16 13:03:26.786025 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff] Dec 16 13:03:26.786034 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Dec 16 13:03:26.786043 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Dec 16 13:03:26.786053 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges Dec 16 13:03:26.786064 kernel: On node 0, zone DMA32: 224 pages in unavailable ranges Dec 16 13:03:26.786073 kernel: ACPI: PM-Timer IO Port: 0x408 Dec 16 13:03:26.786082 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1]) Dec 16 13:03:26.786092 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Dec 16 13:03:26.786101 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Dec 16 13:03:26.786110 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Dec 16 13:03:26.786120 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200 Dec 16 13:03:26.786131 kernel: TSC deadline timer available Dec 16 13:03:26.786140 kernel: CPU topo: Max. logical packages: 1 Dec 16 13:03:26.786149 kernel: CPU topo: Max. logical dies: 1 Dec 16 13:03:26.786158 kernel: CPU topo: Max. dies per package: 1 Dec 16 13:03:26.786167 kernel: CPU topo: Max. threads per core: 2 Dec 16 13:03:26.786176 kernel: CPU topo: Num. cores per package: 1 Dec 16 13:03:26.786186 kernel: CPU topo: Num. 
threads per package: 2 Dec 16 13:03:26.786195 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs Dec 16 13:03:26.786206 kernel: [mem 0x40000000-0xffffffff] available for PCI devices Dec 16 13:03:26.786215 kernel: Booting paravirtualized kernel on Hyper-V Dec 16 13:03:26.786224 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Dec 16 13:03:26.786234 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Dec 16 13:03:26.786243 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576 Dec 16 13:03:26.786252 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152 Dec 16 13:03:26.786261 kernel: pcpu-alloc: [0] 0 1 Dec 16 13:03:26.786272 kernel: Hyper-V: PV spinlocks enabled Dec 16 13:03:26.786281 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Dec 16 13:03:26.786292 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=4dd8de2ff094d97322e7371b16ddee5fc8348868bcdd9ec7bcd11ea9d3933fee Dec 16 13:03:26.786302 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Dec 16 13:03:26.786312 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Dec 16 13:03:26.786321 kernel: Fallback order for Node 0: 0 Dec 16 13:03:26.786332 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2095807 Dec 16 13:03:26.786354 kernel: Policy zone: Normal Dec 16 13:03:26.786364 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Dec 16 13:03:26.786373 kernel: software IO TLB: area num 2. Dec 16 13:03:26.786382 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Dec 16 13:03:26.786392 kernel: ftrace: allocating 40103 entries in 157 pages Dec 16 13:03:26.786401 kernel: ftrace: allocated 157 pages with 5 groups Dec 16 13:03:26.786412 kernel: Dynamic Preempt: voluntary Dec 16 13:03:26.786421 kernel: rcu: Preemptible hierarchical RCU implementation. Dec 16 13:03:26.786432 kernel: rcu: RCU event tracing is enabled. Dec 16 13:03:26.786449 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Dec 16 13:03:26.786461 kernel: Trampoline variant of Tasks RCU enabled. Dec 16 13:03:26.786471 kernel: Rude variant of Tasks RCU enabled. Dec 16 13:03:26.786481 kernel: Tracing variant of Tasks RCU enabled. Dec 16 13:03:26.786491 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Dec 16 13:03:26.786502 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Dec 16 13:03:26.786511 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Dec 16 13:03:26.786523 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Dec 16 13:03:26.786533 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Dec 16 13:03:26.786542 kernel: Using NULL legacy PIC Dec 16 13:03:26.786553 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0 Dec 16 13:03:26.786563 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Dec 16 13:03:26.786573 kernel: Console: colour dummy device 80x25 Dec 16 13:03:26.786583 kernel: printk: legacy console [tty1] enabled Dec 16 13:03:26.786593 kernel: printk: legacy console [ttyS0] enabled Dec 16 13:03:26.786602 kernel: printk: legacy bootconsole [earlyser0] disabled Dec 16 13:03:26.786612 kernel: ACPI: Core revision 20240827 Dec 16 13:03:26.786622 kernel: Failed to register legacy timer interrupt Dec 16 13:03:26.786634 kernel: APIC: Switch to symmetric I/O mode setup Dec 16 13:03:26.786643 kernel: x2apic enabled Dec 16 13:03:26.786653 kernel: APIC: Switched APIC routing to: physical x2apic Dec 16 13:03:26.786663 kernel: Hyper-V: Host Build 10.0.26100.1448-1-0 Dec 16 13:03:26.786673 kernel: Hyper-V: enabling crash_kexec_post_notifiers Dec 16 13:03:26.786683 kernel: Hyper-V: Disabling IBT because of Hyper-V bug Dec 16 13:03:26.786693 kernel: Hyper-V: Using IPI hypercalls Dec 16 13:03:26.786705 kernel: APIC: send_IPI() replaced with hv_send_ipi() Dec 16 13:03:26.786715 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask() Dec 16 13:03:26.786726 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself() Dec 16 13:03:26.786735 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself() Dec 16 13:03:26.786744 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all() Dec 16 13:03:26.786754 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self() Dec 16 13:03:26.786764 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212735223b2, max_idle_ns: 440795277976 ns Dec 16 13:03:26.786776 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 4600.00 BogoMIPS (lpj=2300000) Dec 16 13:03:26.786786 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Dec 16 13:03:26.786795 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Dec 16 13:03:26.786805 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Dec 16 13:03:26.786814 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Dec 16 13:03:26.786823 kernel: Spectre V2 : Mitigation: Retpolines Dec 16 13:03:26.786832 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Dec 16 13:03:26.786841 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! 
Dec 16 13:03:26.786853 kernel: RETBleed: Vulnerable Dec 16 13:03:26.786862 kernel: Speculative Store Bypass: Vulnerable Dec 16 13:03:26.786871 kernel: active return thunk: its_return_thunk Dec 16 13:03:26.786880 kernel: ITS: Mitigation: Aligned branch/return thunks Dec 16 13:03:26.786888 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Dec 16 13:03:26.786897 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Dec 16 13:03:26.786905 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Dec 16 13:03:26.786915 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Dec 16 13:03:26.786925 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Dec 16 13:03:26.786934 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Dec 16 13:03:26.786946 kernel: x86/fpu: Supporting XSAVE feature 0x800: 'Control-flow User registers' Dec 16 13:03:26.786956 kernel: x86/fpu: Supporting XSAVE feature 0x20000: 'AMX Tile config' Dec 16 13:03:26.786966 kernel: x86/fpu: Supporting XSAVE feature 0x40000: 'AMX Tile data' Dec 16 13:03:26.786976 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Dec 16 13:03:26.786985 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64 Dec 16 13:03:26.786995 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512 Dec 16 13:03:26.787005 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024 Dec 16 13:03:26.787014 kernel: x86/fpu: xstate_offset[11]: 2432, xstate_sizes[11]: 16 Dec 16 13:03:26.787024 kernel: x86/fpu: xstate_offset[17]: 2496, xstate_sizes[17]: 64 Dec 16 13:03:26.787034 kernel: x86/fpu: xstate_offset[18]: 2560, xstate_sizes[18]: 8192 Dec 16 13:03:26.787044 kernel: x86/fpu: Enabled xstate features 0x608e7, context size is 10752 bytes, using 'compacted' format. Dec 16 13:03:26.787055 kernel: Freeing SMP alternatives memory: 32K Dec 16 13:03:26.787065 kernel: pid_max: default: 32768 minimum: 301 Dec 16 13:03:26.787075 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Dec 16 13:03:26.787085 kernel: landlock: Up and running. Dec 16 13:03:26.787095 kernel: SELinux: Initializing. Dec 16 13:03:26.787104 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Dec 16 13:03:26.787114 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Dec 16 13:03:26.787124 kernel: smpboot: CPU0: Intel INTEL(R) XEON(R) PLATINUM 8573C (family: 0x6, model: 0xcf, stepping: 0x2) Dec 16 13:03:26.787134 kernel: Performance Events: unsupported p6 CPU model 207 no PMU driver, software events only. Dec 16 13:03:26.787144 kernel: signal: max sigframe size: 11952 Dec 16 13:03:26.787157 kernel: rcu: Hierarchical SRCU implementation. Dec 16 13:03:26.787167 kernel: rcu: Max phase no-delay instances is 400. Dec 16 13:03:26.787177 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Dec 16 13:03:26.787188 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Dec 16 13:03:26.787198 kernel: smp: Bringing up secondary CPUs ... Dec 16 13:03:26.787208 kernel: smpboot: x86: Booting SMP configuration: Dec 16 13:03:26.787218 kernel: .... 
node #0, CPUs: #1 Dec 16 13:03:26.787231 kernel: smp: Brought up 1 node, 2 CPUs Dec 16 13:03:26.787241 kernel: smpboot: Total of 2 processors activated (9200.00 BogoMIPS) Dec 16 13:03:26.787252 kernel: Memory: 8095704K/8383228K available (14336K kernel code, 2444K rwdata, 29892K rodata, 15464K init, 2576K bss, 281564K reserved, 0K cma-reserved) Dec 16 13:03:26.787263 kernel: devtmpfs: initialized Dec 16 13:03:26.787273 kernel: x86/mm: Memory block size: 128MB Dec 16 13:03:26.787283 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes) Dec 16 13:03:26.787293 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Dec 16 13:03:26.787306 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Dec 16 13:03:26.787316 kernel: pinctrl core: initialized pinctrl subsystem Dec 16 13:03:26.787326 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Dec 16 13:03:26.787349 kernel: audit: initializing netlink subsys (disabled) Dec 16 13:03:26.787367 kernel: audit: type=2000 audit(1765890201.106:1): state=initialized audit_enabled=0 res=1 Dec 16 13:03:26.787376 kernel: thermal_sys: Registered thermal governor 'step_wise' Dec 16 13:03:26.787385 kernel: thermal_sys: Registered thermal governor 'user_space' Dec 16 13:03:26.787396 kernel: cpuidle: using governor menu Dec 16 13:03:26.787404 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Dec 16 13:03:26.787412 kernel: dca service started, version 1.12.1 Dec 16 13:03:26.787422 kernel: e820: reserve RAM buffer [mem 0x044fe000-0x07ffffff] Dec 16 13:03:26.787429 kernel: e820: reserve RAM buffer [mem 0x3ff1f000-0x3fffffff] Dec 16 13:03:26.787438 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Dec 16 13:03:26.787447 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Dec 16 13:03:26.787458 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Dec 16 13:03:26.787466 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Dec 16 13:03:26.787476 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Dec 16 13:03:26.787486 kernel: ACPI: Added _OSI(Module Device) Dec 16 13:03:26.787495 kernel: ACPI: Added _OSI(Processor Device) Dec 16 13:03:26.787504 kernel: ACPI: Added _OSI(Processor Aggregator Device) Dec 16 13:03:26.787514 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Dec 16 13:03:26.787525 kernel: ACPI: Interpreter enabled Dec 16 13:03:26.787534 kernel: ACPI: PM: (supports S0 S5) Dec 16 13:03:26.787544 kernel: ACPI: Using IOAPIC for interrupt routing Dec 16 13:03:26.787554 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Dec 16 13:03:26.787565 kernel: PCI: Ignoring E820 reservations for host bridge windows Dec 16 13:03:26.787575 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F Dec 16 13:03:26.787585 kernel: iommu: Default domain type: Translated Dec 16 13:03:26.787595 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Dec 16 13:03:26.787606 kernel: efivars: Registered efivars operations Dec 16 13:03:26.787616 kernel: PCI: Using ACPI for IRQ routing Dec 16 13:03:26.787627 kernel: PCI: System does not support PCI Dec 16 13:03:26.787636 kernel: vgaarb: loaded Dec 16 13:03:26.787644 kernel: clocksource: Switched to clocksource tsc-early Dec 16 13:03:26.787653 kernel: VFS: Disk quotas dquot_6.6.0 Dec 16 13:03:26.787661 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Dec 16 13:03:26.787671 kernel: pnp: PnP ACPI init Dec 16 13:03:26.787679 kernel: pnp: PnP ACPI: found 3 devices Dec 16 13:03:26.787688 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Dec 16 13:03:26.787698 kernel: NET: Registered PF_INET protocol family Dec 16 13:03:26.787707 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Dec 16 13:03:26.787716 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Dec 16 13:03:26.787726 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Dec 16 13:03:26.787738 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Dec 16 13:03:26.787747 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Dec 16 13:03:26.787755 kernel: TCP: Hash tables configured (established 65536 bind 65536) Dec 16 13:03:26.787764 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Dec 16 13:03:26.787774 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Dec 16 13:03:26.787784 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Dec 16 13:03:26.787795 kernel: NET: Registered PF_XDP protocol family Dec 16 13:03:26.787813 kernel: PCI: CLS 0 bytes, default 64 Dec 16 13:03:26.787822 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Dec 16 13:03:26.787831 kernel: software IO TLB: mapped [mem 0x000000003a9ba000-0x000000003e9ba000] (64MB) Dec 16 13:03:26.787840 kernel: RAPL PMU: API unit is 2^-32 Joules, 1 fixed counters, 10737418240 ms ovfl timer Dec 16 13:03:26.787850 kernel: RAPL PMU: hw unit of domain psys 2^-0 Joules Dec 16 13:03:26.787859 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212735223b2, 
max_idle_ns: 440795277976 ns Dec 16 13:03:26.787869 kernel: clocksource: Switched to clocksource tsc Dec 16 13:03:26.787880 kernel: Initialise system trusted keyrings Dec 16 13:03:26.787889 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Dec 16 13:03:26.787898 kernel: Key type asymmetric registered Dec 16 13:03:26.787907 kernel: Asymmetric key parser 'x509' registered Dec 16 13:03:26.787915 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Dec 16 13:03:26.787925 kernel: io scheduler mq-deadline registered Dec 16 13:03:26.787934 kernel: io scheduler kyber registered Dec 16 13:03:26.787944 kernel: io scheduler bfq registered Dec 16 13:03:26.787954 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Dec 16 13:03:26.787963 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Dec 16 13:03:26.787973 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Dec 16 13:03:26.787984 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Dec 16 13:03:26.787994 kernel: serial8250: ttyS2 at I/O 0x3e8 (irq = 4, base_baud = 115200) is a 16550A Dec 16 13:03:26.788003 kernel: i8042: PNP: No PS/2 controller found. Dec 16 13:03:26.788180 kernel: rtc_cmos 00:02: registered as rtc0 Dec 16 13:03:26.788287 kernel: rtc_cmos 00:02: setting system clock to 2025-12-16T13:03:23 UTC (1765890203) Dec 16 13:03:26.788438 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram Dec 16 13:03:26.788453 kernel: intel_pstate: Intel P-state driver initializing Dec 16 13:03:26.788463 kernel: efifb: probing for efifb Dec 16 13:03:26.788474 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Dec 16 13:03:26.788488 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Dec 16 13:03:26.788499 kernel: efifb: scrolling: redraw Dec 16 13:03:26.788509 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Dec 16 13:03:26.788521 kernel: Console: switching to colour frame buffer device 128x48 Dec 16 13:03:26.788532 kernel: fb0: EFI VGA frame buffer device Dec 16 13:03:26.788543 kernel: pstore: Using crash dump compression: deflate Dec 16 13:03:26.788553 kernel: pstore: Registered efi_pstore as persistent store backend Dec 16 13:03:26.788566 kernel: NET: Registered PF_INET6 protocol family Dec 16 13:03:26.788577 kernel: Segment Routing with IPv6 Dec 16 13:03:26.788586 kernel: In-situ OAM (IOAM) with IPv6 Dec 16 13:03:26.788597 kernel: NET: Registered PF_PACKET protocol family Dec 16 13:03:26.788607 kernel: Key type dns_resolver registered Dec 16 13:03:26.788617 kernel: IPI shorthand broadcast: enabled Dec 16 13:03:26.788627 kernel: sched_clock: Marking stable (2078004619, 89777691)->(2471860286, -304077976) Dec 16 13:03:26.788638 kernel: registered taskstats version 1 Dec 16 13:03:26.788649 kernel: Loading compiled-in X.509 certificates Dec 16 13:03:26.788660 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.61-flatcar: b90706f42f055ab9f35fc8fc29156d877adb12c4' Dec 16 13:03:26.788670 kernel: Demotion targets for Node 0: null Dec 16 13:03:26.788680 kernel: Key type .fscrypt registered Dec 16 13:03:26.788690 kernel: Key type fscrypt-provisioning registered Dec 16 13:03:26.788699 kernel: ima: No TPM chip found, activating TPM-bypass! 
Dec 16 13:03:26.788708 kernel: ima: Allocated hash algorithm: sha1 Dec 16 13:03:26.788720 kernel: ima: No architecture policies found Dec 16 13:03:26.788730 kernel: clk: Disabling unused clocks Dec 16 13:03:26.788740 kernel: Freeing unused kernel image (initmem) memory: 15464K Dec 16 13:03:26.788750 kernel: Write protecting the kernel read-only data: 45056k Dec 16 13:03:26.788761 kernel: Freeing unused kernel image (rodata/data gap) memory: 828K Dec 16 13:03:26.788771 kernel: Run /init as init process Dec 16 13:03:26.788781 kernel: with arguments: Dec 16 13:03:26.788793 kernel: /init Dec 16 13:03:26.788802 kernel: with environment: Dec 16 13:03:26.788812 kernel: HOME=/ Dec 16 13:03:26.788822 kernel: TERM=linux Dec 16 13:03:26.788832 kernel: hv_vmbus: Vmbus version:5.3 Dec 16 13:03:26.788842 kernel: pps_core: LinuxPPS API ver. 1 registered Dec 16 13:03:26.788852 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Dec 16 13:03:26.788865 kernel: PTP clock support registered Dec 16 13:03:26.788875 kernel: hv_utils: Registering HyperV Utility Driver Dec 16 13:03:26.788886 kernel: hv_vmbus: registering driver hv_utils Dec 16 13:03:26.788896 kernel: hv_utils: Shutdown IC version 3.2 Dec 16 13:03:26.788907 kernel: hv_utils: Heartbeat IC version 3.0 Dec 16 13:03:26.788917 kernel: hv_utils: TimeSync IC version 4.0 Dec 16 13:03:26.788927 kernel: SCSI subsystem initialized Dec 16 13:03:26.788937 kernel: hv_vmbus: registering driver hv_pci Dec 16 13:03:26.789117 kernel: hv_pci 7ad35d50-c05b-47ab-b3a0-56a9a845852b: PCI VMBus probing: Using version 0x10004 Dec 16 13:03:26.789239 kernel: hv_pci 7ad35d50-c05b-47ab-b3a0-56a9a845852b: PCI host bridge to bus c05b:00 Dec 16 13:03:26.789462 kernel: pci_bus c05b:00: root bus resource [mem 0xfc0000000-0xfc007ffff window] Dec 16 13:03:26.789580 kernel: pci_bus c05b:00: No busn resource found for root bus, will use [bus 00-ff] Dec 16 13:03:26.789736 kernel: pci c05b:00:00.0: [1414:00a9] type 00 class 0x010802 PCIe Endpoint Dec 16 13:03:26.789859 kernel: pci c05b:00:00.0: BAR 0 [mem 0xfc0000000-0xfc007ffff 64bit] Dec 16 13:03:26.789978 kernel: pci_bus c05b:00: busn_res: [bus 00-ff] end is updated to 00 Dec 16 13:03:26.790104 kernel: pci c05b:00:00.0: BAR 0 [mem 0xfc0000000-0xfc007ffff 64bit]: assigned Dec 16 13:03:26.790116 kernel: hv_vmbus: registering driver hv_storvsc Dec 16 13:03:26.790257 kernel: scsi host0: storvsc_host_t Dec 16 13:03:26.790419 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5 Dec 16 13:03:26.790434 kernel: hid: raw HID events driver (C) Jiri Kosina Dec 16 13:03:26.790445 kernel: hv_vmbus: registering driver hid_hyperv Dec 16 13:03:26.790455 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0 Dec 16 13:03:26.790581 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Dec 16 13:03:26.790595 kernel: hv_vmbus: registering driver hyperv_keyboard Dec 16 13:03:26.790609 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1 Dec 16 13:03:26.790723 kernel: nvme nvme0: pci function c05b:00:00.0 Dec 16 13:03:26.790856 kernel: nvme c05b:00:00.0: enabling device (0000 -> 0002) Dec 16 13:03:26.790950 kernel: nvme nvme0: 2/0/0 default/read/poll queues Dec 16 13:03:26.790962 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Dec 16 13:03:26.791089 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Dec 16 13:03:26.791101 kernel: cdrom: 
Uniform CD-ROM driver Revision: 3.20 Dec 16 13:03:26.791223 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Dec 16 13:03:26.791236 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Dec 16 13:03:26.791246 kernel: device-mapper: uevent: version 1.0.3 Dec 16 13:03:26.791256 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Dec 16 13:03:26.791266 kernel: device-mapper: verity: sha256 using shash "sha256-generic" Dec 16 13:03:26.791293 kernel: raid6: avx512x4 gen() 32925 MB/s Dec 16 13:03:26.791305 kernel: raid6: avx512x2 gen() 31850 MB/s Dec 16 13:03:26.791315 kernel: raid6: avx512x1 gen() 25595 MB/s Dec 16 13:03:26.791325 kernel: raid6: avx2x4 gen() 29305 MB/s Dec 16 13:03:26.791333 kernel: raid6: avx2x2 gen() 31757 MB/s Dec 16 13:03:26.791353 kernel: raid6: avx2x1 gen() 18895 MB/s Dec 16 13:03:26.791363 kernel: raid6: using algorithm avx512x4 gen() 32925 MB/s Dec 16 13:03:26.791375 kernel: raid6: .... xor() 5398 MB/s, rmw enabled Dec 16 13:03:26.791385 kernel: raid6: using avx512x2 recovery algorithm Dec 16 13:03:26.791395 kernel: xor: automatically using best checksumming function avx Dec 16 13:03:26.791405 kernel: Btrfs loaded, zoned=no, fsverity=no Dec 16 13:03:26.791414 kernel: BTRFS: device fsid ea73a94a-fb20-4d45-8448-4c6f4c422a4f devid 1 transid 35 /dev/mapper/usr (254:0) scanned by mount (915) Dec 16 13:03:26.791425 kernel: BTRFS info (device dm-0): first mount of filesystem ea73a94a-fb20-4d45-8448-4c6f4c422a4f Dec 16 13:03:26.791435 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Dec 16 13:03:26.791446 kernel: BTRFS info (device dm-0): enabling ssd optimizations Dec 16 13:03:26.791456 kernel: BTRFS info (device dm-0): disabling log replay at mount time Dec 16 13:03:26.791466 kernel: BTRFS info (device dm-0): enabling free space tree Dec 16 13:03:26.791476 kernel: loop: module loaded Dec 16 13:03:26.791486 kernel: loop0: detected capacity change from 0 to 100136 Dec 16 13:03:26.791494 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 16 13:03:26.791505 systemd[1]: Successfully made /usr/ read-only. Dec 16 13:03:26.791520 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Dec 16 13:03:26.791530 systemd[1]: Detected virtualization microsoft. Dec 16 13:03:26.791541 systemd[1]: Detected architecture x86-64. Dec 16 13:03:26.791550 systemd[1]: Running in initrd. Dec 16 13:03:26.791563 systemd[1]: No hostname configured, using default hostname. Dec 16 13:03:26.791573 systemd[1]: Hostname set to <localhost>. Dec 16 13:03:26.791585 systemd[1]: Initializing machine ID from random generator. Dec 16 13:03:26.791595 systemd[1]: Queued start job for default target initrd.target. Dec 16 13:03:26.791606 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Dec 16 13:03:26.791617 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 16 13:03:26.791627 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Dec 16 13:03:26.791639 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Dec 16 13:03:26.791652 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 16 13:03:26.791664 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Dec 16 13:03:26.791676 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Dec 16 13:03:26.791694 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 16 13:03:26.791707 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 16 13:03:26.791717 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Dec 16 13:03:26.791728 systemd[1]: Reached target paths.target - Path Units. Dec 16 13:03:26.791739 systemd[1]: Reached target slices.target - Slice Units. Dec 16 13:03:26.791750 systemd[1]: Reached target swap.target - Swaps. Dec 16 13:03:26.791760 systemd[1]: Reached target timers.target - Timer Units. Dec 16 13:03:26.791773 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Dec 16 13:03:26.791785 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 16 13:03:26.791796 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Dec 16 13:03:26.791807 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Dec 16 13:03:26.791818 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Dec 16 13:03:26.791829 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 16 13:03:26.791840 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 16 13:03:26.791856 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 16 13:03:26.791867 systemd[1]: Reached target sockets.target - Socket Units. Dec 16 13:03:26.791879 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Dec 16 13:03:26.791891 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Dec 16 13:03:26.791903 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 16 13:03:26.791913 systemd[1]: Finished network-cleanup.service - Network Cleanup. Dec 16 13:03:26.791924 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Dec 16 13:03:26.791939 systemd[1]: Starting systemd-fsck-usr.service... Dec 16 13:03:26.791950 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 16 13:03:26.791961 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 16 13:03:26.791973 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 13:03:26.792011 systemd-journald[1050]: Collecting audit messages is enabled. Dec 16 13:03:26.792039 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Dec 16 13:03:26.792054 systemd-journald[1050]: Journal started Dec 16 13:03:26.792079 systemd-journald[1050]: Runtime Journal (/run/log/journal/55c68710005948bdb671ab55bc796970) is 8M, max 158.5M, 150.5M free. 
Dec 16 13:03:26.795000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:03:26.802680 kernel: audit: type=1130 audit(1765890206.795:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:03:26.802721 systemd[1]: Started systemd-journald.service - Journal Service. Dec 16 13:03:26.804000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:03:26.806035 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 16 13:03:26.822575 kernel: audit: type=1130 audit(1765890206.804:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:03:26.822599 kernel: audit: type=1130 audit(1765890206.810:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:03:26.822611 kernel: audit: type=1130 audit(1765890206.816:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:03:26.810000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:03:26.816000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:03:26.811764 systemd[1]: Finished systemd-fsck-usr.service. Dec 16 13:03:26.824831 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 16 13:03:26.830998 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 16 13:03:26.878370 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Dec 16 13:03:26.910631 systemd-tmpfiles[1065]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Dec 16 13:03:26.914782 kernel: Bridge firewalling registered Dec 16 13:03:26.914239 systemd-modules-load[1055]: Inserted module 'br_netfilter' Dec 16 13:03:26.916251 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 16 13:03:26.918000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:03:26.922958 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 16 13:03:26.923380 kernel: audit: type=1130 audit(1765890206.918:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 13:03:26.922000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:03:26.929364 kernel: audit: type=1130 audit(1765890206.922:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:03:26.928776 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 16 13:03:26.927000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:03:26.932353 kernel: audit: type=1130 audit(1765890206.927:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:03:26.934460 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 16 13:03:26.935307 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 16 13:03:26.961897 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 16 13:03:26.961000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:03:26.966356 kernel: audit: type=1130 audit(1765890206.961:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:03:27.005713 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 16 13:03:27.005000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:03:27.014335 kernel: audit: type=1130 audit(1765890207.005:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:03:27.014385 kernel: audit: type=1334 audit(1765890207.006:11): prog-id=6 op=LOAD Dec 16 13:03:27.006000 audit: BPF prog-id=6 op=LOAD Dec 16 13:03:27.013825 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 16 13:03:27.037398 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 13:03:27.039000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:03:27.043633 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 16 13:03:27.050464 kernel: audit: type=1130 audit(1765890207.039:12): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:03:27.120530 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Dec 16 13:03:27.127477 kernel: audit: type=1130 audit(1765890207.120:13): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:03:27.120000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:03:27.129608 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Dec 16 13:03:27.216912 systemd-resolved[1078]: Positive Trust Anchors: Dec 16 13:03:27.216933 systemd-resolved[1078]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 16 13:03:27.216938 systemd-resolved[1078]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Dec 16 13:03:27.216980 systemd-resolved[1078]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 16 13:03:27.256316 dracut-cmdline[1093]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=4dd8de2ff094d97322e7371b16ddee5fc8348868bcdd9ec7bcd11ea9d3933fee Dec 16 13:03:27.285596 systemd-resolved[1078]: Defaulting to hostname 'linux'. Dec 16 13:03:27.288073 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 16 13:03:27.289000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:03:27.290924 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 16 13:03:27.306395 kernel: audit: type=1130 audit(1765890207.289:14): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:03:27.357354 kernel: Loading iSCSI transport class v2.0-870. Dec 16 13:03:27.422359 kernel: iscsi: registered transport (tcp) Dec 16 13:03:27.475583 kernel: iscsi: registered transport (qla4xxx) Dec 16 13:03:27.475637 kernel: QLogic iSCSI HBA Driver Dec 16 13:03:27.524752 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 16 13:03:27.538414 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 16 13:03:27.545439 kernel: audit: type=1130 audit(1765890207.537:15): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 13:03:27.537000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:03:27.544898 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 16 13:03:27.578615 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Dec 16 13:03:27.579000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:03:27.582477 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Dec 16 13:03:27.588451 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Dec 16 13:03:27.615698 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Dec 16 13:03:27.619000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:03:27.619000 audit: BPF prog-id=7 op=LOAD Dec 16 13:03:27.619000 audit: BPF prog-id=8 op=LOAD Dec 16 13:03:27.621553 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 16 13:03:27.653059 systemd-udevd[1330]: Using default interface naming scheme 'v257'. Dec 16 13:03:27.664998 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 16 13:03:27.667000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:03:27.669288 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Dec 16 13:03:27.677589 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 16 13:03:27.679000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:03:27.682000 audit: BPF prog-id=9 op=LOAD Dec 16 13:03:27.684444 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 16 13:03:27.711663 dracut-pre-trigger[1423]: rd.md=0: removing MD RAID activation Dec 16 13:03:27.736177 systemd-networkd[1431]: lo: Link UP Dec 16 13:03:27.736183 systemd-networkd[1431]: lo: Gained carrier Dec 16 13:03:27.737623 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 16 13:03:27.739000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:03:27.740955 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Dec 16 13:03:27.742000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:03:27.746542 systemd[1]: Reached target network.target - Network. Dec 16 13:03:27.749526 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 16 13:03:27.797714 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. 
Dec 16 13:03:27.802000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:03:27.806832 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Dec 16 13:03:27.871014 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 16 13:03:27.871201 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 13:03:27.874000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:03:27.875437 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 13:03:27.882691 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 13:03:27.896361 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#220 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Dec 16 13:03:27.919565 kernel: hv_vmbus: registering driver hv_netvsc Dec 16 13:03:27.932432 kernel: hv_netvsc f8615163-0000-1000-2000-7ced8db7e167 (unnamed net_device) (uninitialized): VF slot 1 added Dec 16 13:03:27.953371 kernel: cryptd: max_cpu_qlen set to 1000 Dec 16 13:03:27.954946 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 13:03:27.957000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:03:27.961579 systemd-networkd[1431]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 16 13:03:27.962629 systemd-networkd[1431]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 16 13:03:27.964320 systemd-networkd[1431]: eth0: Link UP Dec 16 13:03:27.964464 systemd-networkd[1431]: eth0: Gained carrier Dec 16 13:03:27.964480 systemd-networkd[1431]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 16 13:03:27.986410 systemd-networkd[1431]: eth0: DHCPv4 address 10.200.4.43/24, gateway 10.200.4.1 acquired from 168.63.129.16 Dec 16 13:03:28.010360 kernel: AES CTR mode by8 optimization enabled Dec 16 13:03:28.120364 kernel: nvme nvme0: using unchecked data buffer Dec 16 13:03:28.220432 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - MSFT NVMe Accelerator v1.0 USR-A. Dec 16 13:03:28.224551 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Dec 16 13:03:28.336593 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - MSFT NVMe Accelerator v1.0 OEM. Dec 16 13:03:28.347535 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - MSFT NVMe Accelerator v1.0 EFI-SYSTEM. Dec 16 13:03:28.388406 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - MSFT NVMe Accelerator v1.0 ROOT. Dec 16 13:03:28.459176 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Dec 16 13:03:28.459000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:03:28.461791 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. 
Dec 16 13:03:28.467409 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 16 13:03:28.470266 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 16 13:03:28.527119 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Dec 16 13:03:28.547117 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Dec 16 13:03:28.545000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:03:28.952258 kernel: hv_pci 00000001-7870-47b5-b203-907d12ca697e: PCI VMBus probing: Using version 0x10004 Dec 16 13:03:28.952614 kernel: hv_pci 00000001-7870-47b5-b203-907d12ca697e: PCI host bridge to bus 7870:00 Dec 16 13:03:28.954827 kernel: pci_bus 7870:00: root bus resource [mem 0xfc2000000-0xfc4007fff window] Dec 16 13:03:28.956124 kernel: pci_bus 7870:00: No busn resource found for root bus, will use [bus 00-ff] Dec 16 13:03:28.961588 kernel: pci 7870:00:00.0: [1414:00ba] type 00 class 0x020000 PCIe Endpoint Dec 16 13:03:28.965497 kernel: pci 7870:00:00.0: BAR 0 [mem 0xfc2000000-0xfc3ffffff 64bit pref] Dec 16 13:03:28.970356 kernel: pci 7870:00:00.0: BAR 4 [mem 0xfc4000000-0xfc4007fff 64bit pref] Dec 16 13:03:28.972404 kernel: pci 7870:00:00.0: enabling Extended Tags Dec 16 13:03:28.991660 kernel: pci_bus 7870:00: busn_res: [bus 00-ff] end is updated to 00 Dec 16 13:03:28.991908 kernel: pci 7870:00:00.0: BAR 0 [mem 0xfc2000000-0xfc3ffffff 64bit pref]: assigned Dec 16 13:03:28.995411 kernel: pci 7870:00:00.0: BAR 4 [mem 0xfc4000000-0xfc4007fff 64bit pref]: assigned Dec 16 13:03:29.014725 kernel: mana 7870:00:00.0: enabling device (0000 -> 0002) Dec 16 13:03:29.026355 kernel: mana 7870:00:00.0: Microsoft Azure Network Adapter protocol version: 0.1.1 Dec 16 13:03:29.029984 kernel: hv_netvsc f8615163-0000-1000-2000-7ced8db7e167 eth0: VF registering: eth1 Dec 16 13:03:29.030165 kernel: mana 7870:00:00.0 eth1: joined to eth0 Dec 16 13:03:29.035129 systemd-networkd[1431]: eth1: Interface name change detected, renamed to enP30832s1. Dec 16 13:03:29.037464 kernel: mana 7870:00:00.0 enP30832s1: renamed from eth1 Dec 16 13:03:29.135385 kernel: mana 7870:00:00.0 enP30832s1: Configured vPort 0 PD 18 DB 16 Dec 16 13:03:29.138445 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64 Dec 16 13:03:29.138718 kernel: hv_netvsc f8615163-0000-1000-2000-7ced8db7e167 eth0: Data path switched to VF: enP30832s1 Dec 16 13:03:29.140275 systemd-networkd[1431]: enP30832s1: Link UP Dec 16 13:03:29.141449 systemd-networkd[1431]: enP30832s1: Gained carrier Dec 16 13:03:29.542862 disk-uuid[1608]: Warning: The kernel is still using the old partition table. Dec 16 13:03:29.542862 disk-uuid[1608]: The new table will be used at the next reboot or after you Dec 16 13:03:29.542862 disk-uuid[1608]: run partprobe(8) or kpartx(8) Dec 16 13:03:29.542862 disk-uuid[1608]: The operation has completed successfully. Dec 16 13:03:29.551331 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 16 13:03:29.551478 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Dec 16 13:03:29.552000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 13:03:29.552000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:03:29.556069 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Dec 16 13:03:29.612739 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:5) scanned by mount (1654) Dec 16 13:03:29.612846 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem c87e2a2e-b8fc-4d1d-98f3-593ea9a0f098 Dec 16 13:03:29.614142 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Dec 16 13:03:29.636844 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Dec 16 13:03:29.636886 kernel: BTRFS info (device nvme0n1p6): turning on async discard Dec 16 13:03:29.637931 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Dec 16 13:03:29.645273 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem c87e2a2e-b8fc-4d1d-98f3-593ea9a0f098 Dec 16 13:03:29.644532 systemd[1]: Finished ignition-setup.service - Ignition (setup). Dec 16 13:03:29.646000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:03:29.649075 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Dec 16 13:03:29.772507 systemd-networkd[1431]: eth0: Gained IPv6LL Dec 16 13:03:30.787794 ignition[1673]: Ignition 2.22.0 Dec 16 13:03:30.787808 ignition[1673]: Stage: fetch-offline Dec 16 13:03:30.787948 ignition[1673]: no configs at "/usr/lib/ignition/base.d" Dec 16 13:03:30.792000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:03:30.790689 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Dec 16 13:03:30.787958 ignition[1673]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 16 13:03:30.794796 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
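A few entries back, disk-uuid warned that the kernel keeps using the old partition table until it is asked to re-read it (via partprobe(8), kpartx(8), or a reboot). The classic request behind those tools is the BLKRRPART ioctl on the whole-disk node; a minimal sketch, with /dev/nvme0n1 used purely as an example device and root privileges assumed:

    # Ask the kernel to re-read one disk's partition table.
    # BLKRRPART fails with EBUSY while any partition on the disk is in use,
    # which is typically why a freshly written table is only picked up at the
    # next re-read or reboot.
    import fcntl
    import os

    BLKRRPART = 0x125F  # _IO(0x12, 95) from <linux/fs.h>

    fd = os.open("/dev/nvme0n1", os.O_RDONLY)
    try:
        fcntl.ioctl(fd, BLKRRPART)
    finally:
        os.close(fd)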
Dec 16 13:03:30.788060 ignition[1673]: parsed url from cmdline: "" Dec 16 13:03:30.788063 ignition[1673]: no config URL provided Dec 16 13:03:30.788069 ignition[1673]: reading system config file "/usr/lib/ignition/user.ign" Dec 16 13:03:30.788076 ignition[1673]: no config at "/usr/lib/ignition/user.ign" Dec 16 13:03:30.788082 ignition[1673]: failed to fetch config: resource requires networking Dec 16 13:03:30.788264 ignition[1673]: Ignition finished successfully Dec 16 13:03:30.831152 ignition[1680]: Ignition 2.22.0 Dec 16 13:03:30.831163 ignition[1680]: Stage: fetch Dec 16 13:03:30.831421 ignition[1680]: no configs at "/usr/lib/ignition/base.d" Dec 16 13:03:30.831436 ignition[1680]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 16 13:03:30.831535 ignition[1680]: parsed url from cmdline: "" Dec 16 13:03:30.831538 ignition[1680]: no config URL provided Dec 16 13:03:30.831544 ignition[1680]: reading system config file "/usr/lib/ignition/user.ign" Dec 16 13:03:30.831549 ignition[1680]: no config at "/usr/lib/ignition/user.ign" Dec 16 13:03:30.831575 ignition[1680]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Dec 16 13:03:30.891468 ignition[1680]: GET result: OK Dec 16 13:03:30.891614 ignition[1680]: config has been read from IMDS userdata Dec 16 13:03:30.892371 ignition[1680]: parsing config with SHA512: 90d28420c0b56e9665b0dfcd6f18671508222f3acf60ee16d4f12b5c7a4610b3c979a5b10541bdd3c84b9711ff9db0b6453d666420beea3ab925251b49ea6c1e Dec 16 13:03:30.896737 unknown[1680]: fetched base config from "system" Dec 16 13:03:30.897017 ignition[1680]: fetch: fetch complete Dec 16 13:03:30.896757 unknown[1680]: fetched base config from "system" Dec 16 13:03:30.901000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:03:30.897021 ignition[1680]: fetch: fetch passed Dec 16 13:03:30.896762 unknown[1680]: fetched user config from "azure" Dec 16 13:03:30.897068 ignition[1680]: Ignition finished successfully Dec 16 13:03:30.899077 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Dec 16 13:03:30.905471 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Dec 16 13:03:30.932873 ignition[1686]: Ignition 2.22.0 Dec 16 13:03:30.932882 ignition[1686]: Stage: kargs Dec 16 13:03:30.933125 ignition[1686]: no configs at "/usr/lib/ignition/base.d" Dec 16 13:03:30.935376 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Dec 16 13:03:30.941000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:03:30.933132 ignition[1686]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 16 13:03:30.943702 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Dec 16 13:03:30.934087 ignition[1686]: kargs: kargs passed Dec 16 13:03:30.934144 ignition[1686]: Ignition finished successfully Dec 16 13:03:30.970050 ignition[1692]: Ignition 2.22.0 Dec 16 13:03:30.970060 ignition[1692]: Stage: disks Dec 16 13:03:30.972534 systemd[1]: Finished ignition-disks.service - Ignition (disks). Dec 16 13:03:30.973000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 13:03:30.970284 ignition[1692]: no configs at "/usr/lib/ignition/base.d" Dec 16 13:03:30.974869 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Dec 16 13:03:30.970292 ignition[1692]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 16 13:03:30.978419 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Dec 16 13:03:30.971390 ignition[1692]: disks: disks passed Dec 16 13:03:30.980587 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 16 13:03:30.971434 ignition[1692]: Ignition finished successfully Dec 16 13:03:30.983392 systemd[1]: Reached target sysinit.target - System Initialization. Dec 16 13:03:30.987380 systemd[1]: Reached target basic.target - Basic System. Dec 16 13:03:30.992207 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Dec 16 13:03:31.114858 systemd-fsck[1700]: ROOT: clean, 15/6361680 files, 408771/6359552 blocks Dec 16 13:03:31.119229 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Dec 16 13:03:31.125000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:03:31.127899 systemd[1]: Mounting sysroot.mount - /sysroot... Dec 16 13:03:31.504366 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 7cac6192-738c-43cc-9341-24f71d091e91 r/w with ordered data mode. Quota mode: none. Dec 16 13:03:31.505493 systemd[1]: Mounted sysroot.mount - /sysroot. Dec 16 13:03:31.506501 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Dec 16 13:03:31.546437 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 16 13:03:31.550715 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Dec 16 13:03:31.565466 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Dec 16 13:03:31.568444 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 16 13:03:31.568718 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Dec 16 13:03:31.573158 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Dec 16 13:03:31.583151 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:5) scanned by mount (1709) Dec 16 13:03:31.583374 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Dec 16 13:03:31.588389 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem c87e2a2e-b8fc-4d1d-98f3-593ea9a0f098 Dec 16 13:03:31.588434 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Dec 16 13:03:31.595624 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Dec 16 13:03:31.595652 kernel: BTRFS info (device nvme0n1p6): turning on async discard Dec 16 13:03:31.596846 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Dec 16 13:03:31.599481 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
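A couple of entries back, the Ignition fetch stage retrieved its config from the Azure Instance Metadata Service userData endpoint and logged a SHA512 of the parsed config. A rough Python sketch of that request follows; the Metadata request header and the base64 decoding are assumptions based on general IMDS behaviour rather than anything visible in the log:

    # Sketch of the IMDS userData fetch performed during the fetch stage above.
    # Assumes the standard "Metadata: true" header requirement and that the
    # endpoint returns the user data base64-encoded.
    import base64
    import hashlib
    import urllib.request

    URL = ("http://169.254.169.254/metadata/instance/compute/userData"
           "?api-version=2021-01-01&format=text")

    req = urllib.request.Request(URL, headers={"Metadata": "true"})
    with urllib.request.urlopen(req, timeout=10) as resp:
        raw = resp.read()

    config = base64.b64decode(raw)                # Ignition parses this as JSON
    print(hashlib.sha512(config).hexdigest())     # comparable to the digest logged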
Dec 16 13:03:32.216505 coreos-metadata[1711]: Dec 16 13:03:32.216 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Dec 16 13:03:32.226976 coreos-metadata[1711]: Dec 16 13:03:32.226 INFO Fetch successful Dec 16 13:03:32.229462 coreos-metadata[1711]: Dec 16 13:03:32.228 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Dec 16 13:03:32.239593 coreos-metadata[1711]: Dec 16 13:03:32.239 INFO Fetch successful Dec 16 13:03:32.243446 coreos-metadata[1711]: Dec 16 13:03:32.241 INFO wrote hostname ci-4515.1.0-a-5ae2bb3665 to /sysroot/etc/hostname Dec 16 13:03:32.251610 kernel: kauditd_printk_skb: 22 callbacks suppressed Dec 16 13:03:32.251647 kernel: audit: type=1130 audit(1765890212.245:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:03:32.245000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:03:32.242392 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Dec 16 13:03:32.343449 initrd-setup-root[1739]: cut: /sysroot/etc/passwd: No such file or directory Dec 16 13:03:32.410128 initrd-setup-root[1746]: cut: /sysroot/etc/group: No such file or directory Dec 16 13:03:32.445981 initrd-setup-root[1753]: cut: /sysroot/etc/shadow: No such file or directory Dec 16 13:03:32.464833 initrd-setup-root[1760]: cut: /sysroot/etc/gshadow: No such file or directory Dec 16 13:03:33.390084 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Dec 16 13:03:33.398637 kernel: audit: type=1130 audit(1765890213.393:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:03:33.393000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:03:33.399772 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Dec 16 13:03:33.404334 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Dec 16 13:03:33.440360 systemd[1]: sysroot-oem.mount: Deactivated successfully. Dec 16 13:03:33.444370 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem c87e2a2e-b8fc-4d1d-98f3-593ea9a0f098 Dec 16 13:03:33.455535 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Dec 16 13:03:33.460000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:03:33.467374 kernel: audit: type=1130 audit(1765890213.460:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 13:03:33.476834 ignition[1829]: INFO : Ignition 2.22.0 Dec 16 13:03:33.476834 ignition[1829]: INFO : Stage: mount Dec 16 13:03:33.480422 ignition[1829]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 16 13:03:33.480422 ignition[1829]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 16 13:03:33.480422 ignition[1829]: INFO : mount: mount passed Dec 16 13:03:33.480422 ignition[1829]: INFO : Ignition finished successfully Dec 16 13:03:33.493367 kernel: audit: type=1130 audit(1765890213.481:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:03:33.481000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:03:33.480120 systemd[1]: Finished ignition-mount.service - Ignition (mount). Dec 16 13:03:33.486449 systemd[1]: Starting ignition-files.service - Ignition (files)... Dec 16 13:03:33.502250 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 16 13:03:33.525355 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:5) scanned by mount (1841) Dec 16 13:03:33.525390 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem c87e2a2e-b8fc-4d1d-98f3-593ea9a0f098 Dec 16 13:03:33.527851 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Dec 16 13:03:33.533586 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Dec 16 13:03:33.533618 kernel: BTRFS info (device nvme0n1p6): turning on async discard Dec 16 13:03:33.533627 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Dec 16 13:03:33.535069 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Dec 16 13:03:33.566898 ignition[1858]: INFO : Ignition 2.22.0 Dec 16 13:03:33.566898 ignition[1858]: INFO : Stage: files Dec 16 13:03:33.569727 ignition[1858]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 16 13:03:33.569727 ignition[1858]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 16 13:03:33.569727 ignition[1858]: DEBUG : files: compiled without relabeling support, skipping Dec 16 13:03:33.569727 ignition[1858]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 16 13:03:33.569727 ignition[1858]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 16 13:03:33.604225 ignition[1858]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 16 13:03:33.606136 ignition[1858]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 16 13:03:33.609438 ignition[1858]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 16 13:03:33.606233 unknown[1858]: wrote ssh authorized keys file for user: core Dec 16 13:03:33.637837 ignition[1858]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Dec 16 13:03:33.642426 ignition[1858]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Dec 16 13:04:03.654053 ignition[1858]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET error: Get "https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz": dial tcp 13.107.246.64:443: i/o timeout Dec 16 13:04:03.854518 ignition[1858]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #2 Dec 16 13:04:11.069838 ignition[1858]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Dec 16 13:04:11.138266 ignition[1858]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Dec 16 13:04:11.143502 ignition[1858]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Dec 16 13:04:11.143502 ignition[1858]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Dec 16 13:04:11.143502 ignition[1858]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Dec 16 13:04:11.143502 ignition[1858]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Dec 16 13:04:11.143502 ignition[1858]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 16 13:04:11.143502 ignition[1858]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 16 13:04:11.143502 ignition[1858]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 16 13:04:11.143502 ignition[1858]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 16 13:04:11.170378 ignition[1858]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 16 13:04:11.170378 ignition[1858]: INFO : files: createFilesystemsFiles: createFiles: op(8): 
[finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 16 13:04:11.170378 ignition[1858]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Dec 16 13:04:11.170378 ignition[1858]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Dec 16 13:04:11.170378 ignition[1858]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Dec 16 13:04:11.170378 ignition[1858]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-x86-64.raw: attempt #1 Dec 16 13:04:11.467845 ignition[1858]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Dec 16 13:04:11.681158 ignition[1858]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Dec 16 13:04:11.681158 ignition[1858]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Dec 16 13:04:11.740489 ignition[1858]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 16 13:04:11.747312 ignition[1858]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 16 13:04:11.747312 ignition[1858]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Dec 16 13:04:11.752427 ignition[1858]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Dec 16 13:04:11.752427 ignition[1858]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Dec 16 13:04:11.752427 ignition[1858]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 16 13:04:11.752427 ignition[1858]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 16 13:04:11.752427 ignition[1858]: INFO : files: files passed Dec 16 13:04:11.752427 ignition[1858]: INFO : Ignition finished successfully Dec 16 13:04:11.772730 kernel: audit: type=1130 audit(1765890251.760:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:04:11.760000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:04:11.759150 systemd[1]: Finished ignition-files.service - Ignition (files). Dec 16 13:04:11.765744 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Dec 16 13:04:11.771477 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... 
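The files stage recorded above writes SSH keys for the core user, drops several files under /home/core and /opt (including the Helm tarball and the Kubernetes sysext image), creates the /etc/extensions/kubernetes.raw symlink, and installs and presets prepare-helm.service. The user data that drives this is not reproduced in the log; an Ignition config producing that kind of sequence would look roughly like the following sketch (spec version, key material, and unit contents are placeholders):

    {
      "ignition": { "version": "3.4.0" },
      "passwd": {
        "users": [
          { "name": "core", "sshAuthorizedKeys": ["ssh-ed25519 AAAA... placeholder"] }
        ]
      },
      "storage": {
        "files": [
          {
            "path": "/opt/helm-v3.17.3-linux-amd64.tar.gz",
            "contents": { "source": "https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz" }
          },
          {
            "path": "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw",
            "contents": { "source": "https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-x86-64.raw" }
          }
        ],
        "links": [
          {
            "path": "/etc/extensions/kubernetes.raw",
            "target": "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
          }
        ]
      },
      "systemd": {
        "units": [
          { "name": "prepare-helm.service", "enabled": true, "contents": "[Unit]\nDescription=...\n" }
        ]
      }
    }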
Dec 16 13:04:11.799387 initrd-setup-root-after-ignition[1889]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 16 13:04:11.799387 initrd-setup-root-after-ignition[1889]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Dec 16 13:04:11.814715 kernel: audit: type=1130 audit(1765890251.802:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:04:11.814748 kernel: audit: type=1131 audit(1765890251.802:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:04:11.802000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:04:11.802000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:04:11.815179 initrd-setup-root-after-ignition[1894]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 16 13:04:11.815000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:04:11.801867 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 16 13:04:11.826930 kernel: audit: type=1130 audit(1765890251.815:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:04:11.801979 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Dec 16 13:04:11.811839 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 16 13:04:11.816847 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Dec 16 13:04:11.830471 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Dec 16 13:04:11.877791 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 16 13:04:11.877888 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Dec 16 13:04:11.891652 kernel: audit: type=1130 audit(1765890251.880:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:04:11.891714 kernel: audit: type=1131 audit(1765890251.880:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:04:11.880000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:04:11.880000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:04:11.885175 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. 
Dec 16 13:04:11.893466 systemd[1]: Reached target initrd.target - Initrd Default Target. Dec 16 13:04:11.898680 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Dec 16 13:04:11.899334 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Dec 16 13:04:11.923673 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 16 13:04:11.933424 kernel: audit: type=1130 audit(1765890251.923:48): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:04:11.923000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:04:11.930453 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Dec 16 13:04:11.948730 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Dec 16 13:04:11.948875 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Dec 16 13:04:11.951482 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 16 13:04:11.955535 systemd[1]: Stopped target timers.target - Timer Units. Dec 16 13:04:11.960000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:04:11.958463 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 16 13:04:11.968068 kernel: audit: type=1131 audit(1765890251.960:49): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:04:11.958585 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 16 13:04:11.966742 systemd[1]: Stopped target initrd.target - Initrd Default Target. Dec 16 13:04:11.970515 systemd[1]: Stopped target basic.target - Basic System. Dec 16 13:04:11.974512 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Dec 16 13:04:11.977420 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Dec 16 13:04:11.980245 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Dec 16 13:04:11.984486 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Dec 16 13:04:11.988413 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Dec 16 13:04:11.992644 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Dec 16 13:04:11.997632 systemd[1]: Stopped target sysinit.target - System Initialization. Dec 16 13:04:12.001929 systemd[1]: Stopped target local-fs.target - Local File Systems. Dec 16 13:04:12.004837 systemd[1]: Stopped target swap.target - Swaps. Dec 16 13:04:12.009077 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 16 13:04:12.012000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:04:12.009190 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. 
Dec 16 13:04:12.018245 kernel: audit: type=1131 audit(1765890252.012:50): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:04:12.018277 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Dec 16 13:04:12.021079 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 16 13:04:12.024437 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Dec 16 13:04:12.024789 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 16 13:04:12.028294 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 16 13:04:12.028423 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Dec 16 13:04:12.042000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:04:12.048354 kernel: audit: type=1131 audit(1765890252.042:51): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:04:12.048384 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 16 13:04:12.048516 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 16 13:04:12.052000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:04:12.053539 systemd[1]: ignition-files.service: Deactivated successfully. Dec 16 13:04:12.056000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:04:12.053664 systemd[1]: Stopped ignition-files.service - Ignition (files). Dec 16 13:04:12.059000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:04:12.057531 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Dec 16 13:04:12.057669 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Dec 16 13:04:12.062011 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Dec 16 13:04:12.074568 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Dec 16 13:04:12.077469 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Dec 16 13:04:12.080000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:04:12.077661 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 16 13:04:12.085000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 13:04:12.087000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:04:12.081980 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 16 13:04:12.082086 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Dec 16 13:04:12.086499 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 16 13:04:12.086623 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Dec 16 13:04:12.102443 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 16 13:04:12.106000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:04:12.106000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:04:12.111486 ignition[1914]: INFO : Ignition 2.22.0 Dec 16 13:04:12.111486 ignition[1914]: INFO : Stage: umount Dec 16 13:04:12.111486 ignition[1914]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 16 13:04:12.111486 ignition[1914]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 16 13:04:12.111486 ignition[1914]: INFO : umount: umount passed Dec 16 13:04:12.111486 ignition[1914]: INFO : Ignition finished successfully Dec 16 13:04:12.116000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:04:12.119000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:04:12.127000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:04:12.104755 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Dec 16 13:04:12.130000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:04:12.114385 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 16 13:04:12.114490 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Dec 16 13:04:12.136000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:04:12.117841 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 16 13:04:12.117930 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Dec 16 13:04:12.120435 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 16 13:04:12.120474 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Dec 16 13:04:12.128904 systemd[1]: ignition-fetch.service: Deactivated successfully. Dec 16 13:04:12.129689 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Dec 16 13:04:12.132743 systemd[1]: Stopped target network.target - Network. 
Dec 16 13:04:12.135317 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 16 13:04:12.135760 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Dec 16 13:04:12.137698 systemd[1]: Stopped target paths.target - Path Units. Dec 16 13:04:12.142831 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 16 13:04:12.142889 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 16 13:04:12.148983 systemd[1]: Stopped target slices.target - Slice Units. Dec 16 13:04:12.150659 systemd[1]: Stopped target sockets.target - Socket Units. Dec 16 13:04:12.155415 systemd[1]: iscsid.socket: Deactivated successfully. Dec 16 13:04:12.155446 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Dec 16 13:04:12.159877 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 16 13:04:12.159941 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 16 13:04:12.163208 systemd[1]: systemd-journald-audit.socket: Deactivated successfully. Dec 16 13:04:12.163239 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket. Dec 16 13:04:12.170637 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 16 13:04:12.180000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:04:12.170698 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Dec 16 13:04:12.181516 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Dec 16 13:04:12.181561 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Dec 16 13:04:12.183000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:04:12.187846 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Dec 16 13:04:12.190548 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Dec 16 13:04:12.198758 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 16 13:04:12.200043 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Dec 16 13:04:12.201000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:04:12.204154 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 16 13:04:12.205300 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Dec 16 13:04:12.208000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:04:12.210000 audit: BPF prog-id=9 op=UNLOAD Dec 16 13:04:12.211808 systemd[1]: Stopped target network-pre.target - Preparation for Network. Dec 16 13:04:12.212000 audit: BPF prog-id=6 op=UNLOAD Dec 16 13:04:12.214872 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 16 13:04:12.214908 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Dec 16 13:04:12.220172 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Dec 16 13:04:12.226404 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. 
Dec 16 13:04:12.226850 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 16 13:04:12.233000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:04:12.234447 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 16 13:04:12.234502 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 16 13:04:12.235000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:04:12.236811 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 16 13:04:12.239000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:04:12.236856 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Dec 16 13:04:12.240435 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 16 13:04:12.245397 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 16 13:04:12.251305 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 16 13:04:12.251470 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 16 13:04:12.255000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:04:12.256679 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 16 13:04:12.256718 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Dec 16 13:04:12.262833 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 16 13:04:12.262873 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Dec 16 13:04:12.265369 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 16 13:04:12.268000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:04:12.270000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:04:12.265422 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Dec 16 13:04:12.270569 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 16 13:04:12.274000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:04:12.270611 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Dec 16 13:04:12.271881 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 16 13:04:12.271925 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 16 13:04:12.280833 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Dec 16 13:04:12.283089 systemd[1]: systemd-network-generator.service: Deactivated successfully. 
Dec 16 13:04:12.284000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:04:12.284148 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Dec 16 13:04:12.286667 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 16 13:04:12.286707 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 16 13:04:12.297000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:04:12.298447 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Dec 16 13:04:12.298000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:04:12.298505 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 16 13:04:12.301290 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 16 13:04:12.301334 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Dec 16 13:04:12.308000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:04:12.310201 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 16 13:04:12.310260 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 13:04:12.311000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:04:12.313459 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 16 13:04:12.314000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:04:12.315000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:04:12.313546 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Dec 16 13:04:12.341396 kernel: hv_netvsc f8615163-0000-1000-2000-7ced8db7e167 eth0: Data path switched from VF: enP30832s1 Dec 16 13:04:12.341663 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64 Dec 16 13:04:12.343572 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 16 13:04:12.344870 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Dec 16 13:04:12.345000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:04:12.351282 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 16 13:04:12.351835 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. 
Dec 16 13:04:12.355000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:04:12.357215 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Dec 16 13:04:12.359511 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 16 13:04:12.360000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:04:12.359551 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Dec 16 13:04:12.364077 systemd[1]: Starting initrd-switch-root.service - Switch Root... Dec 16 13:04:12.381143 systemd[1]: Switching root. Dec 16 13:04:12.464968 systemd-journald[1050]: Journal stopped Dec 16 13:04:16.990377 systemd-journald[1050]: Received SIGTERM from PID 1 (systemd). Dec 16 13:04:16.990400 kernel: SELinux: policy capability network_peer_controls=1 Dec 16 13:04:16.990413 kernel: SELinux: policy capability open_perms=1 Dec 16 13:04:16.990424 kernel: SELinux: policy capability extended_socket_class=1 Dec 16 13:04:16.990432 kernel: SELinux: policy capability always_check_network=0 Dec 16 13:04:16.990439 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 16 13:04:16.990445 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 16 13:04:16.990453 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 16 13:04:16.990458 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 16 13:04:16.990464 kernel: SELinux: policy capability userspace_initial_context=0 Dec 16 13:04:16.990470 systemd[1]: Successfully loaded SELinux policy in 209.341ms. Dec 16 13:04:16.990476 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 4.951ms. Dec 16 13:04:16.990483 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Dec 16 13:04:16.990491 systemd[1]: Detected virtualization microsoft. Dec 16 13:04:16.990498 systemd[1]: Detected architecture x86-64. Dec 16 13:04:16.990509 systemd[1]: Detected first boot. Dec 16 13:04:16.990520 systemd[1]: Hostname set to . Dec 16 13:04:16.990530 systemd[1]: Initializing machine ID from random generator. Dec 16 13:04:16.990536 zram_generator::config[1958]: No configuration found. Dec 16 13:04:16.990543 kernel: Guest personality initialized and is inactive Dec 16 13:04:16.990548 kernel: VMCI host device registered (name=vmci, major=10, minor=259) Dec 16 13:04:16.990554 kernel: Initialized host personality Dec 16 13:04:16.990559 kernel: NET: Registered PF_VSOCK protocol family Dec 16 13:04:16.990565 systemd[1]: Populated /etc with preset unit settings. Dec 16 13:04:16.990572 systemd[1]: initrd-switch-root.service: Deactivated successfully. Dec 16 13:04:16.990578 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Dec 16 13:04:16.990585 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Dec 16 13:04:16.990594 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Dec 16 13:04:16.990601 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. 
Dec 16 13:04:16.990612 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Dec 16 13:04:16.990623 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Dec 16 13:04:16.990633 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Dec 16 13:04:16.990641 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Dec 16 13:04:16.990647 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Dec 16 13:04:16.990653 systemd[1]: Created slice user.slice - User and Session Slice. Dec 16 13:04:16.990660 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 16 13:04:16.990669 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 16 13:04:16.990675 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Dec 16 13:04:16.990681 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Dec 16 13:04:16.990687 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Dec 16 13:04:16.990699 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 16 13:04:16.990709 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Dec 16 13:04:16.990726 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 16 13:04:16.990735 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 16 13:04:16.990741 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Dec 16 13:04:16.990747 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Dec 16 13:04:16.990753 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Dec 16 13:04:16.990760 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Dec 16 13:04:16.990766 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 16 13:04:16.990775 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 16 13:04:16.990785 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes. Dec 16 13:04:16.990795 systemd[1]: Reached target slices.target - Slice Units. Dec 16 13:04:16.990804 systemd[1]: Reached target swap.target - Swaps. Dec 16 13:04:16.990812 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Dec 16 13:04:16.990818 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Dec 16 13:04:16.990826 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Dec 16 13:04:16.990833 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Dec 16 13:04:16.990842 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket. Dec 16 13:04:16.990852 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 16 13:04:16.990868 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket. Dec 16 13:04:16.990879 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket. Dec 16 13:04:16.990886 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 16 13:04:16.990894 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
Dec 16 13:04:16.990902 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Dec 16 13:04:16.990916 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Dec 16 13:04:16.990928 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Dec 16 13:04:16.990939 systemd[1]: Mounting media.mount - External Media Directory... Dec 16 13:04:16.990947 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 13:04:16.990954 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Dec 16 13:04:16.990961 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Dec 16 13:04:16.990968 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Dec 16 13:04:16.990976 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 16 13:04:16.990990 systemd[1]: Reached target machines.target - Containers. Dec 16 13:04:16.991006 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Dec 16 13:04:16.991014 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 16 13:04:16.991022 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 16 13:04:16.991029 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Dec 16 13:04:16.991037 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 16 13:04:16.991051 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 16 13:04:16.991065 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 16 13:04:16.991073 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Dec 16 13:04:16.991081 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 16 13:04:16.991088 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 16 13:04:16.991101 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Dec 16 13:04:16.991115 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Dec 16 13:04:16.991127 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Dec 16 13:04:16.991139 kernel: kauditd_printk_skb: 53 callbacks suppressed Dec 16 13:04:16.991147 kernel: audit: type=1131 audit(1765890256.821:105): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:04:16.991155 systemd[1]: Stopped systemd-fsck-usr.service. Dec 16 13:04:16.991169 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 16 13:04:16.991183 kernel: audit: type=1131 audit(1765890256.832:106): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 13:04:16.991196 kernel: audit: type=1334 audit(1765890256.836:107): prog-id=14 op=UNLOAD Dec 16 13:04:16.991203 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 16 13:04:16.991210 kernel: audit: type=1334 audit(1765890256.836:108): prog-id=13 op=UNLOAD Dec 16 13:04:16.991218 kernel: audit: type=1334 audit(1765890256.837:109): prog-id=15 op=LOAD Dec 16 13:04:16.991229 kernel: audit: type=1334 audit(1765890256.839:110): prog-id=16 op=LOAD Dec 16 13:04:16.991241 kernel: audit: type=1334 audit(1765890256.840:111): prog-id=17 op=LOAD Dec 16 13:04:16.991256 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 16 13:04:16.991269 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 16 13:04:16.991277 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Dec 16 13:04:16.991285 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Dec 16 13:04:16.991295 kernel: fuse: init (API version 7.41) Dec 16 13:04:16.991309 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 16 13:04:16.991322 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 13:04:16.991345 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Dec 16 13:04:16.991354 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Dec 16 13:04:16.991361 systemd[1]: Mounted media.mount - External Media Directory. Dec 16 13:04:16.991375 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Dec 16 13:04:16.991388 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Dec 16 13:04:16.991398 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Dec 16 13:04:16.991405 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Dec 16 13:04:16.991415 kernel: audit: type=1130 audit(1765890256.954:112): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:04:16.991422 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 16 13:04:16.991429 kernel: audit: type=1130 audit(1765890256.961:113): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:04:16.991436 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 16 13:04:16.991444 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Dec 16 13:04:16.991457 kernel: audit: type=1130 audit(1765890256.971:114): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:04:16.991471 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 16 13:04:16.991483 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 16 13:04:16.991507 systemd-journald[2045]: Collecting audit messages is enabled. Dec 16 13:04:16.991534 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Dec 16 13:04:16.991543 systemd-journald[2045]: Journal started Dec 16 13:04:16.991567 systemd-journald[2045]: Runtime Journal (/run/log/journal/3a7d5b105bf44ac6b91b1d199a835c36) is 8M, max 158.5M, 150.5M free. Dec 16 13:04:16.595000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Dec 16 13:04:16.821000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:04:16.832000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:04:16.836000 audit: BPF prog-id=14 op=UNLOAD Dec 16 13:04:16.836000 audit: BPF prog-id=13 op=UNLOAD Dec 16 13:04:16.837000 audit: BPF prog-id=15 op=LOAD Dec 16 13:04:16.839000 audit: BPF prog-id=16 op=LOAD Dec 16 13:04:16.840000 audit: BPF prog-id=17 op=LOAD Dec 16 13:04:16.954000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:04:16.961000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:04:16.971000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:04:16.977000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:04:16.986000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Dec 16 13:04:16.986000 audit[2045]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7fffad538bb0 a2=4000 a3=0 items=0 ppid=1 pid=2045 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:16.986000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Dec 16 13:04:16.987000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:04:16.987000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:04:16.454900 systemd[1]: Queued start job for default target multi-user.target. Dec 16 13:04:16.466067 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Dec 16 13:04:16.466473 systemd[1]: systemd-journald.service: Deactivated successfully. 
Dec 16 13:04:16.995437 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 16 13:04:16.996000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:04:16.996000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:04:16.999681 systemd[1]: Started systemd-journald.service - Journal Service. Dec 16 13:04:17.000000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:04:17.002399 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 16 13:04:17.002606 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Dec 16 13:04:17.005000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:04:17.005000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:04:17.008015 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 16 13:04:17.008217 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 16 13:04:17.011000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:04:17.011000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:04:17.012946 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 16 13:04:17.013000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:04:17.015489 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 16 13:04:17.016000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:04:17.019205 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Dec 16 13:04:17.019000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:04:17.021760 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. 
Dec 16 13:04:17.028000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:04:17.042386 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 16 13:04:17.042000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:04:17.046385 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 16 13:04:17.052073 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Dec 16 13:04:17.058437 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Dec 16 13:04:17.064436 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Dec 16 13:04:17.068162 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 16 13:04:17.068314 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 16 13:04:17.070693 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Dec 16 13:04:17.075245 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 16 13:04:17.075393 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Dec 16 13:04:17.077882 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Dec 16 13:04:17.083326 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Dec 16 13:04:17.085462 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 16 13:04:17.088271 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Dec 16 13:04:17.091563 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 16 13:04:17.093966 kernel: ACPI: bus type drm_connector registered Dec 16 13:04:17.093461 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 16 13:04:17.097755 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Dec 16 13:04:17.101518 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 16 13:04:17.105935 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 16 13:04:17.106191 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 16 13:04:17.106000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:04:17.106000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:04:17.109197 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. 
Dec 16 13:04:17.112644 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Dec 16 13:04:17.128435 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Dec 16 13:04:17.131040 systemd-journald[2045]: Time spent on flushing to /var/log/journal/3a7d5b105bf44ac6b91b1d199a835c36 is 16.442ms for 1140 entries. Dec 16 13:04:17.131040 systemd-journald[2045]: System Journal (/var/log/journal/3a7d5b105bf44ac6b91b1d199a835c36) is 8M, max 2.2G, 2.2G free. Dec 16 13:04:17.162138 systemd-journald[2045]: Received client request to flush runtime journal. Dec 16 13:04:17.131000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:04:17.153000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:04:17.132573 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Dec 16 13:04:17.136750 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Dec 16 13:04:17.151437 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 16 13:04:17.165038 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Dec 16 13:04:17.168368 kernel: loop1: detected capacity change from 0 to 219144 Dec 16 13:04:17.167000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:04:17.206667 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Dec 16 13:04:17.210000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:04:17.216725 systemd-tmpfiles[2101]: ACLs are not supported, ignoring. Dec 16 13:04:17.216742 systemd-tmpfiles[2101]: ACLs are not supported, ignoring. Dec 16 13:04:17.220674 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 16 13:04:17.222000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:04:17.224588 systemd[1]: Starting systemd-sysusers.service - Create System Users... Dec 16 13:04:17.237377 kernel: loop2: detected capacity change from 0 to 27736 Dec 16 13:04:17.432012 systemd[1]: Finished systemd-sysusers.service - Create System Users. Dec 16 13:04:17.433000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:04:17.434000 audit: BPF prog-id=18 op=LOAD Dec 16 13:04:17.434000 audit: BPF prog-id=19 op=LOAD Dec 16 13:04:17.434000 audit: BPF prog-id=20 op=LOAD Dec 16 13:04:17.436171 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer... 
Dec 16 13:04:17.439000 audit: BPF prog-id=21 op=LOAD Dec 16 13:04:17.443504 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 16 13:04:17.447471 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 16 13:04:17.468466 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 16 13:04:17.473548 systemd-tmpfiles[2121]: ACLs are not supported, ignoring. Dec 16 13:04:17.473567 systemd-tmpfiles[2121]: ACLs are not supported, ignoring. Dec 16 13:04:17.476939 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 16 13:04:17.477000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:04:17.563000 audit: BPF prog-id=22 op=LOAD Dec 16 13:04:17.563000 audit: BPF prog-id=23 op=LOAD Dec 16 13:04:17.563000 audit: BPF prog-id=24 op=LOAD Dec 16 13:04:17.565753 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager... Dec 16 13:04:17.567000 audit: BPF prog-id=25 op=LOAD Dec 16 13:04:17.567000 audit: BPF prog-id=26 op=LOAD Dec 16 13:04:17.567000 audit: BPF prog-id=27 op=LOAD Dec 16 13:04:17.571164 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Dec 16 13:04:17.611266 systemd[1]: Started systemd-userdbd.service - User Database Manager. Dec 16 13:04:17.611000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:04:17.624431 systemd-nsresourced[2124]: Not setting up BPF subsystem, as functionality has been disabled at compile time. Dec 16 13:04:17.626356 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager. Dec 16 13:04:17.627000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:04:17.668359 kernel: loop3: detected capacity change from 0 to 111544 Dec 16 13:04:17.708160 systemd-oomd[2119]: No swap; memory pressure usage will be degraded Dec 16 13:04:17.710552 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer. Dec 16 13:04:17.713000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:04:17.764535 systemd-resolved[2120]: Positive Trust Anchors: Dec 16 13:04:17.764547 systemd-resolved[2120]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 16 13:04:17.764553 systemd-resolved[2120]: . 
IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Dec 16 13:04:17.764587 systemd-resolved[2120]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 16 13:04:17.811090 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Dec 16 13:04:17.811000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:04:17.811000 audit: BPF prog-id=8 op=UNLOAD Dec 16 13:04:17.811000 audit: BPF prog-id=7 op=UNLOAD Dec 16 13:04:17.811000 audit: BPF prog-id=28 op=LOAD Dec 16 13:04:17.811000 audit: BPF prog-id=29 op=LOAD Dec 16 13:04:17.813812 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 16 13:04:17.841912 systemd-udevd[2142]: Using default interface naming scheme 'v257'. Dec 16 13:04:17.923090 systemd-resolved[2120]: Using system hostname 'ci-4515.1.0-a-5ae2bb3665'. Dec 16 13:04:17.924511 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 16 13:04:17.924000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:04:17.926053 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 16 13:04:18.025213 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 16 13:04:18.027000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:04:18.029000 audit: BPF prog-id=30 op=LOAD Dec 16 13:04:18.032297 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 16 13:04:18.103719 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Dec 16 13:04:18.116371 kernel: loop4: detected capacity change from 0 to 119256 Dec 16 13:04:18.141373 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#251 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Dec 16 13:04:18.146002 systemd-networkd[2150]: lo: Link UP Dec 16 13:04:18.146012 systemd-networkd[2150]: lo: Gained carrier Dec 16 13:04:18.147728 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 16 13:04:18.150099 systemd-networkd[2150]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 16 13:04:18.150109 systemd-networkd[2150]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 16 13:04:18.149000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 13:04:18.150463 systemd[1]: Reached target network.target - Network. Dec 16 13:04:18.156125 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Dec 16 13:04:18.156394 kernel: mana 7870:00:00.0 enP30832s1: Configured vPort 0 PD 18 DB 16 Dec 16 13:04:18.160505 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Dec 16 13:04:18.167380 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64 Dec 16 13:04:18.171071 kernel: hv_netvsc f8615163-0000-1000-2000-7ced8db7e167 eth0: Data path switched to VF: enP30832s1 Dec 16 13:04:18.177390 kernel: hv_vmbus: registering driver hyperv_fb Dec 16 13:04:18.174433 systemd-networkd[2150]: enP30832s1: Link UP Dec 16 13:04:18.174531 systemd-networkd[2150]: eth0: Link UP Dec 16 13:04:18.174534 systemd-networkd[2150]: eth0: Gained carrier Dec 16 13:04:18.174549 systemd-networkd[2150]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 16 13:04:18.179640 systemd-networkd[2150]: enP30832s1: Gained carrier Dec 16 13:04:18.184408 kernel: hv_vmbus: registering driver hv_balloon Dec 16 13:04:18.190727 kernel: mousedev: PS/2 mouse device common for all mice Dec 16 13:04:18.191968 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Dec 16 13:04:18.192000 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Dec 16 13:04:18.188414 systemd-networkd[2150]: eth0: DHCPv4 address 10.200.4.43/24, gateway 10.200.4.1 acquired from 168.63.129.16 Dec 16 13:04:18.194062 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Dec 16 13:04:18.195368 kernel: Console: switching to colour dummy device 80x25 Dec 16 13:04:18.199361 kernel: Console: switching to colour frame buffer device 128x48 Dec 16 13:04:18.208534 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Dec 16 13:04:18.209000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-persistent-storage comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:04:18.287602 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 13:04:18.292356 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 16 13:04:18.293000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:04:18.293000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:04:18.292578 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 13:04:18.302537 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 13:04:18.405535 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 16 13:04:18.406000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 13:04:18.406000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:04:18.406463 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 13:04:18.412471 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 13:04:18.436631 kernel: kvm_intel: Using Hyper-V Enlightened VMCS Dec 16 13:04:18.502663 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - MSFT NVMe Accelerator v1.0 OEM. Dec 16 13:04:18.507460 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Dec 16 13:04:18.529372 kernel: loop5: detected capacity change from 0 to 219144 Dec 16 13:04:18.557383 kernel: loop6: detected capacity change from 0 to 27736 Dec 16 13:04:18.568355 kernel: loop7: detected capacity change from 0 to 111544 Dec 16 13:04:18.572742 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Dec 16 13:04:18.572000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:04:18.583377 kernel: loop1: detected capacity change from 0 to 119256 Dec 16 13:04:18.614744 (sd-merge)[2230]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw', 'oem-azure.raw'. Dec 16 13:04:18.617668 (sd-merge)[2230]: Merged extensions into '/usr'. Dec 16 13:04:18.621536 systemd[1]: Reload requested from client PID 2100 ('systemd-sysext') (unit systemd-sysext.service)... Dec 16 13:04:18.621553 systemd[1]: Reloading... Dec 16 13:04:18.674367 zram_generator::config[2266]: No configuration found. Dec 16 13:04:18.888823 systemd[1]: Reloading finished in 266 ms. Dec 16 13:04:18.911709 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 13:04:18.912000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:04:18.914984 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Dec 16 13:04:18.914000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:04:18.923221 systemd[1]: Starting ensure-sysext.service... Dec 16 13:04:18.927095 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... 
Dec 16 13:04:18.932000 audit: BPF prog-id=31 op=LOAD Dec 16 13:04:18.932000 audit: BPF prog-id=15 op=UNLOAD Dec 16 13:04:18.932000 audit: BPF prog-id=32 op=LOAD Dec 16 13:04:18.932000 audit: BPF prog-id=33 op=LOAD Dec 16 13:04:18.932000 audit: BPF prog-id=16 op=UNLOAD Dec 16 13:04:18.932000 audit: BPF prog-id=17 op=UNLOAD Dec 16 13:04:18.933000 audit: BPF prog-id=34 op=LOAD Dec 16 13:04:18.933000 audit: BPF prog-id=18 op=UNLOAD Dec 16 13:04:18.933000 audit: BPF prog-id=35 op=LOAD Dec 16 13:04:18.933000 audit: BPF prog-id=36 op=LOAD Dec 16 13:04:18.933000 audit: BPF prog-id=19 op=UNLOAD Dec 16 13:04:18.933000 audit: BPF prog-id=20 op=UNLOAD Dec 16 13:04:18.934000 audit: BPF prog-id=37 op=LOAD Dec 16 13:04:18.934000 audit: BPF prog-id=25 op=UNLOAD Dec 16 13:04:18.934000 audit: BPF prog-id=38 op=LOAD Dec 16 13:04:18.934000 audit: BPF prog-id=39 op=LOAD Dec 16 13:04:18.934000 audit: BPF prog-id=26 op=UNLOAD Dec 16 13:04:18.934000 audit: BPF prog-id=27 op=UNLOAD Dec 16 13:04:18.935000 audit: BPF prog-id=40 op=LOAD Dec 16 13:04:18.935000 audit: BPF prog-id=21 op=UNLOAD Dec 16 13:04:18.936000 audit: BPF prog-id=41 op=LOAD Dec 16 13:04:18.936000 audit: BPF prog-id=22 op=UNLOAD Dec 16 13:04:18.936000 audit: BPF prog-id=42 op=LOAD Dec 16 13:04:18.936000 audit: BPF prog-id=43 op=LOAD Dec 16 13:04:18.937000 audit: BPF prog-id=23 op=UNLOAD Dec 16 13:04:18.937000 audit: BPF prog-id=24 op=UNLOAD Dec 16 13:04:18.938000 audit: BPF prog-id=44 op=LOAD Dec 16 13:04:18.938000 audit: BPF prog-id=30 op=UNLOAD Dec 16 13:04:18.939000 audit: BPF prog-id=45 op=LOAD Dec 16 13:04:18.939000 audit: BPF prog-id=46 op=LOAD Dec 16 13:04:18.939000 audit: BPF prog-id=28 op=UNLOAD Dec 16 13:04:18.939000 audit: BPF prog-id=29 op=UNLOAD Dec 16 13:04:18.949834 systemd[1]: Reload requested from client PID 2327 ('systemctl') (unit ensure-sysext.service)... Dec 16 13:04:18.949930 systemd[1]: Reloading... Dec 16 13:04:18.966606 systemd-tmpfiles[2328]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Dec 16 13:04:18.966640 systemd-tmpfiles[2328]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Dec 16 13:04:18.966935 systemd-tmpfiles[2328]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 16 13:04:18.968147 systemd-tmpfiles[2328]: ACLs are not supported, ignoring. Dec 16 13:04:18.968245 systemd-tmpfiles[2328]: ACLs are not supported, ignoring. Dec 16 13:04:18.990444 systemd-tmpfiles[2328]: Detected autofs mount point /boot during canonicalization of boot. Dec 16 13:04:18.990453 systemd-tmpfiles[2328]: Skipping /boot Dec 16 13:04:18.999093 systemd-tmpfiles[2328]: Detected autofs mount point /boot during canonicalization of boot. Dec 16 13:04:18.999185 systemd-tmpfiles[2328]: Skipping /boot Dec 16 13:04:19.018357 zram_generator::config[2361]: No configuration found. Dec 16 13:04:19.190746 systemd[1]: Reloading finished in 240 ms. 
Dec 16 13:04:19.203000 audit: BPF prog-id=47 op=LOAD Dec 16 13:04:19.203000 audit: BPF prog-id=41 op=UNLOAD Dec 16 13:04:19.203000 audit: BPF prog-id=48 op=LOAD Dec 16 13:04:19.203000 audit: BPF prog-id=49 op=LOAD Dec 16 13:04:19.203000 audit: BPF prog-id=42 op=UNLOAD Dec 16 13:04:19.203000 audit: BPF prog-id=43 op=UNLOAD Dec 16 13:04:19.204000 audit: BPF prog-id=50 op=LOAD Dec 16 13:04:19.204000 audit: BPF prog-id=44 op=UNLOAD Dec 16 13:04:19.205000 audit: BPF prog-id=51 op=LOAD Dec 16 13:04:19.205000 audit: BPF prog-id=40 op=UNLOAD Dec 16 13:04:19.205000 audit: BPF prog-id=52 op=LOAD Dec 16 13:04:19.205000 audit: BPF prog-id=31 op=UNLOAD Dec 16 13:04:19.205000 audit: BPF prog-id=53 op=LOAD Dec 16 13:04:19.205000 audit: BPF prog-id=54 op=LOAD Dec 16 13:04:19.205000 audit: BPF prog-id=32 op=UNLOAD Dec 16 13:04:19.205000 audit: BPF prog-id=33 op=UNLOAD Dec 16 13:04:19.206000 audit: BPF prog-id=55 op=LOAD Dec 16 13:04:19.206000 audit: BPF prog-id=34 op=UNLOAD Dec 16 13:04:19.206000 audit: BPF prog-id=56 op=LOAD Dec 16 13:04:19.206000 audit: BPF prog-id=57 op=LOAD Dec 16 13:04:19.206000 audit: BPF prog-id=35 op=UNLOAD Dec 16 13:04:19.206000 audit: BPF prog-id=36 op=UNLOAD Dec 16 13:04:19.207000 audit: BPF prog-id=58 op=LOAD Dec 16 13:04:19.207000 audit: BPF prog-id=37 op=UNLOAD Dec 16 13:04:19.207000 audit: BPF prog-id=59 op=LOAD Dec 16 13:04:19.207000 audit: BPF prog-id=60 op=LOAD Dec 16 13:04:19.207000 audit: BPF prog-id=38 op=UNLOAD Dec 16 13:04:19.207000 audit: BPF prog-id=39 op=UNLOAD Dec 16 13:04:19.207000 audit: BPF prog-id=61 op=LOAD Dec 16 13:04:19.207000 audit: BPF prog-id=62 op=LOAD Dec 16 13:04:19.207000 audit: BPF prog-id=45 op=UNLOAD Dec 16 13:04:19.207000 audit: BPF prog-id=46 op=UNLOAD Dec 16 13:04:19.215586 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 16 13:04:19.215000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:04:19.224551 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 16 13:04:19.234195 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Dec 16 13:04:19.237967 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Dec 16 13:04:19.242686 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Dec 16 13:04:19.246393 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Dec 16 13:04:19.251094 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 13:04:19.251244 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 16 13:04:19.256434 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 16 13:04:19.259686 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 16 13:04:19.271496 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 16 13:04:19.273320 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 16 13:04:19.273618 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. 
Dec 16 13:04:19.273719 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 16 13:04:19.273807 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 13:04:19.275572 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 16 13:04:19.275793 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 16 13:04:19.277000 audit[2426]: SYSTEM_BOOT pid=2426 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Dec 16 13:04:19.277000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:04:19.277000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:04:19.279584 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 16 13:04:19.282523 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 16 13:04:19.283000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:04:19.283000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:04:19.285588 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 16 13:04:19.285849 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 16 13:04:19.286000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:04:19.286000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:04:19.295404 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 13:04:19.295592 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 16 13:04:19.297646 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 16 13:04:19.300522 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 16 13:04:19.305686 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 16 13:04:19.306174 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Dec 16 13:04:19.306398 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Dec 16 13:04:19.306545 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 16 13:04:19.306681 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 13:04:19.310420 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 16 13:04:19.313467 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 16 13:04:19.313000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:04:19.313000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:04:19.317854 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Dec 16 13:04:19.319000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:04:19.321099 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 16 13:04:19.321313 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 16 13:04:19.330000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:04:19.330000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:04:19.331842 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 16 13:04:19.332021 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 16 13:04:19.333000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:04:19.333000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:04:19.341036 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 13:04:19.341390 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 16 13:04:19.342524 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 16 13:04:19.354050 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 16 13:04:19.358518 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
Dec 16 13:04:19.361311 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 16 13:04:19.363552 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 16 13:04:19.363748 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Dec 16 13:04:19.363858 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 16 13:04:19.364016 systemd[1]: Reached target time-set.target - System Time Set. Dec 16 13:04:19.365723 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 13:04:19.367705 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Dec 16 13:04:19.369000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:04:19.370870 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 16 13:04:19.372455 systemd-networkd[2150]: eth0: Gained IPv6LL Dec 16 13:04:19.373643 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 16 13:04:19.375000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:04:19.375000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:04:19.376925 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Dec 16 13:04:19.378000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:04:19.380115 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 16 13:04:19.380243 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 16 13:04:19.382000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:04:19.382000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:04:19.383772 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 16 13:04:19.383908 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 16 13:04:19.385000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 13:04:19.385000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:04:19.386707 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 16 13:04:19.386843 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 16 13:04:19.388000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:04:19.388000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:04:19.391850 systemd[1]: Finished ensure-sysext.service. Dec 16 13:04:19.391000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:04:19.395146 systemd[1]: Reached target network-online.target - Network is Online. Dec 16 13:04:19.396698 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 16 13:04:19.396745 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 16 13:04:19.640000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Dec 16 13:04:19.640000 audit[2470]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffec6030da0 a2=420 a3=0 items=0 ppid=2422 pid=2470 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:19.640000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 16 13:04:19.642210 augenrules[2470]: No rules Dec 16 13:04:19.642473 systemd[1]: audit-rules.service: Deactivated successfully. Dec 16 13:04:19.642726 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 16 13:04:20.173153 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Dec 16 13:04:20.174969 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 16 13:04:25.411269 ldconfig[2424]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 16 13:04:25.424861 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Dec 16 13:04:25.429101 systemd[1]: Starting systemd-update-done.service - Update is Completed... Dec 16 13:04:25.445541 systemd[1]: Finished systemd-update-done.service - Update is Completed. Dec 16 13:04:25.448595 systemd[1]: Reached target sysinit.target - System Initialization. Dec 16 13:04:25.451635 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. 
Dec 16 13:04:25.454415 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Dec 16 13:04:25.455839 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Dec 16 13:04:25.457227 systemd[1]: Started logrotate.timer - Daily rotation of log files. Dec 16 13:04:25.458367 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Dec 16 13:04:25.459851 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update. Dec 16 13:04:25.462436 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update. Dec 16 13:04:25.465391 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Dec 16 13:04:25.468410 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 16 13:04:25.468445 systemd[1]: Reached target paths.target - Path Units. Dec 16 13:04:25.469598 systemd[1]: Reached target timers.target - Timer Units. Dec 16 13:04:25.473671 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Dec 16 13:04:25.478527 systemd[1]: Starting docker.socket - Docker Socket for the API... Dec 16 13:04:25.483137 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Dec 16 13:04:25.486513 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Dec 16 13:04:25.488100 systemd[1]: Reached target ssh-access.target - SSH Access Available. Dec 16 13:04:25.495833 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Dec 16 13:04:25.499724 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Dec 16 13:04:25.502998 systemd[1]: Listening on docker.socket - Docker Socket for the API. Dec 16 13:04:25.507368 systemd[1]: Reached target sockets.target - Socket Units. Dec 16 13:04:25.510397 systemd[1]: Reached target basic.target - Basic System. Dec 16 13:04:25.513440 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Dec 16 13:04:25.513467 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Dec 16 13:04:25.530286 systemd[1]: Starting chronyd.service - NTP client/server... Dec 16 13:04:25.534241 systemd[1]: Starting containerd.service - containerd container runtime... Dec 16 13:04:25.540118 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Dec 16 13:04:25.544489 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Dec 16 13:04:25.547691 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Dec 16 13:04:25.551401 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Dec 16 13:04:25.556485 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Dec 16 13:04:25.559544 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Dec 16 13:04:25.561047 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... 
Dec 16 13:04:25.562987 systemd[1]: hv_fcopy_uio_daemon.service - Hyper-V FCOPY UIO daemon was skipped because of an unmet condition check (ConditionPathExists=/sys/bus/vmbus/devices/eb765408-105f-49b6-b4aa-c123b64d17d4/uio). Dec 16 13:04:25.565608 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. Dec 16 13:04:25.567442 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). Dec 16 13:04:25.573430 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 13:04:25.576610 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Dec 16 13:04:25.587945 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Dec 16 13:04:25.595000 jq[2493]: false Dec 16 13:04:25.595459 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Dec 16 13:04:25.598919 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Dec 16 13:04:25.601974 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Dec 16 13:04:25.608698 extend-filesystems[2494]: Found /dev/nvme0n1p6 Dec 16 13:04:25.611534 systemd[1]: Starting systemd-logind.service - User Login Management... Dec 16 13:04:25.614465 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 16 13:04:25.614860 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 16 13:04:25.617103 google_oslogin_nss_cache[2495]: oslogin_cache_refresh[2495]: Refreshing passwd entry cache Dec 16 13:04:25.617895 systemd[1]: Starting update-engine.service - Update Engine... Dec 16 13:04:25.617996 oslogin_cache_refresh[2495]: Refreshing passwd entry cache Dec 16 13:04:25.620574 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Dec 16 13:04:25.625314 KVP[2496]: KVP starting; pid is:2496 Dec 16 13:04:25.626143 chronyd[2485]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +NTS +SECHASH +IPV6 -DEBUG) Dec 16 13:04:25.628912 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Dec 16 13:04:25.631454 KVP[2496]: KVP LIC Version: 3.1 Dec 16 13:04:25.631139 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 16 13:04:25.631598 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Dec 16 13:04:25.631676 chronyd[2485]: Timezone right/UTC failed leap second check, ignoring Dec 16 13:04:25.631811 chronyd[2485]: Loaded seccomp filter (level 2) Dec 16 13:04:25.632387 kernel: hv_utils: KVP IC version 4.0 Dec 16 13:04:25.636622 systemd[1]: Started chronyd.service - NTP client/server. Dec 16 13:04:25.639779 extend-filesystems[2494]: Found /dev/nvme0n1p9 Dec 16 13:04:25.645650 extend-filesystems[2494]: Checking size of /dev/nvme0n1p9 Dec 16 13:04:25.641878 oslogin_cache_refresh[2495]: Failure getting users, quitting Dec 16 13:04:25.647197 google_oslogin_nss_cache[2495]: oslogin_cache_refresh[2495]: Failure getting users, quitting Dec 16 13:04:25.647197 google_oslogin_nss_cache[2495]: oslogin_cache_refresh[2495]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. 
Dec 16 13:04:25.647197 google_oslogin_nss_cache[2495]: oslogin_cache_refresh[2495]: Refreshing group entry cache Dec 16 13:04:25.641899 oslogin_cache_refresh[2495]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Dec 16 13:04:25.641947 oslogin_cache_refresh[2495]: Refreshing group entry cache Dec 16 13:04:25.649281 jq[2510]: true Dec 16 13:04:25.656474 google_oslogin_nss_cache[2495]: oslogin_cache_refresh[2495]: Failure getting groups, quitting Dec 16 13:04:25.656529 google_oslogin_nss_cache[2495]: oslogin_cache_refresh[2495]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Dec 16 13:04:25.656475 oslogin_cache_refresh[2495]: Failure getting groups, quitting Dec 16 13:04:25.656484 oslogin_cache_refresh[2495]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Dec 16 13:04:25.660136 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Dec 16 13:04:25.661442 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Dec 16 13:04:25.684143 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 16 13:04:25.684933 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Dec 16 13:04:25.693882 systemd[1]: motdgen.service: Deactivated successfully. Dec 16 13:04:25.694508 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Dec 16 13:04:25.707361 jq[2525]: true Dec 16 13:04:25.709369 update_engine[2508]: I20251216 13:04:25.709286 2508 main.cc:92] Flatcar Update Engine starting Dec 16 13:04:25.713522 extend-filesystems[2494]: Resized partition /dev/nvme0n1p9 Dec 16 13:04:25.724498 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Dec 16 13:04:25.740625 tar[2515]: linux-amd64/LICENSE Dec 16 13:04:25.740797 tar[2515]: linux-amd64/helm Dec 16 13:04:25.749965 extend-filesystems[2555]: resize2fs 1.47.3 (8-Jul-2025) Dec 16 13:04:25.761703 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 6359552 to 6376955 blocks Dec 16 13:04:25.820614 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 6376955 Dec 16 13:04:25.773754 systemd-logind[2506]: New seat seat0. Dec 16 13:04:25.821097 systemd-logind[2506]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard) Dec 16 13:04:25.822681 systemd[1]: Started systemd-logind.service - User Login Management. Dec 16 13:04:25.843612 extend-filesystems[2555]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Dec 16 13:04:25.843612 extend-filesystems[2555]: old_desc_blocks = 4, new_desc_blocks = 4 Dec 16 13:04:25.843612 extend-filesystems[2555]: The filesystem on /dev/nvme0n1p9 is now 6376955 (4k) blocks long. Dec 16 13:04:25.854757 extend-filesystems[2494]: Resized filesystem in /dev/nvme0n1p9 Dec 16 13:04:25.854158 dbus-daemon[2488]: [system] SELinux support is enabled Dec 16 13:04:25.845911 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 16 13:04:25.846141 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Dec 16 13:04:25.858637 systemd[1]: Started dbus.service - D-Bus System Message Bus. Dec 16 13:04:25.865951 bash[2570]: Updated "/home/core/.ssh/authorized_keys" Dec 16 13:04:25.868414 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. 
Dec 16 13:04:25.872361 update_engine[2508]: I20251216 13:04:25.872231 2508 update_check_scheduler.cc:74] Next update check in 2m32s Dec 16 13:04:25.872899 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Dec 16 13:04:25.872977 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 16 13:04:25.873001 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Dec 16 13:04:25.878296 dbus-daemon[2488]: [system] Successfully activated service 'org.freedesktop.systemd1' Dec 16 13:04:25.878472 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 16 13:04:25.878497 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Dec 16 13:04:25.880972 systemd[1]: Started update-engine.service - Update Engine. Dec 16 13:04:25.900799 systemd[1]: Started locksmithd.service - Cluster reboot manager. Dec 16 13:04:25.970313 coreos-metadata[2487]: Dec 16 13:04:25.968 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Dec 16 13:04:25.987751 coreos-metadata[2487]: Dec 16 13:04:25.987 INFO Fetch successful Dec 16 13:04:25.987751 coreos-metadata[2487]: Dec 16 13:04:25.987 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Dec 16 13:04:25.992310 coreos-metadata[2487]: Dec 16 13:04:25.991 INFO Fetch successful Dec 16 13:04:25.992310 coreos-metadata[2487]: Dec 16 13:04:25.992 INFO Fetching http://168.63.129.16/machine/471729c5-2de3-45be-8ff9-427ffd747401/6ef24510%2D0c3e%2D4977%2Daed7%2D6fd74ef9c804.%5Fci%2D4515.1.0%2Da%2D5ae2bb3665?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Dec 16 13:04:25.995520 coreos-metadata[2487]: Dec 16 13:04:25.995 INFO Fetch successful Dec 16 13:04:25.995641 coreos-metadata[2487]: Dec 16 13:04:25.995 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Dec 16 13:04:26.006006 coreos-metadata[2487]: Dec 16 13:04:26.005 INFO Fetch successful Dec 16 13:04:26.064398 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Dec 16 13:04:26.069080 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Dec 16 13:04:26.124015 sshd_keygen[2552]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 16 13:04:26.179865 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Dec 16 13:04:26.188634 systemd[1]: Starting issuegen.service - Generate /run/issue... Dec 16 13:04:26.194200 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Dec 16 13:04:26.229039 systemd[1]: issuegen.service: Deactivated successfully. Dec 16 13:04:26.229455 systemd[1]: Finished issuegen.service - Generate /run/issue. Dec 16 13:04:26.238606 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Dec 16 13:04:26.265509 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Dec 16 13:04:26.283569 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 16 13:04:26.288318 systemd[1]: Started getty@tty1.service - Getty on tty1. Dec 16 13:04:26.298526 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. 
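coreos-metadata is querying the two standard Azure endpoints: the wireserver at 168.63.129.16 and the Instance Metadata Service at 169.254.169.254. The IMDS fetch in the log can be reproduced by hand; IMDS requires the Metadata header and refuses proxied requests (URL copied from the log, header requirement is standard IMDS behavior):

    curl -s -H "Metadata: true" \
      "http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text"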
Dec 16 13:04:26.300662 systemd[1]: Reached target getty.target - Login Prompts. Dec 16 13:04:26.320378 locksmithd[2596]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 16 13:04:26.321560 tar[2515]: linux-amd64/README.md Dec 16 13:04:26.333943 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Dec 16 13:04:26.953546 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 13:04:26.966811 (kubelet)[2643]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 13:04:26.973409 containerd[2540]: time="2025-12-16T13:04:26Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Dec 16 13:04:26.974132 containerd[2540]: time="2025-12-16T13:04:26.974092240Z" level=info msg="starting containerd" revision=fcd43222d6b07379a4be9786bda52438f0dd16a1 version=v2.1.5 Dec 16 13:04:26.985130 containerd[2540]: time="2025-12-16T13:04:26.985093340Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="11.66µs" Dec 16 13:04:26.985130 containerd[2540]: time="2025-12-16T13:04:26.985123425Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Dec 16 13:04:26.985227 containerd[2540]: time="2025-12-16T13:04:26.985165469Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Dec 16 13:04:26.985227 containerd[2540]: time="2025-12-16T13:04:26.985178751Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Dec 16 13:04:26.985346 containerd[2540]: time="2025-12-16T13:04:26.985318834Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Dec 16 13:04:26.985371 containerd[2540]: time="2025-12-16T13:04:26.985348102Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 16 13:04:26.985423 containerd[2540]: time="2025-12-16T13:04:26.985407305Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 16 13:04:26.985423 containerd[2540]: time="2025-12-16T13:04:26.985418661Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 16 13:04:26.985630 containerd[2540]: time="2025-12-16T13:04:26.985610492Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 16 13:04:26.985630 containerd[2540]: time="2025-12-16T13:04:26.985624791Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 16 13:04:26.985688 containerd[2540]: time="2025-12-16T13:04:26.985637090Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 16 13:04:26.985688 containerd[2540]: time="2025-12-16T13:04:26.985645866Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Dec 16 
13:04:26.985796 containerd[2540]: time="2025-12-16T13:04:26.985778169Z" level=info msg="skip loading plugin" error="EROFS unsupported, please `modprobe erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Dec 16 13:04:26.985796 containerd[2540]: time="2025-12-16T13:04:26.985790289Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Dec 16 13:04:26.985867 containerd[2540]: time="2025-12-16T13:04:26.985852574Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Dec 16 13:04:26.986006 containerd[2540]: time="2025-12-16T13:04:26.985989691Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 16 13:04:26.986033 containerd[2540]: time="2025-12-16T13:04:26.986017378Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 16 13:04:26.986033 containerd[2540]: time="2025-12-16T13:04:26.986028428Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Dec 16 13:04:26.986072 containerd[2540]: time="2025-12-16T13:04:26.986060651Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Dec 16 13:04:26.986269 containerd[2540]: time="2025-12-16T13:04:26.986253489Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Dec 16 13:04:26.986318 containerd[2540]: time="2025-12-16T13:04:26.986304528Z" level=info msg="metadata content store policy set" policy=shared Dec 16 13:04:27.004969 containerd[2540]: time="2025-12-16T13:04:27.004628977Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Dec 16 13:04:27.004969 containerd[2540]: time="2025-12-16T13:04:27.004688799Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Dec 16 13:04:27.005072 containerd[2540]: time="2025-12-16T13:04:27.005030199Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Dec 16 13:04:27.005095 containerd[2540]: time="2025-12-16T13:04:27.005055139Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Dec 16 13:04:27.005095 containerd[2540]: time="2025-12-16T13:04:27.005089150Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Dec 16 13:04:27.005133 containerd[2540]: time="2025-12-16T13:04:27.005103930Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Dec 16 13:04:27.005133 containerd[2540]: time="2025-12-16T13:04:27.005116923Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Dec 16 13:04:27.005133 containerd[2540]: time="2025-12-16T13:04:27.005128070Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Dec 16 13:04:27.005208 containerd[2540]: time="2025-12-16T13:04:27.005160450Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Dec 16 13:04:27.005208 containerd[2540]: 
time="2025-12-16T13:04:27.005173649Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Dec 16 13:04:27.005208 containerd[2540]: time="2025-12-16T13:04:27.005187779Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Dec 16 13:04:27.005266 containerd[2540]: time="2025-12-16T13:04:27.005206182Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Dec 16 13:04:27.005458 containerd[2540]: time="2025-12-16T13:04:27.005386494Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Dec 16 13:04:27.005458 containerd[2540]: time="2025-12-16T13:04:27.005410394Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Dec 16 13:04:27.005626 containerd[2540]: time="2025-12-16T13:04:27.005614023Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Dec 16 13:04:27.005707 containerd[2540]: time="2025-12-16T13:04:27.005696590Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Dec 16 13:04:27.005823 containerd[2540]: time="2025-12-16T13:04:27.005758134Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Dec 16 13:04:27.005823 containerd[2540]: time="2025-12-16T13:04:27.005772916Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Dec 16 13:04:27.005823 containerd[2540]: time="2025-12-16T13:04:27.005796146Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Dec 16 13:04:27.005823 containerd[2540]: time="2025-12-16T13:04:27.005807829Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Dec 16 13:04:27.005946 containerd[2540]: time="2025-12-16T13:04:27.005936438Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Dec 16 13:04:27.005985 containerd[2540]: time="2025-12-16T13:04:27.005977976Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Dec 16 13:04:27.006035 containerd[2540]: time="2025-12-16T13:04:27.006027963Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Dec 16 13:04:27.006073 containerd[2540]: time="2025-12-16T13:04:27.006066854Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Dec 16 13:04:27.006186 containerd[2540]: time="2025-12-16T13:04:27.006116356Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Dec 16 13:04:27.006186 containerd[2540]: time="2025-12-16T13:04:27.006143292Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Dec 16 13:04:27.006272 containerd[2540]: time="2025-12-16T13:04:27.006261909Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Dec 16 13:04:27.006322 containerd[2540]: time="2025-12-16T13:04:27.006315524Z" level=info msg="Start snapshots syncer" Dec 16 13:04:27.006394 containerd[2540]: time="2025-12-16T13:04:27.006380694Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Dec 16 13:04:27.007711 containerd[2540]: time="2025-12-16T13:04:27.006715839Z" level=info 
msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Dec 16 13:04:27.007711 containerd[2540]: time="2025-12-16T13:04:27.006778666Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Dec 16 13:04:27.007926 containerd[2540]: time="2025-12-16T13:04:27.006837404Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Dec 16 13:04:27.007926 containerd[2540]: time="2025-12-16T13:04:27.006956209Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Dec 16 13:04:27.007926 containerd[2540]: time="2025-12-16T13:04:27.006978146Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Dec 16 13:04:27.007926 containerd[2540]: time="2025-12-16T13:04:27.006990126Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Dec 16 13:04:27.007926 containerd[2540]: time="2025-12-16T13:04:27.007001528Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Dec 16 13:04:27.007926 containerd[2540]: time="2025-12-16T13:04:27.007020549Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Dec 16 13:04:27.007926 containerd[2540]: time="2025-12-16T13:04:27.007033225Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Dec 16 13:04:27.007926 containerd[2540]: time="2025-12-16T13:04:27.007045249Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Dec 16 13:04:27.007926 containerd[2540]: time="2025-12-16T13:04:27.007056140Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version 
type=io.containerd.grpc.v1 Dec 16 13:04:27.007926 containerd[2540]: time="2025-12-16T13:04:27.007068681Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Dec 16 13:04:27.007926 containerd[2540]: time="2025-12-16T13:04:27.007094274Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 16 13:04:27.007926 containerd[2540]: time="2025-12-16T13:04:27.007111274Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 16 13:04:27.007926 containerd[2540]: time="2025-12-16T13:04:27.007122196Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 16 13:04:27.008185 containerd[2540]: time="2025-12-16T13:04:27.007133627Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 16 13:04:27.008185 containerd[2540]: time="2025-12-16T13:04:27.007149955Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Dec 16 13:04:27.008185 containerd[2540]: time="2025-12-16T13:04:27.007164802Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Dec 16 13:04:27.008185 containerd[2540]: time="2025-12-16T13:04:27.007177489Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Dec 16 13:04:27.008185 containerd[2540]: time="2025-12-16T13:04:27.007194338Z" level=info msg="runtime interface created" Dec 16 13:04:27.008185 containerd[2540]: time="2025-12-16T13:04:27.007200566Z" level=info msg="created NRI interface" Dec 16 13:04:27.008185 containerd[2540]: time="2025-12-16T13:04:27.007209239Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Dec 16 13:04:27.008185 containerd[2540]: time="2025-12-16T13:04:27.007220208Z" level=info msg="Connect containerd service" Dec 16 13:04:27.008185 containerd[2540]: time="2025-12-16T13:04:27.007247655Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 16 13:04:27.008382 containerd[2540]: time="2025-12-16T13:04:27.008192632Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 16 13:04:27.493090 kubelet[2643]: E1216 13:04:27.493035 2643 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 13:04:27.495180 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 13:04:27.495297 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 13:04:27.495979 systemd[1]: kubelet.service: Consumed 864ms CPU time, 258M memory peak. 
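Both errors at the end of this block are expected on a node that has not yet been joined to a cluster: kubelet exits because /var/lib/kubelet/config.yaml does not exist, and containerd finds no CNI config in /etc/cni/net.d. On a kubeadm-style setup (which the KUBELET_KUBEADM_ARGS variable referenced earlier suggests, though that is an inference), both appear once kubeadm init/join runs and a network add-on is installed; until then the kubelet restart loop seen below is harmless. A quick way to see what is still missing:

    ls -l /var/lib/kubelet/config.yaml 2>&1     # written by kubeadm init/join
    ls -l /etc/cni/net.d/ 2>&1                  # populated by the CNI plugin installer
    systemctl status kubelet --no-pager | head  # shows the failure and restart counter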
Dec 16 13:04:27.972833 containerd[2540]: time="2025-12-16T13:04:27.972763392Z" level=info msg="Start subscribing containerd event" Dec 16 13:04:27.973246 containerd[2540]: time="2025-12-16T13:04:27.973054661Z" level=info msg="Start recovering state" Dec 16 13:04:27.973246 containerd[2540]: time="2025-12-16T13:04:27.973085679Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 16 13:04:27.973246 containerd[2540]: time="2025-12-16T13:04:27.973126559Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 16 13:04:27.973246 containerd[2540]: time="2025-12-16T13:04:27.973215906Z" level=info msg="Start event monitor" Dec 16 13:04:27.973246 containerd[2540]: time="2025-12-16T13:04:27.973231332Z" level=info msg="Start cni network conf syncer for default" Dec 16 13:04:27.973563 containerd[2540]: time="2025-12-16T13:04:27.973411726Z" level=info msg="Start streaming server" Dec 16 13:04:27.973563 containerd[2540]: time="2025-12-16T13:04:27.973425751Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Dec 16 13:04:27.973563 containerd[2540]: time="2025-12-16T13:04:27.973434151Z" level=info msg="runtime interface starting up..." Dec 16 13:04:27.973563 containerd[2540]: time="2025-12-16T13:04:27.973442275Z" level=info msg="starting plugins..." Dec 16 13:04:27.973563 containerd[2540]: time="2025-12-16T13:04:27.973458198Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Dec 16 13:04:27.974214 containerd[2540]: time="2025-12-16T13:04:27.974034988Z" level=info msg="containerd successfully booted in 1.001705s" Dec 16 13:04:27.974240 systemd[1]: Started containerd.service - containerd container runtime. Dec 16 13:04:27.978881 systemd[1]: Reached target multi-user.target - Multi-User System. Dec 16 13:04:27.982918 systemd[1]: Startup finished in 4.613s (kernel) + 47.839s (initrd) + 14.321s (userspace) = 1min 6.773s. 
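The "Startup finished" line reports the same kernel/initrd/userspace split that systemd-analyze prints after boot; if the roughly 48 s spent in the initrd needed investigation, the usual follow-ups are:

    systemd-analyze                  # overall split, matching the log line above
    systemd-analyze blame            # per-unit startup time, slowest first
    systemd-analyze critical-chain   # the dependency chain that gated boot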
Dec 16 13:04:28.199485 waagent[2624]: 2025-12-16T13:04:28.199379Z INFO Daemon Daemon Azure Linux Agent Version: 2.12.0.4 Dec 16 13:04:28.200685 waagent[2624]: 2025-12-16T13:04:28.200232Z INFO Daemon Daemon OS: flatcar 4515.1.0 Dec 16 13:04:28.202000 waagent[2624]: 2025-12-16T13:04:28.201927Z INFO Daemon Daemon Python: 3.11.13 Dec 16 13:04:28.203215 waagent[2624]: 2025-12-16T13:04:28.203146Z INFO Daemon Daemon Run daemon Dec 16 13:04:28.204325 waagent[2624]: 2025-12-16T13:04:28.204247Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4515.1.0' Dec 16 13:04:28.205709 waagent[2624]: 2025-12-16T13:04:28.204738Z INFO Daemon Daemon Using waagent for provisioning Dec 16 13:04:28.205709 waagent[2624]: 2025-12-16T13:04:28.204955Z INFO Daemon Daemon Activate resource disk Dec 16 13:04:28.205709 waagent[2624]: 2025-12-16T13:04:28.205058Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Dec 16 13:04:28.216177 waagent[2624]: 2025-12-16T13:04:28.206795Z INFO Daemon Daemon Found device: None Dec 16 13:04:28.216177 waagent[2624]: 2025-12-16T13:04:28.206898Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Dec 16 13:04:28.216177 waagent[2624]: 2025-12-16T13:04:28.207126Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Dec 16 13:04:28.216177 waagent[2624]: 2025-12-16T13:04:28.207785Z INFO Daemon Daemon Clean protocol and wireserver endpoint Dec 16 13:04:28.216177 waagent[2624]: 2025-12-16T13:04:28.208063Z INFO Daemon Daemon Running default provisioning handler Dec 16 13:04:28.227762 waagent[2624]: 2025-12-16T13:04:28.216283Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Dec 16 13:04:28.227762 waagent[2624]: 2025-12-16T13:04:28.217662Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Dec 16 13:04:28.227762 waagent[2624]: 2025-12-16T13:04:28.217944Z INFO Daemon Daemon cloud-init is enabled: False Dec 16 13:04:28.227762 waagent[2624]: 2025-12-16T13:04:28.218191Z INFO Daemon Daemon Copying ovf-env.xml Dec 16 13:04:28.309364 waagent[2624]: 2025-12-16T13:04:28.307499Z INFO Daemon Daemon Successfully mounted dvd Dec 16 13:04:28.363751 waagent[2624]: 2025-12-16T13:04:28.363710Z INFO Daemon Daemon Detect protocol endpoint Dec 16 13:04:28.364105 waagent[2624]: 2025-12-16T13:04:28.364083Z INFO Daemon Daemon Clean protocol and wireserver endpoint Dec 16 13:04:28.364566 waagent[2624]: 2025-12-16T13:04:28.364546Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler Dec 16 13:04:28.364871 waagent[2624]: 2025-12-16T13:04:28.364852Z INFO Daemon Daemon Test for route to 168.63.129.16 Dec 16 13:04:28.365209 waagent[2624]: 2025-12-16T13:04:28.365190Z INFO Daemon Daemon Route to 168.63.129.16 exists Dec 16 13:04:28.365326 waagent[2624]: 2025-12-16T13:04:28.365312Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Dec 16 13:04:28.367755 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. 
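The "Test for route to 168.63.129.16" step is waagent confirming the VM still has a usable route to the wireserver before talking to it. The same check by hand:

    ip route get 168.63.129.16    # should resolve via eth0 and the default gateway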
Dec 16 13:04:28.401822 waagent[2624]: 2025-12-16T13:04:28.401790Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Dec 16 13:04:28.402790 waagent[2624]: 2025-12-16T13:04:28.402216Z INFO Daemon Daemon Wire protocol version:2012-11-30 Dec 16 13:04:28.402790 waagent[2624]: 2025-12-16T13:04:28.402302Z INFO Daemon Daemon Server preferred version:2015-04-05 Dec 16 13:04:28.549455 waagent[2624]: 2025-12-16T13:04:28.548280Z INFO Daemon Daemon Initializing goal state during protocol detection Dec 16 13:04:28.549455 waagent[2624]: 2025-12-16T13:04:28.548853Z INFO Daemon Daemon Forcing an update of the goal state. Dec 16 13:04:28.552777 waagent[2624]: 2025-12-16T13:04:28.552730Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Dec 16 13:04:28.566014 waagent[2624]: 2025-12-16T13:04:28.565981Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.179 Dec 16 13:04:28.567568 waagent[2624]: 2025-12-16T13:04:28.566897Z INFO Daemon Dec 16 13:04:28.567568 waagent[2624]: 2025-12-16T13:04:28.567035Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 3487d14c-6779-40ba-aead-cbb46a446cec eTag: 1132125187542939244 source: Fabric] Dec 16 13:04:28.567568 waagent[2624]: 2025-12-16T13:04:28.567299Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Dec 16 13:04:28.573320 waagent[2624]: 2025-12-16T13:04:28.567754Z INFO Daemon Dec 16 13:04:28.573320 waagent[2624]: 2025-12-16T13:04:28.568190Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Dec 16 13:04:28.576495 waagent[2624]: 2025-12-16T13:04:28.576472Z INFO Daemon Daemon Downloading artifacts profile blob Dec 16 13:04:28.589694 login[2629]: pam_lastlog(login:session): file /var/log/lastlog is locked/write, retrying Dec 16 13:04:28.591190 login[2630]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Dec 16 13:04:28.600587 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Dec 16 13:04:28.603555 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Dec 16 13:04:28.606178 systemd-logind[2506]: New session 2 of user core. Dec 16 13:04:28.626910 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Dec 16 13:04:28.629241 systemd[1]: Starting user@500.service - User Manager for UID 500... Dec 16 13:04:28.654630 waagent[2624]: 2025-12-16T13:04:28.654582Z INFO Daemon Downloaded certificate {'thumbprint': 'E1C4F9C963FEFF30BCB12913A23AF3DEAF4BCB02', 'hasPrivateKey': True} Dec 16 13:04:28.656710 waagent[2624]: 2025-12-16T13:04:28.656678Z INFO Daemon Fetch goal state completed Dec 16 13:04:28.658302 (systemd)[2680]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 16 13:04:28.660561 systemd-logind[2506]: New session c1 of user core. Dec 16 13:04:28.665975 waagent[2624]: 2025-12-16T13:04:28.665927Z INFO Daemon Daemon Starting provisioning Dec 16 13:04:28.667540 waagent[2624]: 2025-12-16T13:04:28.667491Z INFO Daemon Daemon Handle ovf-env.xml. 
Dec 16 13:04:28.669081 waagent[2624]: 2025-12-16T13:04:28.667895Z INFO Daemon Daemon Set hostname [ci-4515.1.0-a-5ae2bb3665] Dec 16 13:04:28.672960 waagent[2624]: 2025-12-16T13:04:28.671897Z INFO Daemon Daemon Publish hostname [ci-4515.1.0-a-5ae2bb3665] Dec 16 13:04:28.672960 waagent[2624]: 2025-12-16T13:04:28.672280Z INFO Daemon Daemon Examine /proc/net/route for primary interface Dec 16 13:04:28.672960 waagent[2624]: 2025-12-16T13:04:28.672563Z INFO Daemon Daemon Primary interface is [eth0] Dec 16 13:04:28.683307 systemd-networkd[2150]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 16 13:04:28.683545 systemd-networkd[2150]: eth0: Reconfiguring with /usr/lib/systemd/network/zz-default.network. Dec 16 13:04:28.683674 systemd-networkd[2150]: eth0: DHCP lease lost Dec 16 13:04:28.704797 waagent[2624]: 2025-12-16T13:04:28.700423Z INFO Daemon Daemon Create user account if not exists Dec 16 13:04:28.704797 waagent[2624]: 2025-12-16T13:04:28.701706Z INFO Daemon Daemon User core already exists, skip useradd Dec 16 13:04:28.704797 waagent[2624]: 2025-12-16T13:04:28.701806Z INFO Daemon Daemon Configure sudoer Dec 16 13:04:28.708784 waagent[2624]: 2025-12-16T13:04:28.708739Z INFO Daemon Daemon Configure sshd Dec 16 13:04:28.711399 systemd-networkd[2150]: eth0: DHCPv4 address 10.200.4.43/24, gateway 10.200.4.1 acquired from 168.63.129.16 Dec 16 13:04:28.714237 waagent[2624]: 2025-12-16T13:04:28.714197Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Dec 16 13:04:28.718112 waagent[2624]: 2025-12-16T13:04:28.717615Z INFO Daemon Daemon Deploy ssh public key. Dec 16 13:04:28.788437 systemd[2680]: Queued start job for default target default.target. Dec 16 13:04:28.795176 systemd[2680]: Created slice app.slice - User Application Slice. Dec 16 13:04:28.795210 systemd[2680]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories. Dec 16 13:04:28.795223 systemd[2680]: Reached target paths.target - Paths. Dec 16 13:04:28.795259 systemd[2680]: Reached target timers.target - Timers. Dec 16 13:04:28.796169 systemd[2680]: Starting dbus.socket - D-Bus User Message Bus Socket... Dec 16 13:04:28.798472 systemd[2680]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories... Dec 16 13:04:28.807857 systemd[2680]: Listening on dbus.socket - D-Bus User Message Bus Socket. Dec 16 13:04:28.807939 systemd[2680]: Reached target sockets.target - Sockets. Dec 16 13:04:28.808465 systemd[2680]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories. Dec 16 13:04:28.808555 systemd[2680]: Reached target basic.target - Basic System. Dec 16 13:04:28.808604 systemd[2680]: Reached target default.target - Main User Target. Dec 16 13:04:28.808629 systemd[2680]: Startup finished in 143ms. Dec 16 13:04:28.808813 systemd[1]: Started user@500.service - User Manager for UID 500. Dec 16 13:04:28.815614 systemd[1]: Started session-2.scope - Session 2 of User core. Dec 16 13:04:29.590180 login[2629]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Dec 16 13:04:29.595139 systemd-logind[2506]: New session 1 of user core. Dec 16 13:04:29.604522 systemd[1]: Started session-1.scope - Session 1 of User core. 
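The brief "DHCP lease lost" from systemd-networkd in the middle of provisioning appears to be waagent renewing DHCP so the hostname it just set (ci-4515.1.0-a-5ae2bb3665) is published to the platform; the lease for 10.200.4.43 is re-acquired immediately afterwards. If that needed confirming, the current lease and address state are visible with:

    networkctl status eth0    # shows the DHCPv4 lease, gateway and DNS in use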
Dec 16 13:04:29.813775 waagent[2624]: 2025-12-16T13:04:29.813716Z INFO Daemon Daemon Provisioning complete Dec 16 13:04:29.824177 waagent[2624]: 2025-12-16T13:04:29.824135Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Dec 16 13:04:29.825516 waagent[2624]: 2025-12-16T13:04:29.825482Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Dec 16 13:04:29.827524 waagent[2624]: 2025-12-16T13:04:29.827495Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.12.0.4 is the most current agent Dec 16 13:04:29.936404 waagent[2722]: 2025-12-16T13:04:29.936316Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.12.0.4) Dec 16 13:04:29.936721 waagent[2722]: 2025-12-16T13:04:29.936436Z INFO ExtHandler ExtHandler OS: flatcar 4515.1.0 Dec 16 13:04:29.936721 waagent[2722]: 2025-12-16T13:04:29.936478Z INFO ExtHandler ExtHandler Python: 3.11.13 Dec 16 13:04:29.936721 waagent[2722]: 2025-12-16T13:04:29.936520Z INFO ExtHandler ExtHandler CPU Arch: x86_64 Dec 16 13:04:29.979594 waagent[2722]: 2025-12-16T13:04:29.979543Z INFO ExtHandler ExtHandler Distro: flatcar-4515.1.0; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.13; Arch: x86_64; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.22.0; Dec 16 13:04:29.979740 waagent[2722]: 2025-12-16T13:04:29.979697Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Dec 16 13:04:29.979796 waagent[2722]: 2025-12-16T13:04:29.979770Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Dec 16 13:04:29.985942 waagent[2722]: 2025-12-16T13:04:29.985892Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Dec 16 13:04:29.991677 waagent[2722]: 2025-12-16T13:04:29.991642Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.179 Dec 16 13:04:29.992017 waagent[2722]: 2025-12-16T13:04:29.991989Z INFO ExtHandler Dec 16 13:04:29.992060 waagent[2722]: 2025-12-16T13:04:29.992039Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 10a45bbf-251c-40b1-9232-0ce26dcdc533 eTag: 1132125187542939244 source: Fabric] Dec 16 13:04:29.992250 waagent[2722]: 2025-12-16T13:04:29.992226Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Dec 16 13:04:29.992597 waagent[2722]: 2025-12-16T13:04:29.992571Z INFO ExtHandler Dec 16 13:04:29.992631 waagent[2722]: 2025-12-16T13:04:29.992612Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Dec 16 13:04:29.997438 waagent[2722]: 2025-12-16T13:04:29.997408Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Dec 16 13:04:30.069102 waagent[2722]: 2025-12-16T13:04:30.069054Z INFO ExtHandler Downloaded certificate {'thumbprint': 'E1C4F9C963FEFF30BCB12913A23AF3DEAF4BCB02', 'hasPrivateKey': True} Dec 16 13:04:30.069458 waagent[2722]: 2025-12-16T13:04:30.069432Z INFO ExtHandler Fetch goal state completed Dec 16 13:04:30.084920 waagent[2722]: 2025-12-16T13:04:30.084877Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.4.3 30 Sep 2025 (Library: OpenSSL 3.4.3 30 Sep 2025) Dec 16 13:04:30.088967 waagent[2722]: 2025-12-16T13:04:30.088922Z INFO ExtHandler ExtHandler WALinuxAgent-2.12.0.4 running as process 2722 Dec 16 13:04:30.089071 waagent[2722]: 2025-12-16T13:04:30.089048Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Dec 16 13:04:30.089305 waagent[2722]: 2025-12-16T13:04:30.089281Z INFO ExtHandler ExtHandler ******** AutoUpdate.UpdateToLatestVersion is set to False, not processing the operation ******** Dec 16 13:04:30.090386 waagent[2722]: 2025-12-16T13:04:30.090330Z INFO ExtHandler ExtHandler [CGI] Cgroup monitoring is not supported on ['flatcar', '4515.1.0', '', 'Flatcar Container Linux by Kinvolk'] Dec 16 13:04:30.090712 waagent[2722]: 2025-12-16T13:04:30.090679Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case distro: ['flatcar', '4515.1.0', '', 'Flatcar Container Linux by Kinvolk'] went from supported to unsupported Dec 16 13:04:30.090834 waagent[2722]: 2025-12-16T13:04:30.090809Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False Dec 16 13:04:30.091229 waagent[2722]: 2025-12-16T13:04:30.091201Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Dec 16 13:04:30.118934 waagent[2722]: 2025-12-16T13:04:30.118908Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Dec 16 13:04:30.119075 waagent[2722]: 2025-12-16T13:04:30.119054Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Dec 16 13:04:30.125058 waagent[2722]: 2025-12-16T13:04:30.124701Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Dec 16 13:04:30.129975 systemd[1]: Reload requested from client PID 2737 ('systemctl') (unit waagent.service)... Dec 16 13:04:30.129988 systemd[1]: Reloading... Dec 16 13:04:30.205394 zram_generator::config[2779]: No configuration found. Dec 16 13:04:30.394649 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#80 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Dec 16 13:04:30.403371 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#83 cmd 0x4a status: scsi 0x0 srb 0x20 hv 0xc0000001 Dec 16 13:04:30.404292 systemd[1]: Reloading finished in 274 ms. 
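The two "AutoUpdate ... set to False" lines reflect settings in the agent's configuration file rather than anything decided at runtime; on most images that file is /etc/waagent.conf (path assumed here, it is not shown in the log). With both options disabled, the relevant lines would look like:

    grep -E '^AutoUpdate\.' /etc/waagent.conf
    # AutoUpdate.Enabled=n
    # AutoUpdate.UpdateToLatestVersion=n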
Dec 16 13:04:30.418254 waagent[2722]: 2025-12-16T13:04:30.417368Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Dec 16 13:04:30.418254 waagent[2722]: 2025-12-16T13:04:30.417559Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Dec 16 13:04:30.425358 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#86 cmd 0x4a status: scsi 0x0 srb 0x20 hv 0xc0000001 Dec 16 13:04:30.439356 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#88 cmd 0x4a status: scsi 0x0 srb 0x20 hv 0xc0000001 Dec 16 13:04:30.454361 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#91 cmd 0x4a status: scsi 0x0 srb 0x20 hv 0xc0000001 Dec 16 13:04:30.460359 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#93 cmd 0x4a status: scsi 0x0 srb 0x20 hv 0xc0000001 Dec 16 13:04:31.111590 waagent[2722]: 2025-12-16T13:04:31.111488Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Dec 16 13:04:31.112007 waagent[2722]: 2025-12-16T13:04:31.111946Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: 1. configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True] Dec 16 13:04:31.112913 waagent[2722]: 2025-12-16T13:04:31.112729Z INFO ExtHandler ExtHandler Starting env monitor service. Dec 16 13:04:31.112913 waagent[2722]: 2025-12-16T13:04:31.112872Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Dec 16 13:04:31.113130 waagent[2722]: 2025-12-16T13:04:31.113107Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Dec 16 13:04:31.113384 waagent[2722]: 2025-12-16T13:04:31.113358Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Dec 16 13:04:31.113655 waagent[2722]: 2025-12-16T13:04:31.113603Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Dec 16 13:04:31.113866 waagent[2722]: 2025-12-16T13:04:31.113838Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Dec 16 13:04:31.113938 waagent[2722]: 2025-12-16T13:04:31.113907Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Dec 16 13:04:31.113938 waagent[2722]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Dec 16 13:04:31.113938 waagent[2722]: eth0 00000000 0104C80A 0003 0 0 1024 00000000 0 0 0 Dec 16 13:04:31.113938 waagent[2722]: eth0 0004C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Dec 16 13:04:31.113938 waagent[2722]: eth0 0104C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Dec 16 13:04:31.113938 waagent[2722]: eth0 10813FA8 0104C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Dec 16 13:04:31.113938 waagent[2722]: eth0 FEA9FEA9 0104C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Dec 16 13:04:31.114088 waagent[2722]: 2025-12-16T13:04:31.113944Z INFO ExtHandler ExtHandler Start Extension Telemetry service. 
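The routing table dump above comes from /proc/net/route, which prints addresses as little-endian hex. Decoded, the entries are the default route via 10.200.4.1, the local 10.200.4.0/24 network, and host routes to 168.63.129.16 (wireserver) and 169.254.169.254 (IMDS). A small bash helper to decode them, for illustration:

    hex2ip() { printf '%d.%d.%d.%d\n' "0x${1:6:2}" "0x${1:4:2}" "0x${1:2:2}" "0x${1:0:2}"; }
    hex2ip 0104C80A    # 10.200.4.1      (default gateway)
    hex2ip 10813FA8    # 168.63.129.16   (wireserver)
    hex2ip FEA9FEA9    # 169.254.169.254 (IMDS)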
Dec 16 13:04:31.114448 waagent[2722]: 2025-12-16T13:04:31.114328Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Dec 16 13:04:31.114506 waagent[2722]: 2025-12-16T13:04:31.114426Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Dec 16 13:04:31.114864 waagent[2722]: 2025-12-16T13:04:31.114836Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Dec 16 13:04:31.114922 waagent[2722]: 2025-12-16T13:04:31.114904Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Dec 16 13:04:31.115158 waagent[2722]: 2025-12-16T13:04:31.115105Z INFO EnvHandler ExtHandler Configure routes Dec 16 13:04:31.115274 waagent[2722]: 2025-12-16T13:04:31.115256Z INFO EnvHandler ExtHandler Gateway:None Dec 16 13:04:31.115518 waagent[2722]: 2025-12-16T13:04:31.115498Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Dec 16 13:04:31.115964 waagent[2722]: 2025-12-16T13:04:31.115936Z INFO EnvHandler ExtHandler Routes:None Dec 16 13:04:31.131309 waagent[2722]: 2025-12-16T13:04:31.131269Z INFO ExtHandler ExtHandler Dec 16 13:04:31.131405 waagent[2722]: 2025-12-16T13:04:31.131329Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 5cd4367b-6988-4770-85ab-48dc54e180c3 correlation d537ec8c-c943-4013-8890-56d4f043ecc7 created: 2025-12-16T13:02:57.355560Z] Dec 16 13:04:31.131642 waagent[2722]: 2025-12-16T13:04:31.131617Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Dec 16 13:04:31.132011 waagent[2722]: 2025-12-16T13:04:31.131989Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 0 ms] Dec 16 13:04:31.207196 waagent[2722]: 2025-12-16T13:04:31.207013Z WARNING ExtHandler ExtHandler Failed to get firewall packets: 'iptables -w -t security -L OUTPUT --zero OUTPUT -nxv' failed: 2 (iptables v1.8.11 (nf_tables): Illegal option `--numeric' with this command Dec 16 13:04:31.207196 waagent[2722]: Try `iptables -h' or 'iptables --help' for more information.) 
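The "Failed to get firewall packets" warning comes from combining a list and a zero operation in one iptables invocation; iptables-nft (v1.8.11 here) appears to treat that as the zero command and then rejects -n. Splitting it into two calls returns the same counters without the error (table and chain taken from the failing command):

    iptables -w -t security -L OUTPUT -nxv   # list packet/byte counters numerically
    iptables -w -t security -Z OUTPUT        # zero the counters as a separate step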
Dec 16 13:04:31.207463 waagent[2722]: 2025-12-16T13:04:31.207434Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.12.0.4 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: EA6C1FA0-BDE4-4CC2-A539-5C14341036E8;DroppedPackets: -1;UpdateGSErrors: 0;AutoUpdate: 0;UpdateMode: SelfUpdate;] Dec 16 13:04:31.239484 waagent[2722]: 2025-12-16T13:04:31.239431Z INFO MonitorHandler ExtHandler Network interfaces: Dec 16 13:04:31.239484 waagent[2722]: Executing ['ip', '-a', '-o', 'link']: Dec 16 13:04:31.239484 waagent[2722]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Dec 16 13:04:31.239484 waagent[2722]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 7c:ed:8d:b7:e1:67 brd ff:ff:ff:ff:ff:ff\ alias Network Device\ altname enx7ced8db7e167 Dec 16 13:04:31.239484 waagent[2722]: 3: enP30832s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 7c:ed:8d:b7:e1:67 brd ff:ff:ff:ff:ff:ff\ altname enP30832p0s0 Dec 16 13:04:31.239484 waagent[2722]: Executing ['ip', '-4', '-a', '-o', 'address']: Dec 16 13:04:31.239484 waagent[2722]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Dec 16 13:04:31.239484 waagent[2722]: 2: eth0 inet 10.200.4.43/24 metric 1024 brd 10.200.4.255 scope global eth0\ valid_lft forever preferred_lft forever Dec 16 13:04:31.239484 waagent[2722]: Executing ['ip', '-6', '-a', '-o', 'address']: Dec 16 13:04:31.239484 waagent[2722]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Dec 16 13:04:31.239484 waagent[2722]: 2: eth0 inet6 fe80::7eed:8dff:feb7:e167/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Dec 16 13:04:31.285522 waagent[2722]: 2025-12-16T13:04:31.285475Z INFO EnvHandler ExtHandler Created firewall rules for the Azure Fabric: Dec 16 13:04:31.285522 waagent[2722]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Dec 16 13:04:31.285522 waagent[2722]: pkts bytes target prot opt in out source destination Dec 16 13:04:31.285522 waagent[2722]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Dec 16 13:04:31.285522 waagent[2722]: pkts bytes target prot opt in out source destination Dec 16 13:04:31.285522 waagent[2722]: Chain OUTPUT (policy ACCEPT 2 packets, 104 bytes) Dec 16 13:04:31.285522 waagent[2722]: pkts bytes target prot opt in out source destination Dec 16 13:04:31.285522 waagent[2722]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Dec 16 13:04:31.285522 waagent[2722]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Dec 16 13:04:31.285522 waagent[2722]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Dec 16 13:04:31.288038 waagent[2722]: 2025-12-16T13:04:31.287989Z INFO EnvHandler ExtHandler Current Firewall rules: Dec 16 13:04:31.288038 waagent[2722]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Dec 16 13:04:31.288038 waagent[2722]: pkts bytes target prot opt in out source destination Dec 16 13:04:31.288038 waagent[2722]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Dec 16 13:04:31.288038 waagent[2722]: pkts bytes target prot opt in out source destination Dec 16 13:04:31.288038 waagent[2722]: Chain OUTPUT (policy ACCEPT 2 packets, 104 bytes) Dec 16 13:04:31.288038 waagent[2722]: pkts bytes target prot opt in out source destination Dec 16 13:04:31.288038 waagent[2722]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Dec 16 13:04:31.288038 
waagent[2722]: 4 595 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Dec 16 13:04:31.288038 waagent[2722]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Dec 16 13:04:37.653216 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 16 13:04:37.655296 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 13:04:38.050526 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 13:04:38.053760 (kubelet)[2884]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 13:04:38.089533 kubelet[2884]: E1216 13:04:38.089496 2884 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 13:04:38.092581 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 13:04:38.092733 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 13:04:38.093118 systemd[1]: kubelet.service: Consumed 149ms CPU time, 110.2M memory peak. Dec 16 13:04:43.705232 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Dec 16 13:04:43.706571 systemd[1]: Started sshd@0-10.200.4.43:22-10.200.16.10:43298.service - OpenSSH per-connection server daemon (10.200.16.10:43298). Dec 16 13:04:44.353314 sshd[2892]: Accepted publickey for core from 10.200.16.10 port 43298 ssh2: RSA SHA256:XB2dojyG3S3qIBG43pJfS4qRaZI6Gzjw37XrOBzISwI Dec 16 13:04:44.354762 sshd-session[2892]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:04:44.359413 systemd-logind[2506]: New session 3 of user core. Dec 16 13:04:44.370518 systemd[1]: Started session-3.scope - Session 3 of User core. Dec 16 13:04:44.739789 systemd[1]: Started sshd@1-10.200.4.43:22-10.200.16.10:43302.service - OpenSSH per-connection server daemon (10.200.16.10:43302). Dec 16 13:04:45.248202 sshd[2898]: Accepted publickey for core from 10.200.16.10 port 43302 ssh2: RSA SHA256:XB2dojyG3S3qIBG43pJfS4qRaZI6Gzjw37XrOBzISwI Dec 16 13:04:45.249635 sshd-session[2898]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:04:45.254632 systemd-logind[2506]: New session 4 of user core. Dec 16 13:04:45.263536 systemd[1]: Started session-4.scope - Session 4 of User core. Dec 16 13:04:45.534040 sshd[2901]: Connection closed by 10.200.16.10 port 43302 Dec 16 13:04:45.534894 sshd-session[2898]: pam_unix(sshd:session): session closed for user core Dec 16 13:04:45.538598 systemd[1]: sshd@1-10.200.4.43:22-10.200.16.10:43302.service: Deactivated successfully. Dec 16 13:04:45.540242 systemd[1]: session-4.scope: Deactivated successfully. Dec 16 13:04:45.540948 systemd-logind[2506]: Session 4 logged out. Waiting for processes to exit. Dec 16 13:04:45.542213 systemd-logind[2506]: Removed session 4. Dec 16 13:04:45.638114 systemd[1]: Started sshd@2-10.200.4.43:22-10.200.16.10:43308.service - OpenSSH per-connection server daemon (10.200.16.10:43308). 
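The rule set listed a few lines above lives in the security table's OUTPUT chain (the table the counter query targeted) and only concerns traffic to the wireserver: DNS to 168.63.129.16 is allowed, root-owned traffic to it is allowed, and any other new connection to that address is dropped. As plain iptables commands those rules would be roughly the following sketch, not the agent's literal invocation:

    iptables -w -t security -A OUTPUT -d 168.63.129.16 -p tcp --dport 53 -j ACCEPT
    iptables -w -t security -A OUTPUT -d 168.63.129.16 -p tcp -m owner --uid-owner 0 -j ACCEPT
    iptables -w -t security -A OUTPUT -d 168.63.129.16 -p tcp -m conntrack --ctstate INVALID,NEW -j DROP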
Dec 16 13:04:46.149820 sshd[2907]: Accepted publickey for core from 10.200.16.10 port 43308 ssh2: RSA SHA256:XB2dojyG3S3qIBG43pJfS4qRaZI6Gzjw37XrOBzISwI Dec 16 13:04:46.151227 sshd-session[2907]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:04:46.156114 systemd-logind[2506]: New session 5 of user core. Dec 16 13:04:46.165529 systemd[1]: Started session-5.scope - Session 5 of User core. Dec 16 13:04:46.433803 sshd[2910]: Connection closed by 10.200.16.10 port 43308 Dec 16 13:04:46.434439 sshd-session[2907]: pam_unix(sshd:session): session closed for user core Dec 16 13:04:46.438377 systemd[1]: sshd@2-10.200.4.43:22-10.200.16.10:43308.service: Deactivated successfully. Dec 16 13:04:46.440152 systemd[1]: session-5.scope: Deactivated successfully. Dec 16 13:04:46.440963 systemd-logind[2506]: Session 5 logged out. Waiting for processes to exit. Dec 16 13:04:46.442222 systemd-logind[2506]: Removed session 5. Dec 16 13:04:46.538040 systemd[1]: Started sshd@3-10.200.4.43:22-10.200.16.10:43320.service - OpenSSH per-connection server daemon (10.200.16.10:43320). Dec 16 13:04:47.047720 sshd[2916]: Accepted publickey for core from 10.200.16.10 port 43320 ssh2: RSA SHA256:XB2dojyG3S3qIBG43pJfS4qRaZI6Gzjw37XrOBzISwI Dec 16 13:04:47.049096 sshd-session[2916]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:04:47.054243 systemd-logind[2506]: New session 6 of user core. Dec 16 13:04:47.060556 systemd[1]: Started session-6.scope - Session 6 of User core. Dec 16 13:04:47.334710 sshd[2919]: Connection closed by 10.200.16.10 port 43320 Dec 16 13:04:47.335531 sshd-session[2916]: pam_unix(sshd:session): session closed for user core Dec 16 13:04:47.339151 systemd[1]: sshd@3-10.200.4.43:22-10.200.16.10:43320.service: Deactivated successfully. Dec 16 13:04:47.341013 systemd[1]: session-6.scope: Deactivated successfully. Dec 16 13:04:47.341850 systemd-logind[2506]: Session 6 logged out. Waiting for processes to exit. Dec 16 13:04:47.343132 systemd-logind[2506]: Removed session 6. Dec 16 13:04:47.444699 systemd[1]: Started sshd@4-10.200.4.43:22-10.200.16.10:43324.service - OpenSSH per-connection server daemon (10.200.16.10:43324). Dec 16 13:04:47.958141 sshd[2925]: Accepted publickey for core from 10.200.16.10 port 43324 ssh2: RSA SHA256:XB2dojyG3S3qIBG43pJfS4qRaZI6Gzjw37XrOBzISwI Dec 16 13:04:47.959554 sshd-session[2925]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:04:47.964840 systemd-logind[2506]: New session 7 of user core. Dec 16 13:04:47.974520 systemd[1]: Started session-7.scope - Session 7 of User core. Dec 16 13:04:48.153144 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Dec 16 13:04:48.155017 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 13:04:48.309435 sudo[2929]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Dec 16 13:04:48.309671 sudo[2929]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 13:04:48.339489 sudo[2929]: pam_unix(sudo:session): session closed for user root Dec 16 13:04:48.434196 sshd[2928]: Connection closed by 10.200.16.10 port 43324 Dec 16 13:04:48.434969 sshd-session[2925]: pam_unix(sshd:session): session closed for user core Dec 16 13:04:48.438995 systemd[1]: sshd@4-10.200.4.43:22-10.200.16.10:43324.service: Deactivated successfully. Dec 16 13:04:48.440935 systemd[1]: session-7.scope: Deactivated successfully. 
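The sshd@<n>-<local>:22-<peer>:<port>.service names show that sshd is socket-activated per connection here, so every accepted connection gets its own transient unit that is deactivated when the session ends, as seen above. The listening socket and any live per-connection instances can be listed with:

    systemctl list-sockets --no-legend | grep ssh
    systemctl list-units 'sshd@*' --no-legend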
Dec 16 13:04:48.441878 systemd-logind[2506]: Session 7 logged out. Waiting for processes to exit. Dec 16 13:04:48.443232 systemd-logind[2506]: Removed session 7. Dec 16 13:04:48.546641 systemd[1]: Started sshd@5-10.200.4.43:22-10.200.16.10:43328.service - OpenSSH per-connection server daemon (10.200.16.10:43328). Dec 16 13:04:48.639366 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 13:04:48.642452 (kubelet)[2946]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 13:04:48.686767 kubelet[2946]: E1216 13:04:48.686728 2946 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 13:04:48.688469 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 13:04:48.688604 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 13:04:48.688947 systemd[1]: kubelet.service: Consumed 144ms CPU time, 108.6M memory peak. Dec 16 13:04:49.053800 sshd[2938]: Accepted publickey for core from 10.200.16.10 port 43328 ssh2: RSA SHA256:XB2dojyG3S3qIBG43pJfS4qRaZI6Gzjw37XrOBzISwI Dec 16 13:04:49.055272 sshd-session[2938]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:04:49.060606 systemd-logind[2506]: New session 8 of user core. Dec 16 13:04:49.069544 systemd[1]: Started session-8.scope - Session 8 of User core. Dec 16 13:04:49.247249 sudo[2955]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Dec 16 13:04:49.247485 sudo[2955]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 13:04:49.254771 sudo[2955]: pam_unix(sudo:session): session closed for user root Dec 16 13:04:49.260304 sudo[2954]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Dec 16 13:04:49.260559 sudo[2954]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 13:04:49.269422 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 16 13:04:49.295000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Dec 16 13:04:49.297883 kernel: kauditd_printk_skb: 150 callbacks suppressed Dec 16 13:04:49.298047 kernel: audit: type=1305 audit(1765890289.295:261): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Dec 16 13:04:49.298126 augenrules[2977]: No rules Dec 16 13:04:49.295000 audit[2977]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffe61709360 a2=420 a3=0 items=0 ppid=2958 pid=2977 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:49.302249 systemd[1]: audit-rules.service: Deactivated successfully. 
Dec 16 13:04:49.303505 kernel: audit: type=1300 audit(1765890289.295:261): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffe61709360 a2=420 a3=0 items=0 ppid=2958 pid=2977 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:49.303603 kernel: audit: type=1327 audit(1765890289.295:261): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 16 13:04:49.295000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 16 13:04:49.303980 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 16 13:04:49.303000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:04:49.303000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:04:49.309651 kernel: audit: type=1130 audit(1765890289.303:262): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:04:49.309685 kernel: audit: type=1131 audit(1765890289.303:263): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:04:49.309752 kernel: audit: type=1106 audit(1765890289.307:264): pid=2954 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 13:04:49.307000 audit[2954]: USER_END pid=2954 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 13:04:49.309180 sudo[2954]: pam_unix(sudo:session): session closed for user root Dec 16 13:04:49.313428 kernel: audit: type=1104 audit(1765890289.309:265): pid=2954 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 13:04:49.309000 audit[2954]: CRED_DISP pid=2954 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 13:04:49.407919 sshd[2953]: Connection closed by 10.200.16.10 port 43328 Dec 16 13:04:49.408411 sshd-session[2938]: pam_unix(sshd:session): session closed for user core Dec 16 13:04:49.408000 audit[2938]: USER_END pid=2938 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:04:49.413712 systemd-logind[2506]: Session 8 logged out. Waiting for processes to exit. 
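The sudo commands above removed the SELinux and default audit rule files and restarted audit-rules.service, so augenrules reports "No rules" and auditctl loads an empty ruleset (the audit proctitle decodes to /sbin/auditctl -R /etc/audit/audit.rules). To confirm the kernel now has no audit rules loaded:

    auditctl -l          # prints "No rules" when the ruleset is empty
    augenrules --check   # reports whether audit.rules is in sync with rules.d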
Dec 16 13:04:49.418517 kernel: audit: type=1106 audit(1765890289.408:266): pid=2938 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:04:49.418578 kernel: audit: type=1104 audit(1765890289.408:267): pid=2938 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:04:49.408000 audit[2938]: CRED_DISP pid=2938 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:04:49.418482 chronyd[2485]: Selected source PHC0 Dec 16 13:04:49.415087 systemd[1]: sshd@5-10.200.4.43:22-10.200.16.10:43328.service: Deactivated successfully. Dec 16 13:04:49.418859 systemd[1]: session-8.scope: Deactivated successfully. Dec 16 13:04:49.422725 kernel: audit: type=1131 audit(1765890289.413:268): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.200.4.43:22-10.200.16.10:43328 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:04:49.413000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.200.4.43:22-10.200.16.10:43328 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:04:49.423274 systemd-logind[2506]: Removed session 8. Dec 16 13:04:49.517000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.200.4.43:22-10.200.16.10:43342 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:04:49.517964 systemd[1]: Started sshd@6-10.200.4.43:22-10.200.16.10:43342.service - OpenSSH per-connection server daemon (10.200.16.10:43342). 
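chronyd's "Selected source PHC0" above means it switched to a PTP hardware clock; on a Hyper-V guest this is the host clock exposed through /dev/ptp*. A small sketch that lists the PTP clocks the kernel registered, assuming the standard /sys/class/ptp sysfs layout (the clock_name attribute is part of the documented PTP ABI, but this is an illustration, not something the log itself shows):

```python
from pathlib import Path

# List PTP hardware clocks and their kernel-reported names, assuming /sys/class/ptp exists.
for dev in sorted(Path("/sys/class/ptp").glob("ptp*")):
    name = (dev / "clock_name").read_text().strip()
    print(dev.name, "->", name)
```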
Dec 16 13:04:50.022000 audit[2986]: USER_ACCT pid=2986 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:04:50.023738 sshd[2986]: Accepted publickey for core from 10.200.16.10 port 43342 ssh2: RSA SHA256:XB2dojyG3S3qIBG43pJfS4qRaZI6Gzjw37XrOBzISwI Dec 16 13:04:50.023000 audit[2986]: CRED_ACQ pid=2986 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:04:50.023000 audit[2986]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffedf7fa780 a2=3 a3=0 items=0 ppid=1 pid=2986 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:50.023000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 13:04:50.025038 sshd-session[2986]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:04:50.030444 systemd-logind[2506]: New session 9 of user core. Dec 16 13:04:50.036544 systemd[1]: Started session-9.scope - Session 9 of User core. Dec 16 13:04:50.037000 audit[2986]: USER_START pid=2986 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:04:50.038000 audit[2989]: CRED_ACQ pid=2989 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:04:50.216000 audit[2990]: USER_ACCT pid=2990 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 13:04:50.217999 sudo[2990]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 16 13:04:50.218254 sudo[2990]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 13:04:50.216000 audit[2990]: CRED_REFR pid=2990 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 13:04:50.218000 audit[2990]: USER_START pid=2990 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 13:04:52.017005 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Dec 16 13:04:52.029634 (dockerd)[3007]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Dec 16 13:04:53.264984 dockerd[3007]: time="2025-12-16T13:04:53.264916697Z" level=info msg="Starting up" Dec 16 13:04:53.265845 dockerd[3007]: time="2025-12-16T13:04:53.265821254Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Dec 16 13:04:53.278791 dockerd[3007]: time="2025-12-16T13:04:53.278743604Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Dec 16 13:04:53.342990 systemd[1]: var-lib-docker-metacopy\x2dcheck2780130741-merged.mount: Deactivated successfully. Dec 16 13:04:53.366563 dockerd[3007]: time="2025-12-16T13:04:53.366520477Z" level=info msg="Loading containers: start." Dec 16 13:04:53.414447 kernel: Initializing XFRM netlink socket Dec 16 13:04:53.458000 audit[3053]: NETFILTER_CFG table=nat:5 family=2 entries=2 op=nft_register_chain pid=3053 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:04:53.458000 audit[3053]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffc251afa30 a2=0 a3=0 items=0 ppid=3007 pid=3053 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:53.458000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Dec 16 13:04:53.459000 audit[3055]: NETFILTER_CFG table=filter:6 family=2 entries=2 op=nft_register_chain pid=3055 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:04:53.459000 audit[3055]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffd142c6860 a2=0 a3=0 items=0 ppid=3007 pid=3055 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:53.459000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Dec 16 13:04:53.461000 audit[3057]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_chain pid=3057 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:04:53.461000 audit[3057]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc2cb34b50 a2=0 a3=0 items=0 ppid=3007 pid=3057 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:53.461000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Dec 16 13:04:53.463000 audit[3059]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=3059 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:04:53.463000 audit[3059]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffefbd049d0 a2=0 a3=0 items=0 ppid=3007 pid=3059 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:53.463000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Dec 16 13:04:53.465000 audit[3061]: NETFILTER_CFG table=filter:9 family=2 entries=1 op=nft_register_chain pid=3061 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:04:53.465000 audit[3061]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fff428cc2d0 a2=0 a3=0 items=0 ppid=3007 pid=3061 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:53.465000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Dec 16 13:04:53.467000 audit[3063]: NETFILTER_CFG table=filter:10 family=2 entries=1 op=nft_register_chain pid=3063 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:04:53.467000 audit[3063]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffc0f6b6230 a2=0 a3=0 items=0 ppid=3007 pid=3063 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:53.467000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 16 13:04:53.468000 audit[3065]: NETFILTER_CFG table=filter:11 family=2 entries=1 op=nft_register_chain pid=3065 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:04:53.468000 audit[3065]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffd35528e30 a2=0 a3=0 items=0 ppid=3007 pid=3065 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:53.468000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 16 13:04:53.470000 audit[3067]: NETFILTER_CFG table=nat:12 family=2 entries=2 op=nft_register_chain pid=3067 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:04:53.470000 audit[3067]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7ffe03fc5770 a2=0 a3=0 items=0 ppid=3007 pid=3067 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:53.470000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Dec 16 13:04:53.498000 audit[3070]: NETFILTER_CFG table=nat:13 family=2 entries=2 op=nft_register_chain pid=3070 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:04:53.498000 audit[3070]: SYSCALL arch=c000003e syscall=46 success=yes exit=472 a0=3 a1=7ffc37966a60 a2=0 a3=0 items=0 ppid=3007 pid=3070 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:53.498000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Dec 16 13:04:53.500000 audit[3072]: NETFILTER_CFG table=filter:14 family=2 entries=2 op=nft_register_chain pid=3072 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:04:53.500000 audit[3072]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffeafc268c0 a2=0 a3=0 items=0 ppid=3007 pid=3072 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:53.500000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Dec 16 13:04:53.502000 audit[3074]: NETFILTER_CFG table=filter:15 family=2 entries=1 op=nft_register_rule pid=3074 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:04:53.502000 audit[3074]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7fffd1e75a60 a2=0 a3=0 items=0 ppid=3007 pid=3074 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:53.502000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Dec 16 13:04:53.504000 audit[3076]: NETFILTER_CFG table=filter:16 family=2 entries=1 op=nft_register_rule pid=3076 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:04:53.504000 audit[3076]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7ffc5654aaa0 a2=0 a3=0 items=0 ppid=3007 pid=3076 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:53.504000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 16 13:04:53.505000 audit[3078]: NETFILTER_CFG table=filter:17 family=2 entries=1 op=nft_register_rule pid=3078 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:04:53.505000 audit[3078]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7fffc730e730 a2=0 a3=0 items=0 ppid=3007 pid=3078 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:53.505000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Dec 16 13:04:53.603000 audit[3108]: NETFILTER_CFG table=nat:18 family=10 entries=2 op=nft_register_chain pid=3108 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:04:53.603000 audit[3108]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffd9e791360 a2=0 a3=0 items=0 ppid=3007 pid=3108 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:53.603000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Dec 16 13:04:53.606000 audit[3110]: NETFILTER_CFG table=filter:19 family=10 entries=2 op=nft_register_chain pid=3110 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:04:53.606000 audit[3110]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffd05f30a40 a2=0 a3=0 items=0 ppid=3007 pid=3110 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:53.606000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Dec 16 13:04:53.607000 audit[3112]: NETFILTER_CFG table=filter:20 family=10 entries=1 op=nft_register_chain pid=3112 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:04:53.607000 audit[3112]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe38a61c60 a2=0 a3=0 items=0 ppid=3007 pid=3112 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:53.607000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Dec 16 13:04:53.609000 audit[3114]: NETFILTER_CFG table=filter:21 family=10 entries=1 op=nft_register_chain pid=3114 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:04:53.609000 audit[3114]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc1f4dff10 a2=0 a3=0 items=0 ppid=3007 pid=3114 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:53.609000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Dec 16 13:04:53.610000 audit[3116]: NETFILTER_CFG table=filter:22 family=10 entries=1 op=nft_register_chain pid=3116 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:04:53.610000 audit[3116]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fff2e6ecec0 a2=0 a3=0 items=0 ppid=3007 pid=3116 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:53.610000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Dec 16 13:04:53.612000 audit[3118]: NETFILTER_CFG table=filter:23 family=10 entries=1 op=nft_register_chain pid=3118 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:04:53.612000 audit[3118]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffd2052c310 a2=0 a3=0 items=0 ppid=3007 pid=3118 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:53.612000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 16 13:04:53.614000 audit[3120]: NETFILTER_CFG table=filter:24 family=10 entries=1 op=nft_register_chain pid=3120 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:04:53.614000 audit[3120]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffc2f631870 a2=0 a3=0 items=0 ppid=3007 pid=3120 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:53.614000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 16 13:04:53.616000 audit[3122]: NETFILTER_CFG table=nat:25 family=10 entries=2 op=nft_register_chain pid=3122 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:04:53.616000 audit[3122]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7fff08ce5390 a2=0 a3=0 items=0 ppid=3007 pid=3122 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:53.616000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Dec 16 13:04:53.618000 audit[3124]: NETFILTER_CFG table=nat:26 family=10 entries=2 op=nft_register_chain pid=3124 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:04:53.618000 audit[3124]: SYSCALL arch=c000003e syscall=46 success=yes exit=484 a0=3 a1=7ffc85ab98d0 a2=0 a3=0 items=0 ppid=3007 pid=3124 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:53.618000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003A3A312F313238 Dec 16 13:04:53.619000 audit[3126]: NETFILTER_CFG table=filter:27 family=10 entries=2 op=nft_register_chain pid=3126 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:04:53.619000 audit[3126]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffce70e93f0 a2=0 a3=0 items=0 ppid=3007 pid=3126 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:53.619000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Dec 16 13:04:53.621000 audit[3128]: NETFILTER_CFG table=filter:28 family=10 entries=1 op=nft_register_rule pid=3128 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:04:53.621000 audit[3128]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7ffd9e07f960 a2=0 a3=0 items=0 ppid=3007 pid=3128 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:53.621000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Dec 16 13:04:53.623000 audit[3130]: NETFILTER_CFG table=filter:29 family=10 entries=1 op=nft_register_rule pid=3130 subj=system_u:system_r:kernel_t:s0 
comm="ip6tables" Dec 16 13:04:53.623000 audit[3130]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7ffda3bca5e0 a2=0 a3=0 items=0 ppid=3007 pid=3130 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:53.623000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 16 13:04:53.624000 audit[3132]: NETFILTER_CFG table=filter:30 family=10 entries=1 op=nft_register_rule pid=3132 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:04:53.624000 audit[3132]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7ffc29fa8c50 a2=0 a3=0 items=0 ppid=3007 pid=3132 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:53.624000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Dec 16 13:04:53.629000 audit[3137]: NETFILTER_CFG table=filter:31 family=2 entries=1 op=nft_register_chain pid=3137 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:04:53.629000 audit[3137]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffdefe14840 a2=0 a3=0 items=0 ppid=3007 pid=3137 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:53.629000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Dec 16 13:04:53.630000 audit[3139]: NETFILTER_CFG table=filter:32 family=2 entries=1 op=nft_register_rule pid=3139 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:04:53.630000 audit[3139]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffe58760c80 a2=0 a3=0 items=0 ppid=3007 pid=3139 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:53.630000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Dec 16 13:04:53.632000 audit[3141]: NETFILTER_CFG table=filter:33 family=2 entries=1 op=nft_register_rule pid=3141 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:04:53.632000 audit[3141]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffcbcf69490 a2=0 a3=0 items=0 ppid=3007 pid=3141 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:53.632000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Dec 16 13:04:53.634000 audit[3143]: NETFILTER_CFG table=filter:34 family=10 entries=1 op=nft_register_chain pid=3143 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:04:53.634000 audit[3143]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffc27909c10 a2=0 a3=0 items=0 ppid=3007 pid=3143 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:53.634000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Dec 16 13:04:53.636000 audit[3145]: NETFILTER_CFG table=filter:35 family=10 entries=1 op=nft_register_rule pid=3145 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:04:53.636000 audit[3145]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffc5910d8b0 a2=0 a3=0 items=0 ppid=3007 pid=3145 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:53.636000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Dec 16 13:04:53.638000 audit[3147]: NETFILTER_CFG table=filter:36 family=10 entries=1 op=nft_register_rule pid=3147 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:04:53.638000 audit[3147]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffd1f98e650 a2=0 a3=0 items=0 ppid=3007 pid=3147 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:53.638000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Dec 16 13:04:53.721000 audit[3152]: NETFILTER_CFG table=nat:37 family=2 entries=2 op=nft_register_chain pid=3152 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:04:53.721000 audit[3152]: SYSCALL arch=c000003e syscall=46 success=yes exit=520 a0=3 a1=7ffc7c110d40 a2=0 a3=0 items=0 ppid=3007 pid=3152 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:53.721000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Dec 16 13:04:53.723000 audit[3154]: NETFILTER_CFG table=nat:38 family=2 entries=1 op=nft_register_rule pid=3154 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:04:53.723000 audit[3154]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7ffe20cd1ee0 a2=0 a3=0 items=0 ppid=3007 pid=3154 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:53.723000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Dec 16 13:04:53.730000 audit[3162]: NETFILTER_CFG table=filter:39 family=2 entries=1 op=nft_register_rule pid=3162 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:04:53.730000 audit[3162]: SYSCALL arch=c000003e syscall=46 success=yes exit=300 a0=3 a1=7ffc2f992720 a2=0 a3=0 items=0 ppid=3007 pid=3162 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 
16 13:04:53.730000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D464F5257415244002D6900646F636B657230002D6A00414343455054 Dec 16 13:04:53.734000 audit[3167]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_rule pid=3167 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:04:53.734000 audit[3167]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffd9efd7550 a2=0 a3=0 items=0 ppid=3007 pid=3167 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:53.734000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45520000002D6900646F636B657230002D6F00646F636B657230002D6A0044524F50 Dec 16 13:04:53.736000 audit[3169]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_rule pid=3169 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:04:53.736000 audit[3169]: SYSCALL arch=c000003e syscall=46 success=yes exit=512 a0=3 a1=7ffedb860f30 a2=0 a3=0 items=0 ppid=3007 pid=3169 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:53.736000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D4354002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Dec 16 13:04:53.738000 audit[3171]: NETFILTER_CFG table=filter:42 family=2 entries=1 op=nft_register_rule pid=3171 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:04:53.738000 audit[3171]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffd709ea140 a2=0 a3=0 items=0 ppid=3007 pid=3171 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:53.738000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D425249444745002D6F00646F636B657230002D6A00444F434B4552 Dec 16 13:04:53.740000 audit[3173]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_rule pid=3173 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:04:53.740000 audit[3173]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7ffdc888dc20 a2=0 a3=0 items=0 ppid=3007 pid=3173 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:53.740000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 16 13:04:53.741000 audit[3175]: NETFILTER_CFG table=filter:44 family=2 entries=1 op=nft_register_rule pid=3175 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:04:53.741000 audit[3175]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffddcf30490 a2=0 a3=0 items=0 ppid=3007 pid=3175 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:53.741000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Dec 16 13:04:53.742412 systemd-networkd[2150]: docker0: Link UP Dec 16 13:04:53.761655 dockerd[3007]: time="2025-12-16T13:04:53.761619555Z" level=info msg="Loading containers: done." Dec 16 13:04:53.900309 dockerd[3007]: time="2025-12-16T13:04:53.900207081Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 16 13:04:53.900309 dockerd[3007]: time="2025-12-16T13:04:53.900302996Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Dec 16 13:04:53.900842 dockerd[3007]: time="2025-12-16T13:04:53.900418155Z" level=info msg="Initializing buildkit" Dec 16 13:04:53.953790 dockerd[3007]: time="2025-12-16T13:04:53.953757394Z" level=info msg="Completed buildkit initialization" Dec 16 13:04:53.962034 dockerd[3007]: time="2025-12-16T13:04:53.961997476Z" level=info msg="Daemon has completed initialization" Dec 16 13:04:53.962391 dockerd[3007]: time="2025-12-16T13:04:53.962145771Z" level=info msg="API listen on /run/docker.sock" Dec 16 13:04:53.962378 systemd[1]: Started docker.service - Docker Application Container Engine. Dec 16 13:04:53.962000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:04:54.645226 containerd[2540]: time="2025-12-16T13:04:54.645157841Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\"" Dec 16 13:04:55.612181 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1975700276.mount: Deactivated successfully. 
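The NETFILTER_CFG burst above is dockerd creating its standard chains (DOCKER, DOCKER-USER, DOCKER-FORWARD, DOCKER-BRIDGE, DOCKER-CT, DOCKER-ISOLATION-STAGE-1/2) in the filter and nat tables for both IPv4 and IPv6, plus the 172.17.0.0/16 MASQUERADE rule, before bringing docker0 up; the chain names come from decoding the PROCTITLE values as shown earlier. A hedged verification sketch for a live host, assuming iptables-save is on PATH and the script runs with enough privilege to read the ruleset:

```python
import subprocess

# Chain names observed in the decoded iptables PROCTITLE records above.
EXPECTED_CHAINS = {
    "DOCKER", "DOCKER-USER", "DOCKER-FORWARD", "DOCKER-BRIDGE",
    "DOCKER-CT", "DOCKER-ISOLATION-STAGE-1", "DOCKER-ISOLATION-STAGE-2",
}

def docker_chains(table: str) -> set[str]:
    """Return the DOCKER* chain names defined in the given iptables table."""
    out = subprocess.run(["iptables-save", "-t", table],
                         capture_output=True, text=True, check=True).stdout
    return {line[1:].split()[0] for line in out.splitlines()
            if line.startswith(":") and line[1:].startswith("DOCKER")}

if __name__ == "__main__":
    found = docker_chains("filter") | docker_chains("nat")
    missing = EXPECTED_CHAINS - found
    print("missing chains:", sorted(missing)) if missing else print("all expected chains present:", sorted(found))
```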
Dec 16 13:04:56.682222 containerd[2540]: time="2025-12-16T13:04:56.682157147Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:04:56.688197 containerd[2540]: time="2025-12-16T13:04:56.688008378Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.3: active requests=0, bytes read=25517145" Dec 16 13:04:56.693571 containerd[2540]: time="2025-12-16T13:04:56.693543874Z" level=info msg="ImageCreate event name:\"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:04:56.699203 containerd[2540]: time="2025-12-16T13:04:56.699144631Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:04:56.700026 containerd[2540]: time="2025-12-16T13:04:56.699812389Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.3\" with image id \"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.3\", repo digest \"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\", size \"27064672\" in 2.054591299s" Dec 16 13:04:56.700026 containerd[2540]: time="2025-12-16T13:04:56.699852509Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\" returns image reference \"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\"" Dec 16 13:04:56.700828 containerd[2540]: time="2025-12-16T13:04:56.700804584Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\"" Dec 16 13:04:57.807362 containerd[2540]: time="2025-12-16T13:04:57.807299676Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:04:57.810114 containerd[2540]: time="2025-12-16T13:04:57.809964435Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.3: active requests=0, bytes read=21154285" Dec 16 13:04:57.812966 containerd[2540]: time="2025-12-16T13:04:57.812943990Z" level=info msg="ImageCreate event name:\"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:04:57.817647 containerd[2540]: time="2025-12-16T13:04:57.817615920Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:04:57.818420 containerd[2540]: time="2025-12-16T13:04:57.818202713Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.3\" with image id \"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.3\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\", size \"22819474\" in 1.117368843s" Dec 16 13:04:57.818420 containerd[2540]: time="2025-12-16T13:04:57.818250508Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\" returns image reference \"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\"" Dec 16 13:04:57.818942 
containerd[2540]: time="2025-12-16T13:04:57.818906024Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\"" Dec 16 13:04:58.761409 containerd[2540]: time="2025-12-16T13:04:58.761332859Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:04:58.774980 containerd[2540]: time="2025-12-16T13:04:58.774942106Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.3: active requests=0, bytes read=0" Dec 16 13:04:58.780601 containerd[2540]: time="2025-12-16T13:04:58.780544388Z" level=info msg="ImageCreate event name:\"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:04:58.786267 containerd[2540]: time="2025-12-16T13:04:58.786240778Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:04:58.786893 containerd[2540]: time="2025-12-16T13:04:58.786872099Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.3\" with image id \"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.3\", repo digest \"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\", size \"17382979\" in 967.866905ms" Dec 16 13:04:58.786938 containerd[2540]: time="2025-12-16T13:04:58.786901442Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\" returns image reference \"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\"" Dec 16 13:04:58.787656 containerd[2540]: time="2025-12-16T13:04:58.787633658Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\"" Dec 16 13:04:58.903080 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Dec 16 13:04:58.905104 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 13:04:59.295052 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 13:04:59.294000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:04:59.296674 kernel: kauditd_printk_skb: 132 callbacks suppressed Dec 16 13:04:59.296757 kernel: audit: type=1130 audit(1765890299.294:319): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:04:59.303677 (kubelet)[3291]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 13:04:59.334821 kubelet[3291]: E1216 13:04:59.334788 3291 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 13:04:59.336330 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 13:04:59.336464 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Dec 16 13:04:59.335000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 16 13:04:59.336824 systemd[1]: kubelet.service: Consumed 142ms CPU time, 108.8M memory peak. Dec 16 13:04:59.349377 kernel: audit: type=1131 audit(1765890299.335:320): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 16 13:05:05.891482 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2385715348.mount: Deactivated successfully. Dec 16 13:05:06.177591 containerd[2540]: time="2025-12-16T13:05:06.177528053Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:05:06.181394 containerd[2540]: time="2025-12-16T13:05:06.181366661Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.3: active requests=0, bytes read=25961571" Dec 16 13:05:06.185724 containerd[2540]: time="2025-12-16T13:05:06.184828733Z" level=info msg="ImageCreate event name:\"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:05:06.192426 containerd[2540]: time="2025-12-16T13:05:06.192391796Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:05:06.192781 containerd[2540]: time="2025-12-16T13:05:06.192756811Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.3\" with image id \"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\", repo tag \"registry.k8s.io/kube-proxy:v1.34.3\", repo digest \"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\", size \"25964312\" in 7.405093718s" Dec 16 13:05:06.192860 containerd[2540]: time="2025-12-16T13:05:06.192848377Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\" returns image reference \"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\"" Dec 16 13:05:06.193527 containerd[2540]: time="2025-12-16T13:05:06.193505749Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Dec 16 13:05:06.334185 kernel: hv_balloon: Max. dynamic memory size: 8192 MB Dec 16 13:05:06.915678 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3473637767.mount: Deactivated successfully. 
Dec 16 13:05:08.058309 containerd[2540]: time="2025-12-16T13:05:08.058238769Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:05:08.062359 containerd[2540]: time="2025-12-16T13:05:08.062290409Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22026995" Dec 16 13:05:08.067222 containerd[2540]: time="2025-12-16T13:05:08.067173815Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:05:08.074749 containerd[2540]: time="2025-12-16T13:05:08.074695659Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:05:08.075507 containerd[2540]: time="2025-12-16T13:05:08.075327135Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 1.881790985s" Dec 16 13:05:08.075507 containerd[2540]: time="2025-12-16T13:05:08.075377054Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\"" Dec 16 13:05:08.076130 containerd[2540]: time="2025-12-16T13:05:08.076108805Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Dec 16 13:05:08.593949 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3423989828.mount: Deactivated successfully. 
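Each successful pull in this log ends with a containerd "Pulled image ... in <duration>" message. A small sketch that extracts those durations from a journal dump like this one; the regex is my own, written against the message format shown here:

```python
import re
import sys

# Regex of my own, matching the containerd messages above, e.g.
#   msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.3\" ... in 967.866905ms"
PULLED = re.compile(r'Pulled image \\"(?P<image>[^"\\]+)\\".*? in (?P<num>[\d.]+)(?P<unit>ms|s)"')

def pull_seconds(journal_text: str) -> dict[str, float]:
    """Map image reference -> reported pull duration in seconds."""
    result = {}
    for m in PULLED.finditer(journal_text):
        secs = float(m.group("num")) / (1000.0 if m.group("unit") == "ms" else 1.0)
        result[m.group("image")] = secs
    return result

if __name__ == "__main__":
    # Feed the journal text on stdin and print pulls sorted by duration.
    for image, secs in sorted(pull_seconds(sys.stdin.read()).items(), key=lambda kv: kv[1]):
        print(f"{secs:9.3f}s  {image}")
```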
Dec 16 13:05:08.614054 containerd[2540]: time="2025-12-16T13:05:08.614001584Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:05:08.618998 containerd[2540]: time="2025-12-16T13:05:08.618966841Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=0" Dec 16 13:05:08.623412 containerd[2540]: time="2025-12-16T13:05:08.623362090Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:05:08.630853 containerd[2540]: time="2025-12-16T13:05:08.630805506Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:05:08.631511 containerd[2540]: time="2025-12-16T13:05:08.631222428Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 555.086291ms" Dec 16 13:05:08.631511 containerd[2540]: time="2025-12-16T13:05:08.631253955Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Dec 16 13:05:08.631957 containerd[2540]: time="2025-12-16T13:05:08.631929908Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\"" Dec 16 13:05:09.328989 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3032729568.mount: Deactivated successfully. Dec 16 13:05:09.403468 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Dec 16 13:05:09.406308 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 13:05:10.209329 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 13:05:10.209000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:05:10.215366 kernel: audit: type=1130 audit(1765890310.209:321): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:05:10.222542 (kubelet)[3381]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 13:05:10.258319 kubelet[3381]: E1216 13:05:10.258266 3381 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 13:05:10.259796 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 13:05:10.259929 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
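The failed kubelet starts in this log land at 13:04:48.6, 13:04:59.3 and 13:05:10.2, roughly 10-11 s apart, which is what Restart=on-failure with a RestartSec of about 10 s would produce (an assumption about the unit file; the log itself only shows the restart counter climbing). A quick sketch that computes the cadence from the timestamps above:

```python
from datetime import datetime, timedelta

# Start times of the failing kubelet attempts, copied from the log above;
# the year is assumed, since these journal lines omit it.
ATTEMPTS = ["Dec 16 13:04:48.639366", "Dec 16 13:04:59.295052", "Dec 16 13:05:10.209329"]

def intervals(stamps: list[str], year: int = 2025) -> list[timedelta]:
    """Gaps between consecutive syslog-style timestamps."""
    parsed = [datetime.strptime(f"{year} {s}", "%Y %b %d %H:%M:%S.%f") for s in stamps]
    return [b - a for a, b in zip(parsed, parsed[1:])]

if __name__ == "__main__":
    for gap in intervals(ATTEMPTS):
        print(f"{gap.total_seconds():.1f}s between restart attempts")
```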
Dec 16 13:05:10.259000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 16 13:05:10.260261 systemd[1]: kubelet.service: Consumed 163ms CPU time, 110.2M memory peak. Dec 16 13:05:10.264362 kernel: audit: type=1131 audit(1765890310.259:322): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 16 13:05:11.210374 update_engine[2508]: I20251216 13:05:11.209527 2508 update_attempter.cc:509] Updating boot flags... Dec 16 13:05:11.963200 containerd[2540]: time="2025-12-16T13:05:11.963134823Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.4-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:05:11.966056 containerd[2540]: time="2025-12-16T13:05:11.966013689Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.4-0: active requests=0, bytes read=62447728" Dec 16 13:05:11.969578 containerd[2540]: time="2025-12-16T13:05:11.969526314Z" level=info msg="ImageCreate event name:\"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:05:11.974443 containerd[2540]: time="2025-12-16T13:05:11.974392502Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:05:11.975385 containerd[2540]: time="2025-12-16T13:05:11.975167285Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.4-0\" with image id \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\", repo tag \"registry.k8s.io/etcd:3.6.4-0\", repo digest \"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\", size \"74311308\" in 3.343200579s" Dec 16 13:05:11.975385 containerd[2540]: time="2025-12-16T13:05:11.975202510Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\"" Dec 16 13:05:13.842872 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 13:05:13.843000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:05:13.843441 systemd[1]: kubelet.service: Consumed 163ms CPU time, 110.2M memory peak. Dec 16 13:05:13.852790 kernel: audit: type=1130 audit(1765890313.843:323): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:05:13.852868 kernel: audit: type=1131 audit(1765890313.843:324): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:05:13.843000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 13:05:13.848520 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 13:05:13.872979 systemd[1]: Reload requested from client PID 3503 ('systemctl') (unit session-9.scope)... Dec 16 13:05:13.872991 systemd[1]: Reloading... Dec 16 13:05:13.973445 zram_generator::config[3555]: No configuration found. Dec 16 13:05:14.168767 systemd[1]: Reloading finished in 295 ms. Dec 16 13:05:14.235560 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Dec 16 13:05:14.235642 systemd[1]: kubelet.service: Failed with result 'signal'. Dec 16 13:05:14.235928 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 13:05:14.235988 systemd[1]: kubelet.service: Consumed 77ms CPU time, 69.9M memory peak. Dec 16 13:05:14.235000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 16 13:05:14.240392 kernel: audit: type=1130 audit(1765890314.235:325): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 16 13:05:14.240612 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 13:05:14.241000 audit: BPF prog-id=87 op=LOAD Dec 16 13:05:14.244423 kernel: audit: type=1334 audit(1765890314.241:326): prog-id=87 op=LOAD Dec 16 13:05:14.243000 audit: BPF prog-id=83 op=UNLOAD Dec 16 13:05:14.243000 audit: BPF prog-id=88 op=LOAD Dec 16 13:05:14.247364 kernel: audit: type=1334 audit(1765890314.243:327): prog-id=83 op=UNLOAD Dec 16 13:05:14.247404 kernel: audit: type=1334 audit(1765890314.243:328): prog-id=88 op=LOAD Dec 16 13:05:14.243000 audit: BPF prog-id=89 op=LOAD Dec 16 13:05:14.249854 kernel: audit: type=1334 audit(1765890314.243:329): prog-id=89 op=LOAD Dec 16 13:05:14.243000 audit: BPF prog-id=84 op=UNLOAD Dec 16 13:05:14.251159 kernel: audit: type=1334 audit(1765890314.243:330): prog-id=84 op=UNLOAD Dec 16 13:05:14.243000 audit: BPF prog-id=85 op=UNLOAD Dec 16 13:05:14.244000 audit: BPF prog-id=90 op=LOAD Dec 16 13:05:14.244000 audit: BPF prog-id=73 op=UNLOAD Dec 16 13:05:14.246000 audit: BPF prog-id=91 op=LOAD Dec 16 13:05:14.253000 audit: BPF prog-id=77 op=UNLOAD Dec 16 13:05:14.253000 audit: BPF prog-id=92 op=LOAD Dec 16 13:05:14.253000 audit: BPF prog-id=93 op=LOAD Dec 16 13:05:14.253000 audit: BPF prog-id=78 op=UNLOAD Dec 16 13:05:14.253000 audit: BPF prog-id=79 op=UNLOAD Dec 16 13:05:14.254000 audit: BPF prog-id=94 op=LOAD Dec 16 13:05:14.254000 audit: BPF prog-id=80 op=UNLOAD Dec 16 13:05:14.254000 audit: BPF prog-id=95 op=LOAD Dec 16 13:05:14.254000 audit: BPF prog-id=96 op=LOAD Dec 16 13:05:14.254000 audit: BPF prog-id=81 op=UNLOAD Dec 16 13:05:14.254000 audit: BPF prog-id=82 op=UNLOAD Dec 16 13:05:14.255000 audit: BPF prog-id=97 op=LOAD Dec 16 13:05:14.255000 audit: BPF prog-id=74 op=UNLOAD Dec 16 13:05:14.255000 audit: BPF prog-id=98 op=LOAD Dec 16 13:05:14.255000 audit: BPF prog-id=99 op=LOAD Dec 16 13:05:14.255000 audit: BPF prog-id=75 op=UNLOAD Dec 16 13:05:14.255000 audit: BPF prog-id=76 op=UNLOAD Dec 16 13:05:14.255000 audit: BPF prog-id=100 op=LOAD Dec 16 13:05:14.255000 audit: BPF prog-id=86 op=UNLOAD Dec 16 13:05:14.256000 audit: BPF prog-id=101 op=LOAD Dec 16 13:05:14.256000 audit: BPF prog-id=68 op=UNLOAD Dec 16 13:05:14.257000 audit: BPF prog-id=102 op=LOAD Dec 16 13:05:14.257000 
audit: BPF prog-id=103 op=LOAD Dec 16 13:05:14.257000 audit: BPF prog-id=69 op=UNLOAD Dec 16 13:05:14.257000 audit: BPF prog-id=70 op=UNLOAD Dec 16 13:05:14.257000 audit: BPF prog-id=104 op=LOAD Dec 16 13:05:14.257000 audit: BPF prog-id=67 op=UNLOAD Dec 16 13:05:14.258000 audit: BPF prog-id=105 op=LOAD Dec 16 13:05:14.258000 audit: BPF prog-id=106 op=LOAD Dec 16 13:05:14.258000 audit: BPF prog-id=71 op=UNLOAD Dec 16 13:05:14.258000 audit: BPF prog-id=72 op=UNLOAD Dec 16 13:05:14.751000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:05:14.750689 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 13:05:14.758594 (kubelet)[3619]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 16 13:05:14.798834 kubelet[3619]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 16 13:05:14.798834 kubelet[3619]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 13:05:14.798834 kubelet[3619]: I1216 13:05:14.798596 3619 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 16 13:05:15.139826 kubelet[3619]: I1216 13:05:15.139467 3619 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Dec 16 13:05:15.139826 kubelet[3619]: I1216 13:05:15.139499 3619 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 16 13:05:15.139826 kubelet[3619]: I1216 13:05:15.139526 3619 watchdog_linux.go:95] "Systemd watchdog is not enabled" Dec 16 13:05:15.139826 kubelet[3619]: I1216 13:05:15.139531 3619 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Dec 16 13:05:15.140038 kubelet[3619]: I1216 13:05:15.140004 3619 server.go:956] "Client rotation is on, will bootstrap in background" Dec 16 13:05:15.151372 kubelet[3619]: I1216 13:05:15.151072 3619 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 16 13:05:15.152521 kubelet[3619]: E1216 13:05:15.152493 3619 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.4.43:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.4.43:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Dec 16 13:05:15.155657 kubelet[3619]: I1216 13:05:15.155639 3619 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 16 13:05:15.157924 kubelet[3619]: I1216 13:05:15.157909 3619 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Dec 16 13:05:15.158121 kubelet[3619]: I1216 13:05:15.158099 3619 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 16 13:05:15.158258 kubelet[3619]: I1216 13:05:15.158121 3619 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4515.1.0-a-5ae2bb3665","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 16 13:05:15.158401 kubelet[3619]: I1216 13:05:15.158267 3619 topology_manager.go:138] "Creating topology manager with none policy" Dec 16 13:05:15.158401 kubelet[3619]: I1216 13:05:15.158276 3619 container_manager_linux.go:306] "Creating device plugin manager" Dec 16 13:05:15.158401 kubelet[3619]: I1216 13:05:15.158372 3619 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Dec 16 13:05:15.163631 kubelet[3619]: I1216 13:05:15.163613 3619 state_mem.go:36] "Initialized new in-memory state store" Dec 16 13:05:15.164433 kubelet[3619]: I1216 13:05:15.164413 3619 kubelet.go:475] "Attempting to sync node with API server" Dec 16 13:05:15.164486 kubelet[3619]: I1216 13:05:15.164437 3619 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 16 13:05:15.164486 kubelet[3619]: I1216 13:05:15.164470 3619 kubelet.go:387] "Adding apiserver pod source" Dec 16 13:05:15.164531 kubelet[3619]: I1216 13:05:15.164494 3619 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 16 13:05:15.169153 kubelet[3619]: E1216 13:05:15.168975 3619 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.4.43:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4515.1.0-a-5ae2bb3665&limit=500&resourceVersion=0\": dial tcp 10.200.4.43:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 16 13:05:15.170181 kubelet[3619]: E1216 13:05:15.169367 3619 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get 
\"https://10.200.4.43:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.4.43:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 16 13:05:15.170660 kubelet[3619]: I1216 13:05:15.170650 3619 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Dec 16 13:05:15.171132 kubelet[3619]: I1216 13:05:15.171124 3619 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Dec 16 13:05:15.171183 kubelet[3619]: I1216 13:05:15.171178 3619 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Dec 16 13:05:15.171248 kubelet[3619]: W1216 13:05:15.171243 3619 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Dec 16 13:05:15.174592 kubelet[3619]: I1216 13:05:15.174582 3619 server.go:1262] "Started kubelet" Dec 16 13:05:15.175845 kubelet[3619]: I1216 13:05:15.175831 3619 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 16 13:05:15.179642 kubelet[3619]: E1216 13:05:15.178422 3619 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.4.43:6443/api/v1/namespaces/default/events\": dial tcp 10.200.4.43:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4515.1.0-a-5ae2bb3665.1881b3e2b4483709 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4515.1.0-a-5ae2bb3665,UID:ci-4515.1.0-a-5ae2bb3665,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4515.1.0-a-5ae2bb3665,},FirstTimestamp:2025-12-16 13:05:15.174557449 +0000 UTC m=+0.412442822,LastTimestamp:2025-12-16 13:05:15.174557449 +0000 UTC m=+0.412442822,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4515.1.0-a-5ae2bb3665,}" Dec 16 13:05:15.180400 kubelet[3619]: I1216 13:05:15.180376 3619 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Dec 16 13:05:15.181527 kubelet[3619]: I1216 13:05:15.181510 3619 server.go:310] "Adding debug handlers to kubelet server" Dec 16 13:05:15.182000 audit[3634]: NETFILTER_CFG table=mangle:45 family=2 entries=2 op=nft_register_chain pid=3634 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:05:15.182000 audit[3634]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7fff77e470a0 a2=0 a3=0 items=0 ppid=3619 pid=3634 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:15.182000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Dec 16 13:05:15.184000 audit[3636]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_chain pid=3636 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:05:15.184000 audit[3636]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc91680850 a2=0 a3=0 items=0 ppid=3619 pid=3636 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:15.184000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D4649524557414C4C002D740066696C746572 Dec 16 13:05:15.185431 kubelet[3619]: I1216 13:05:15.185395 3619 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 16 13:05:15.185494 kubelet[3619]: I1216 13:05:15.185459 3619 server_v1.go:49] "podresources" method="list" useActivePods=true Dec 16 13:05:15.185647 kubelet[3619]: I1216 13:05:15.185633 3619 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 16 13:05:15.185915 kubelet[3619]: I1216 13:05:15.185898 3619 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 16 13:05:15.186267 kubelet[3619]: I1216 13:05:15.186257 3619 volume_manager.go:313] "Starting Kubelet Volume Manager" Dec 16 13:05:15.186538 kubelet[3619]: E1216 13:05:15.186527 3619 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4515.1.0-a-5ae2bb3665\" not found" Dec 16 13:05:15.187000 audit[3638]: NETFILTER_CFG table=filter:47 family=2 entries=2 op=nft_register_chain pid=3638 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:05:15.187000 audit[3638]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffe86b8e980 a2=0 a3=0 items=0 ppid=3619 pid=3638 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:15.187000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 16 13:05:15.189800 kubelet[3619]: E1216 13:05:15.189621 3619 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.4.43:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4515.1.0-a-5ae2bb3665?timeout=10s\": dial tcp 10.200.4.43:6443: connect: connection refused" interval="200ms" Dec 16 13:05:15.189869 kubelet[3619]: E1216 13:05:15.189821 3619 kubelet.go:1615] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 16 13:05:15.190687 kubelet[3619]: I1216 13:05:15.190664 3619 factory.go:223] Registration of the systemd container factory successfully Dec 16 13:05:15.190840 kubelet[3619]: I1216 13:05:15.190823 3619 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 16 13:05:15.191361 kubelet[3619]: I1216 13:05:15.191190 3619 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Dec 16 13:05:15.191361 kubelet[3619]: I1216 13:05:15.191257 3619 reconciler.go:29] "Reconciler: start to sync state" Dec 16 13:05:15.192512 kubelet[3619]: E1216 13:05:15.192487 3619 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.4.43:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.4.43:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 16 13:05:15.192000 audit[3640]: NETFILTER_CFG table=filter:48 family=2 entries=2 op=nft_register_chain pid=3640 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:05:15.192000 audit[3640]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffc8e26dde0 a2=0 a3=0 items=0 ppid=3619 pid=3640 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:15.192000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 16 13:05:15.193884 kubelet[3619]: I1216 13:05:15.193672 3619 factory.go:223] Registration of the containerd container factory successfully Dec 16 13:05:15.206962 kubelet[3619]: I1216 13:05:15.206947 3619 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 16 13:05:15.206962 kubelet[3619]: I1216 13:05:15.206961 3619 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 16 13:05:15.207058 kubelet[3619]: I1216 13:05:15.206976 3619 state_mem.go:36] "Initialized new in-memory state store" Dec 16 13:05:15.216577 kubelet[3619]: I1216 13:05:15.216557 3619 policy_none.go:49] "None policy: Start" Dec 16 13:05:15.216577 kubelet[3619]: I1216 13:05:15.216580 3619 memory_manager.go:187] "Starting memorymanager" policy="None" Dec 16 13:05:15.216666 kubelet[3619]: I1216 13:05:15.216592 3619 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Dec 16 13:05:15.222728 kubelet[3619]: I1216 13:05:15.222713 3619 policy_none.go:47] "Start" Dec 16 13:05:15.226483 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Dec 16 13:05:15.240577 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Dec 16 13:05:15.249380 kernel: kauditd_printk_skb: 48 callbacks suppressed Dec 16 13:05:15.249458 kernel: audit: type=1325 audit(1765890315.243:371): table=filter:49 family=2 entries=1 op=nft_register_rule pid=3646 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:05:15.243000 audit[3646]: NETFILTER_CFG table=filter:49 family=2 entries=1 op=nft_register_rule pid=3646 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:05:15.245226 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Dec 16 13:05:15.250001 kubelet[3619]: I1216 13:05:15.249957 3619 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Dec 16 13:05:15.251986 kubelet[3619]: I1216 13:05:15.251970 3619 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Dec 16 13:05:15.252155 kubelet[3619]: I1216 13:05:15.252071 3619 status_manager.go:244] "Starting to sync pod status with apiserver" Dec 16 13:05:15.252155 kubelet[3619]: I1216 13:05:15.252097 3619 kubelet.go:2427] "Starting kubelet main sync loop" Dec 16 13:05:15.243000 audit[3646]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7fff4b50dc20 a2=0 a3=0 items=0 ppid=3619 pid=3646 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:15.252505 kubelet[3619]: E1216 13:05:15.252451 3619 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 16 13:05:15.253485 kubelet[3619]: E1216 13:05:15.253471 3619 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Dec 16 13:05:15.253700 kubelet[3619]: I1216 13:05:15.253693 3619 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 16 13:05:15.253764 kubelet[3619]: I1216 13:05:15.253745 3619 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 16 13:05:15.255943 kubelet[3619]: I1216 13:05:15.255931 3619 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 16 13:05:15.257185 kubelet[3619]: E1216 13:05:15.257162 3619 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.4.43:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.4.43:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 16 13:05:15.259353 kernel: audit: type=1300 audit(1765890315.243:371): arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7fff4b50dc20 a2=0 a3=0 items=0 ppid=3619 pid=3646 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:15.243000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F380000002D2D737263003132372E Dec 16 13:05:15.264429 kubelet[3619]: E1216 13:05:15.261786 3619 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Dec 16 13:05:15.264429 kubelet[3619]: E1216 13:05:15.261827 3619 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4515.1.0-a-5ae2bb3665\" not found" Dec 16 13:05:15.250000 audit[3649]: NETFILTER_CFG table=mangle:50 family=10 entries=2 op=nft_register_chain pid=3649 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:05:15.268130 kernel: audit: type=1327 audit(1765890315.243:371): proctitle=69707461626C6573002D770035002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F380000002D2D737263003132372E Dec 16 13:05:15.268172 kernel: audit: type=1325 audit(1765890315.250:372): table=mangle:50 family=10 entries=2 op=nft_register_chain pid=3649 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:05:15.250000 audit[3649]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffe34b46670 a2=0 a3=0 items=0 ppid=3619 pid=3649 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:15.272676 kernel: audit: type=1300 audit(1765890315.250:372): arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffe34b46670 a2=0 a3=0 items=0 ppid=3619 pid=3649 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:15.250000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Dec 16 13:05:15.275638 kernel: audit: type=1327 audit(1765890315.250:372): proctitle=6970367461626C6573002D770035002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Dec 16 13:05:15.251000 audit[3648]: NETFILTER_CFG table=mangle:51 family=2 entries=1 op=nft_register_chain pid=3648 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:05:15.278434 kernel: audit: type=1325 audit(1765890315.251:373): table=mangle:51 family=2 entries=1 op=nft_register_chain pid=3648 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:05:15.251000 audit[3648]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe73044160 a2=0 a3=0 items=0 ppid=3619 pid=3648 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:15.283762 kernel: audit: type=1300 audit(1765890315.251:373): arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe73044160 a2=0 a3=0 items=0 ppid=3619 pid=3648 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:15.289276 kernel: audit: type=1327 audit(1765890315.251:373): proctitle=69707461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Dec 16 13:05:15.251000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Dec 16 13:05:15.289404 kernel: audit: type=1325 audit(1765890315.251:374): table=nat:52 family=2 entries=1 op=nft_register_chain pid=3651 
subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:05:15.251000 audit[3651]: NETFILTER_CFG table=nat:52 family=2 entries=1 op=nft_register_chain pid=3651 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:05:15.251000 audit[3651]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe6b0332b0 a2=0 a3=0 items=0 ppid=3619 pid=3651 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:15.251000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Dec 16 13:05:15.259000 audit[3650]: NETFILTER_CFG table=mangle:53 family=10 entries=1 op=nft_register_chain pid=3650 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:05:15.259000 audit[3650]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd0241c1e0 a2=0 a3=0 items=0 ppid=3619 pid=3650 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:15.259000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Dec 16 13:05:15.259000 audit[3653]: NETFILTER_CFG table=nat:54 family=10 entries=1 op=nft_register_chain pid=3653 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:05:15.259000 audit[3653]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffca389fdd0 a2=0 a3=0 items=0 ppid=3619 pid=3653 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:15.259000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Dec 16 13:05:15.259000 audit[3652]: NETFILTER_CFG table=filter:55 family=2 entries=1 op=nft_register_chain pid=3652 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:05:15.259000 audit[3652]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffef471c810 a2=0 a3=0 items=0 ppid=3619 pid=3652 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:15.259000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Dec 16 13:05:15.259000 audit[3654]: NETFILTER_CFG table=filter:56 family=10 entries=1 op=nft_register_chain pid=3654 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:05:15.259000 audit[3654]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe81c5ab20 a2=0 a3=0 items=0 ppid=3619 pid=3654 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:15.259000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Dec 16 13:05:15.355561 kubelet[3619]: I1216 13:05:15.355529 3619 kubelet_node_status.go:75] "Attempting to register node" node="ci-4515.1.0-a-5ae2bb3665" Dec 16 13:05:15.356047 kubelet[3619]: E1216 13:05:15.356020 3619 kubelet_node_status.go:107] 
"Unable to register node with API server" err="Post \"https://10.200.4.43:6443/api/v1/nodes\": dial tcp 10.200.4.43:6443: connect: connection refused" node="ci-4515.1.0-a-5ae2bb3665" Dec 16 13:05:15.367518 systemd[1]: Created slice kubepods-burstable-pod53ce3863ad7cc35986c4ebdcb5f302a6.slice - libcontainer container kubepods-burstable-pod53ce3863ad7cc35986c4ebdcb5f302a6.slice. Dec 16 13:05:15.374944 kubelet[3619]: E1216 13:05:15.374901 3619 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4515.1.0-a-5ae2bb3665\" not found" node="ci-4515.1.0-a-5ae2bb3665" Dec 16 13:05:15.378742 systemd[1]: Created slice kubepods-burstable-pod4e775dd3b3e9fd6c3410e47e3470b159.slice - libcontainer container kubepods-burstable-pod4e775dd3b3e9fd6c3410e47e3470b159.slice. Dec 16 13:05:15.380706 kubelet[3619]: E1216 13:05:15.380679 3619 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4515.1.0-a-5ae2bb3665\" not found" node="ci-4515.1.0-a-5ae2bb3665" Dec 16 13:05:15.391069 kubelet[3619]: E1216 13:05:15.390847 3619 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.4.43:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4515.1.0-a-5ae2bb3665?timeout=10s\": dial tcp 10.200.4.43:6443: connect: connection refused" interval="400ms" Dec 16 13:05:15.392285 kubelet[3619]: I1216 13:05:15.392266 3619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4e775dd3b3e9fd6c3410e47e3470b159-ca-certs\") pod \"kube-controller-manager-ci-4515.1.0-a-5ae2bb3665\" (UID: \"4e775dd3b3e9fd6c3410e47e3470b159\") " pod="kube-system/kube-controller-manager-ci-4515.1.0-a-5ae2bb3665" Dec 16 13:05:15.392453 kubelet[3619]: I1216 13:05:15.392393 3619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4e775dd3b3e9fd6c3410e47e3470b159-flexvolume-dir\") pod \"kube-controller-manager-ci-4515.1.0-a-5ae2bb3665\" (UID: \"4e775dd3b3e9fd6c3410e47e3470b159\") " pod="kube-system/kube-controller-manager-ci-4515.1.0-a-5ae2bb3665" Dec 16 13:05:15.392547 kubelet[3619]: I1216 13:05:15.392490 3619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/53ce3863ad7cc35986c4ebdcb5f302a6-ca-certs\") pod \"kube-apiserver-ci-4515.1.0-a-5ae2bb3665\" (UID: \"53ce3863ad7cc35986c4ebdcb5f302a6\") " pod="kube-system/kube-apiserver-ci-4515.1.0-a-5ae2bb3665" Dec 16 13:05:15.392643 kubelet[3619]: I1216 13:05:15.392508 3619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4e775dd3b3e9fd6c3410e47e3470b159-k8s-certs\") pod \"kube-controller-manager-ci-4515.1.0-a-5ae2bb3665\" (UID: \"4e775dd3b3e9fd6c3410e47e3470b159\") " pod="kube-system/kube-controller-manager-ci-4515.1.0-a-5ae2bb3665" Dec 16 13:05:15.392643 kubelet[3619]: I1216 13:05:15.392597 3619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4e775dd3b3e9fd6c3410e47e3470b159-kubeconfig\") pod \"kube-controller-manager-ci-4515.1.0-a-5ae2bb3665\" (UID: \"4e775dd3b3e9fd6c3410e47e3470b159\") " pod="kube-system/kube-controller-manager-ci-4515.1.0-a-5ae2bb3665" Dec 16 
13:05:15.392643 kubelet[3619]: I1216 13:05:15.392616 3619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4e775dd3b3e9fd6c3410e47e3470b159-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4515.1.0-a-5ae2bb3665\" (UID: \"4e775dd3b3e9fd6c3410e47e3470b159\") " pod="kube-system/kube-controller-manager-ci-4515.1.0-a-5ae2bb3665" Dec 16 13:05:15.392783 kubelet[3619]: I1216 13:05:15.392703 3619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/01fc872861879a2de73d72ab6e88698d-kubeconfig\") pod \"kube-scheduler-ci-4515.1.0-a-5ae2bb3665\" (UID: \"01fc872861879a2de73d72ab6e88698d\") " pod="kube-system/kube-scheduler-ci-4515.1.0-a-5ae2bb3665" Dec 16 13:05:15.392783 kubelet[3619]: I1216 13:05:15.392721 3619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/53ce3863ad7cc35986c4ebdcb5f302a6-k8s-certs\") pod \"kube-apiserver-ci-4515.1.0-a-5ae2bb3665\" (UID: \"53ce3863ad7cc35986c4ebdcb5f302a6\") " pod="kube-system/kube-apiserver-ci-4515.1.0-a-5ae2bb3665" Dec 16 13:05:15.392895 kubelet[3619]: I1216 13:05:15.392822 3619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/53ce3863ad7cc35986c4ebdcb5f302a6-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4515.1.0-a-5ae2bb3665\" (UID: \"53ce3863ad7cc35986c4ebdcb5f302a6\") " pod="kube-system/kube-apiserver-ci-4515.1.0-a-5ae2bb3665" Dec 16 13:05:15.393248 systemd[1]: Created slice kubepods-burstable-pod01fc872861879a2de73d72ab6e88698d.slice - libcontainer container kubepods-burstable-pod01fc872861879a2de73d72ab6e88698d.slice. 
Dec 16 13:05:15.395433 kubelet[3619]: E1216 13:05:15.395412 3619 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4515.1.0-a-5ae2bb3665\" not found" node="ci-4515.1.0-a-5ae2bb3665" Dec 16 13:05:15.557543 kubelet[3619]: I1216 13:05:15.557511 3619 kubelet_node_status.go:75] "Attempting to register node" node="ci-4515.1.0-a-5ae2bb3665" Dec 16 13:05:15.557855 kubelet[3619]: E1216 13:05:15.557833 3619 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.4.43:6443/api/v1/nodes\": dial tcp 10.200.4.43:6443: connect: connection refused" node="ci-4515.1.0-a-5ae2bb3665" Dec 16 13:05:15.792284 kubelet[3619]: E1216 13:05:15.792249 3619 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.4.43:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4515.1.0-a-5ae2bb3665?timeout=10s\": dial tcp 10.200.4.43:6443: connect: connection refused" interval="800ms" Dec 16 13:05:15.843370 containerd[2540]: time="2025-12-16T13:05:15.843295801Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4515.1.0-a-5ae2bb3665,Uid:53ce3863ad7cc35986c4ebdcb5f302a6,Namespace:kube-system,Attempt:0,}" Dec 16 13:05:15.887167 containerd[2540]: time="2025-12-16T13:05:15.887124426Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4515.1.0-a-5ae2bb3665,Uid:4e775dd3b3e9fd6c3410e47e3470b159,Namespace:kube-system,Attempt:0,}" Dec 16 13:05:15.895682 containerd[2540]: time="2025-12-16T13:05:15.895657576Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4515.1.0-a-5ae2bb3665,Uid:01fc872861879a2de73d72ab6e88698d,Namespace:kube-system,Attempt:0,}" Dec 16 13:05:15.960048 kubelet[3619]: I1216 13:05:15.960027 3619 kubelet_node_status.go:75] "Attempting to register node" node="ci-4515.1.0-a-5ae2bb3665" Dec 16 13:05:15.960514 kubelet[3619]: E1216 13:05:15.960369 3619 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.4.43:6443/api/v1/nodes\": dial tcp 10.200.4.43:6443: connect: connection refused" node="ci-4515.1.0-a-5ae2bb3665" Dec 16 13:05:16.307527 kubelet[3619]: E1216 13:05:16.307491 3619 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.4.43:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.4.43:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 16 13:05:16.404470 kubelet[3619]: E1216 13:05:16.404439 3619 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.4.43:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4515.1.0-a-5ae2bb3665&limit=500&resourceVersion=0\": dial tcp 10.200.4.43:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 16 13:05:16.593853 kubelet[3619]: E1216 13:05:16.593715 3619 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.4.43:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4515.1.0-a-5ae2bb3665?timeout=10s\": dial tcp 10.200.4.43:6443: connect: connection refused" interval="1.6s" Dec 16 13:05:16.743054 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1951057186.mount: Deactivated successfully. 
Dec 16 13:05:16.749424 kubelet[3619]: E1216 13:05:16.749388 3619 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.4.43:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.4.43:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 16 13:05:16.762290 kubelet[3619]: I1216 13:05:16.762267 3619 kubelet_node_status.go:75] "Attempting to register node" node="ci-4515.1.0-a-5ae2bb3665" Dec 16 13:05:16.762704 kubelet[3619]: E1216 13:05:16.762676 3619 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.4.43:6443/api/v1/nodes\": dial tcp 10.200.4.43:6443: connect: connection refused" node="ci-4515.1.0-a-5ae2bb3665" Dec 16 13:05:16.765658 containerd[2540]: time="2025-12-16T13:05:16.765599331Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 13:05:16.772119 kubelet[3619]: E1216 13:05:16.772093 3619 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.4.43:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.4.43:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 16 13:05:16.775420 containerd[2540]: time="2025-12-16T13:05:16.775189933Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Dec 16 13:05:16.778856 containerd[2540]: time="2025-12-16T13:05:16.778825468Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 13:05:16.782569 containerd[2540]: time="2025-12-16T13:05:16.782534811Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 13:05:16.789327 containerd[2540]: time="2025-12-16T13:05:16.789203345Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Dec 16 13:05:16.792540 containerd[2540]: time="2025-12-16T13:05:16.792511100Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 13:05:16.799222 containerd[2540]: time="2025-12-16T13:05:16.799191462Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Dec 16 13:05:16.800161 containerd[2540]: time="2025-12-16T13:05:16.800124546Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 13:05:16.802372 containerd[2540]: time="2025-12-16T13:05:16.800653989Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest 
\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 899.75457ms" Dec 16 13:05:16.804880 containerd[2540]: time="2025-12-16T13:05:16.804847372Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 898.127258ms" Dec 16 13:05:16.812308 containerd[2540]: time="2025-12-16T13:05:16.812281013Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 900.270892ms" Dec 16 13:05:16.855085 containerd[2540]: time="2025-12-16T13:05:16.854900772Z" level=info msg="connecting to shim 811a3237a679676b09992f2481207e813a4e3a586c7ae2e371bce3b3db7513da" address="unix:///run/containerd/s/a36c677dcb95e6bdae0d84bb655e55fc2562a35396dd63030d72c20b97e53e48" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:05:16.876016 containerd[2540]: time="2025-12-16T13:05:16.875612060Z" level=info msg="connecting to shim 60da61bed688bf309d3e08762c9f553b454f2ebf5f05f7e5de6e2305b63c51f5" address="unix:///run/containerd/s/6393770adda7fbb90c81ae0ca47f3d06633a80d55542d129cd29d3908c879dbf" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:05:16.888588 systemd[1]: Started cri-containerd-811a3237a679676b09992f2481207e813a4e3a586c7ae2e371bce3b3db7513da.scope - libcontainer container 811a3237a679676b09992f2481207e813a4e3a586c7ae2e371bce3b3db7513da. Dec 16 13:05:16.896182 containerd[2540]: time="2025-12-16T13:05:16.896149404Z" level=info msg="connecting to shim 6e64464e8774dd366d5aa169f6d6c09fd1e0db6010065320d217ff872e8fdc61" address="unix:///run/containerd/s/1cc24235daf01f62f54113507a355c5a01bb44891d85d87a59f7c9a9ff6c71e9" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:05:16.916650 systemd[1]: Started cri-containerd-60da61bed688bf309d3e08762c9f553b454f2ebf5f05f7e5de6e2305b63c51f5.scope - libcontainer container 60da61bed688bf309d3e08762c9f553b454f2ebf5f05f7e5de6e2305b63c51f5. 
Dec 16 13:05:16.918000 audit: BPF prog-id=107 op=LOAD Dec 16 13:05:16.919000 audit: BPF prog-id=108 op=LOAD Dec 16 13:05:16.919000 audit[3679]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178238 a2=98 a3=0 items=0 ppid=3667 pid=3679 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:16.919000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831316133323337613637393637366230393939326632343831323037 Dec 16 13:05:16.920000 audit: BPF prog-id=108 op=UNLOAD Dec 16 13:05:16.920000 audit[3679]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3667 pid=3679 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:16.920000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831316133323337613637393637366230393939326632343831323037 Dec 16 13:05:16.920000 audit: BPF prog-id=109 op=LOAD Dec 16 13:05:16.920000 audit[3679]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178488 a2=98 a3=0 items=0 ppid=3667 pid=3679 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:16.920000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831316133323337613637393637366230393939326632343831323037 Dec 16 13:05:16.920000 audit: BPF prog-id=110 op=LOAD Dec 16 13:05:16.920000 audit[3679]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000178218 a2=98 a3=0 items=0 ppid=3667 pid=3679 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:16.920000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831316133323337613637393637366230393939326632343831323037 Dec 16 13:05:16.920000 audit: BPF prog-id=110 op=UNLOAD Dec 16 13:05:16.920000 audit[3679]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3667 pid=3679 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:16.920000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831316133323337613637393637366230393939326632343831323037 Dec 16 13:05:16.920000 audit: BPF prog-id=109 op=UNLOAD Dec 16 13:05:16.920000 audit[3679]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3667 pid=3679 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:16.920000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831316133323337613637393637366230393939326632343831323037 Dec 16 13:05:16.920000 audit: BPF prog-id=111 op=LOAD Dec 16 13:05:16.920000 audit[3679]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001786e8 a2=98 a3=0 items=0 ppid=3667 pid=3679 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:16.920000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831316133323337613637393637366230393939326632343831323037 Dec 16 13:05:16.932722 systemd[1]: Started cri-containerd-6e64464e8774dd366d5aa169f6d6c09fd1e0db6010065320d217ff872e8fdc61.scope - libcontainer container 6e64464e8774dd366d5aa169f6d6c09fd1e0db6010065320d217ff872e8fdc61. Dec 16 13:05:16.936000 audit: BPF prog-id=112 op=LOAD Dec 16 13:05:16.937000 audit: BPF prog-id=113 op=LOAD Dec 16 13:05:16.937000 audit[3715]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8238 a2=98 a3=0 items=0 ppid=3692 pid=3715 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:16.937000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3630646136316265643638386266333039643365303837363263396635 Dec 16 13:05:16.938000 audit: BPF prog-id=113 op=UNLOAD Dec 16 13:05:16.938000 audit[3715]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3692 pid=3715 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:16.938000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3630646136316265643638386266333039643365303837363263396635 Dec 16 13:05:16.939000 audit: BPF prog-id=114 op=LOAD Dec 16 13:05:16.939000 audit[3715]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=3692 pid=3715 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:16.939000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3630646136316265643638386266333039643365303837363263396635 Dec 16 13:05:16.939000 audit: BPF prog-id=115 op=LOAD Dec 16 13:05:16.939000 audit[3715]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=3692 pid=3715 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:16.939000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3630646136316265643638386266333039643365303837363263396635 Dec 16 13:05:16.939000 audit: BPF prog-id=115 op=UNLOAD Dec 16 13:05:16.939000 audit[3715]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3692 pid=3715 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:16.939000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3630646136316265643638386266333039643365303837363263396635 Dec 16 13:05:16.939000 audit: BPF prog-id=114 op=UNLOAD Dec 16 13:05:16.939000 audit[3715]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3692 pid=3715 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:16.939000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3630646136316265643638386266333039643365303837363263396635 Dec 16 13:05:16.939000 audit: BPF prog-id=116 op=LOAD Dec 16 13:05:16.939000 audit[3715]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 ppid=3692 pid=3715 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:16.939000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3630646136316265643638386266333039643365303837363263396635 Dec 16 13:05:16.948000 audit: BPF prog-id=117 op=LOAD Dec 16 13:05:16.950000 audit: BPF prog-id=118 op=LOAD Dec 16 13:05:16.950000 audit[3740]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178238 a2=98 a3=0 items=0 ppid=3722 pid=3740 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:16.950000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3665363434363465383737346464333636643561613136396636643663 Dec 16 13:05:16.950000 audit: BPF prog-id=118 op=UNLOAD Dec 16 13:05:16.950000 audit[3740]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3722 pid=3740 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:16.950000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3665363434363465383737346464333636643561613136396636643663 Dec 16 13:05:16.950000 audit: BPF prog-id=119 op=LOAD Dec 16 13:05:16.950000 audit[3740]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178488 a2=98 a3=0 items=0 ppid=3722 pid=3740 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:16.950000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3665363434363465383737346464333636643561613136396636643663 Dec 16 13:05:16.951000 audit: BPF prog-id=120 op=LOAD Dec 16 13:05:16.951000 audit[3740]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000178218 a2=98 a3=0 items=0 ppid=3722 pid=3740 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:16.951000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3665363434363465383737346464333636643561613136396636643663 Dec 16 13:05:16.951000 audit: BPF prog-id=120 op=UNLOAD Dec 16 13:05:16.951000 audit[3740]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3722 pid=3740 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:16.951000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3665363434363465383737346464333636643561613136396636643663 Dec 16 13:05:16.951000 audit: BPF prog-id=119 op=UNLOAD Dec 16 13:05:16.951000 audit[3740]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3722 pid=3740 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:16.951000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3665363434363465383737346464333636643561613136396636643663 Dec 16 13:05:16.952000 audit: BPF prog-id=121 op=LOAD Dec 16 13:05:16.952000 audit[3740]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001786e8 a2=98 a3=0 items=0 ppid=3722 pid=3740 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:16.952000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3665363434363465383737346464333636643561613136396636643663 Dec 16 13:05:16.973274 containerd[2540]: time="2025-12-16T13:05:16.972793308Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4515.1.0-a-5ae2bb3665,Uid:53ce3863ad7cc35986c4ebdcb5f302a6,Namespace:kube-system,Attempt:0,} returns sandbox id \"811a3237a679676b09992f2481207e813a4e3a586c7ae2e371bce3b3db7513da\"" Dec 16 13:05:16.985548 containerd[2540]: time="2025-12-16T13:05:16.985096026Z" level=info msg="CreateContainer within sandbox \"811a3237a679676b09992f2481207e813a4e3a586c7ae2e371bce3b3db7513da\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 16 13:05:17.010201 containerd[2540]: time="2025-12-16T13:05:17.010175076Z" level=info msg="Container 309e9df2101234f0924f81acc7564222ed14f2ce73e5b8026e4ccfa1b02c15cc: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:05:17.014206 containerd[2540]: time="2025-12-16T13:05:17.014180998Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4515.1.0-a-5ae2bb3665,Uid:4e775dd3b3e9fd6c3410e47e3470b159,Namespace:kube-system,Attempt:0,} returns sandbox id \"60da61bed688bf309d3e08762c9f553b454f2ebf5f05f7e5de6e2305b63c51f5\"" Dec 16 13:05:17.022055 containerd[2540]: time="2025-12-16T13:05:17.021827640Z" level=info msg="CreateContainer within sandbox \"60da61bed688bf309d3e08762c9f553b454f2ebf5f05f7e5de6e2305b63c51f5\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 16 13:05:17.031476 containerd[2540]: time="2025-12-16T13:05:17.031452924Z" level=info msg="CreateContainer within sandbox \"811a3237a679676b09992f2481207e813a4e3a586c7ae2e371bce3b3db7513da\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"309e9df2101234f0924f81acc7564222ed14f2ce73e5b8026e4ccfa1b02c15cc\"" Dec 16 13:05:17.031578 containerd[2540]: time="2025-12-16T13:05:17.031557179Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4515.1.0-a-5ae2bb3665,Uid:01fc872861879a2de73d72ab6e88698d,Namespace:kube-system,Attempt:0,} returns sandbox id \"6e64464e8774dd366d5aa169f6d6c09fd1e0db6010065320d217ff872e8fdc61\"" Dec 16 13:05:17.032083 containerd[2540]: time="2025-12-16T13:05:17.032063408Z" level=info msg="StartContainer for \"309e9df2101234f0924f81acc7564222ed14f2ce73e5b8026e4ccfa1b02c15cc\"" Dec 16 13:05:17.032923 containerd[2540]: time="2025-12-16T13:05:17.032898916Z" level=info msg="connecting to shim 309e9df2101234f0924f81acc7564222ed14f2ce73e5b8026e4ccfa1b02c15cc" address="unix:///run/containerd/s/a36c677dcb95e6bdae0d84bb655e55fc2562a35396dd63030d72c20b97e53e48" protocol=ttrpc version=3 Dec 
16 13:05:17.042764 containerd[2540]: time="2025-12-16T13:05:17.042742658Z" level=info msg="CreateContainer within sandbox \"6e64464e8774dd366d5aa169f6d6c09fd1e0db6010065320d217ff872e8fdc61\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 16 13:05:17.051541 systemd[1]: Started cri-containerd-309e9df2101234f0924f81acc7564222ed14f2ce73e5b8026e4ccfa1b02c15cc.scope - libcontainer container 309e9df2101234f0924f81acc7564222ed14f2ce73e5b8026e4ccfa1b02c15cc. Dec 16 13:05:17.056076 containerd[2540]: time="2025-12-16T13:05:17.056043844Z" level=info msg="Container 5b9e89460e486d75cc8929bdf7d629dcc7a77ab835d18d8608dee0d8d8e77d4e: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:05:17.062000 audit: BPF prog-id=122 op=LOAD Dec 16 13:05:17.062000 audit: BPF prog-id=123 op=LOAD Dec 16 13:05:17.062000 audit[3796]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000138238 a2=98 a3=0 items=0 ppid=3667 pid=3796 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:17.062000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3330396539646632313031323334663039323466383161636337353634 Dec 16 13:05:17.062000 audit: BPF prog-id=123 op=UNLOAD Dec 16 13:05:17.062000 audit[3796]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3667 pid=3796 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:17.062000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3330396539646632313031323334663039323466383161636337353634 Dec 16 13:05:17.062000 audit: BPF prog-id=124 op=LOAD Dec 16 13:05:17.062000 audit[3796]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000138488 a2=98 a3=0 items=0 ppid=3667 pid=3796 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:17.062000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3330396539646632313031323334663039323466383161636337353634 Dec 16 13:05:17.062000 audit: BPF prog-id=125 op=LOAD Dec 16 13:05:17.062000 audit[3796]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000138218 a2=98 a3=0 items=0 ppid=3667 pid=3796 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:17.062000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3330396539646632313031323334663039323466383161636337353634 Dec 16 13:05:17.062000 audit: BPF prog-id=125 op=UNLOAD Dec 16 13:05:17.062000 
audit[3796]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3667 pid=3796 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:17.062000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3330396539646632313031323334663039323466383161636337353634 Dec 16 13:05:17.062000 audit: BPF prog-id=124 op=UNLOAD Dec 16 13:05:17.062000 audit[3796]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3667 pid=3796 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:17.062000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3330396539646632313031323334663039323466383161636337353634 Dec 16 13:05:17.062000 audit: BPF prog-id=126 op=LOAD Dec 16 13:05:17.062000 audit[3796]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001386e8 a2=98 a3=0 items=0 ppid=3667 pid=3796 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:17.062000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3330396539646632313031323334663039323466383161636337353634 Dec 16 13:05:17.076359 containerd[2540]: time="2025-12-16T13:05:17.076033173Z" level=info msg="CreateContainer within sandbox \"60da61bed688bf309d3e08762c9f553b454f2ebf5f05f7e5de6e2305b63c51f5\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"5b9e89460e486d75cc8929bdf7d629dcc7a77ab835d18d8608dee0d8d8e77d4e\"" Dec 16 13:05:17.076643 containerd[2540]: time="2025-12-16T13:05:17.076625720Z" level=info msg="Container fa236d1f842d30f4ab184f8c6ba82af4172adc09c8ba75dab7ce439801c99dec: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:05:17.076854 containerd[2540]: time="2025-12-16T13:05:17.076839623Z" level=info msg="StartContainer for \"5b9e89460e486d75cc8929bdf7d629dcc7a77ab835d18d8608dee0d8d8e77d4e\"" Dec 16 13:05:17.079634 containerd[2540]: time="2025-12-16T13:05:17.079571538Z" level=info msg="connecting to shim 5b9e89460e486d75cc8929bdf7d629dcc7a77ab835d18d8608dee0d8d8e77d4e" address="unix:///run/containerd/s/6393770adda7fbb90c81ae0ca47f3d06633a80d55542d129cd29d3908c879dbf" protocol=ttrpc version=3 Dec 16 13:05:17.103702 systemd[1]: Started cri-containerd-5b9e89460e486d75cc8929bdf7d629dcc7a77ab835d18d8608dee0d8d8e77d4e.scope - libcontainer container 5b9e89460e486d75cc8929bdf7d629dcc7a77ab835d18d8608dee0d8d8e77d4e. 
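The audit PROCTITLE fields in the records above carry the runc command line hex-encoded, with NUL bytes between arguments. A minimal Python sketch for making them readable; the sample string is a shortened prefix of the values logged above (the full values append the truncated containerd task ID):

    # Decode an audit PROCTITLE value: hex-encoded argv with NUL separators.
    def decode_proctitle(hex_value: str) -> list[str]:
        raw = bytes.fromhex(hex_value)
        return [arg.decode("utf-8", errors="replace") for arg in raw.split(b"\x00") if arg]

    # Shortened prefix of the proctitle values logged above.
    sample = "72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67"
    print(decode_proctitle(sample))
    # -> ['runc', '--root', '/run/containerd/runc/k8s.io', '--log']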
Dec 16 13:05:17.111129 containerd[2540]: time="2025-12-16T13:05:17.111047136Z" level=info msg="CreateContainer within sandbox \"6e64464e8774dd366d5aa169f6d6c09fd1e0db6010065320d217ff872e8fdc61\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"fa236d1f842d30f4ab184f8c6ba82af4172adc09c8ba75dab7ce439801c99dec\"" Dec 16 13:05:17.114966 containerd[2540]: time="2025-12-16T13:05:17.114937992Z" level=info msg="StartContainer for \"fa236d1f842d30f4ab184f8c6ba82af4172adc09c8ba75dab7ce439801c99dec\"" Dec 16 13:05:17.116056 containerd[2540]: time="2025-12-16T13:05:17.116024132Z" level=info msg="connecting to shim fa236d1f842d30f4ab184f8c6ba82af4172adc09c8ba75dab7ce439801c99dec" address="unix:///run/containerd/s/1cc24235daf01f62f54113507a355c5a01bb44891d85d87a59f7c9a9ff6c71e9" protocol=ttrpc version=3 Dec 16 13:05:17.123431 containerd[2540]: time="2025-12-16T13:05:17.122419189Z" level=info msg="StartContainer for \"309e9df2101234f0924f81acc7564222ed14f2ce73e5b8026e4ccfa1b02c15cc\" returns successfully" Dec 16 13:05:17.130000 audit: BPF prog-id=127 op=LOAD Dec 16 13:05:17.130000 audit: BPF prog-id=128 op=LOAD Dec 16 13:05:17.130000 audit[3817]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000228238 a2=98 a3=0 items=0 ppid=3692 pid=3817 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:17.130000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3562396538393436306534383664373563633839323962646637643632 Dec 16 13:05:17.130000 audit: BPF prog-id=128 op=UNLOAD Dec 16 13:05:17.130000 audit[3817]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3692 pid=3817 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:17.130000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3562396538393436306534383664373563633839323962646637643632 Dec 16 13:05:17.132000 audit: BPF prog-id=129 op=LOAD Dec 16 13:05:17.132000 audit[3817]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000228488 a2=98 a3=0 items=0 ppid=3692 pid=3817 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:17.132000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3562396538393436306534383664373563633839323962646637643632 Dec 16 13:05:17.133000 audit: BPF prog-id=130 op=LOAD Dec 16 13:05:17.133000 audit[3817]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000228218 a2=98 a3=0 items=0 ppid=3692 pid=3817 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:17.133000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3562396538393436306534383664373563633839323962646637643632 Dec 16 13:05:17.133000 audit: BPF prog-id=130 op=UNLOAD Dec 16 13:05:17.133000 audit[3817]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3692 pid=3817 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:17.133000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3562396538393436306534383664373563633839323962646637643632 Dec 16 13:05:17.133000 audit: BPF prog-id=129 op=UNLOAD Dec 16 13:05:17.133000 audit[3817]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3692 pid=3817 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:17.133000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3562396538393436306534383664373563633839323962646637643632 Dec 16 13:05:17.133000 audit: BPF prog-id=131 op=LOAD Dec 16 13:05:17.133000 audit[3817]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0002286e8 a2=98 a3=0 items=0 ppid=3692 pid=3817 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:17.133000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3562396538393436306534383664373563633839323962646637643632 Dec 16 13:05:17.140698 systemd[1]: Started cri-containerd-fa236d1f842d30f4ab184f8c6ba82af4172adc09c8ba75dab7ce439801c99dec.scope - libcontainer container fa236d1f842d30f4ab184f8c6ba82af4172adc09c8ba75dab7ce439801c99dec. 
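Most of the audit volume here is paired BPF prog-id LOAD/UNLOAD events emitted while runc sets up each container. A small sketch, assuming this journal has been exported to a plain-text file (node.log is a hypothetical name), that reports which program IDs are still loaded at the end of the capture:

    import re

    # Matches records such as "audit: BPF prog-id=130 op=UNLOAD".
    BPF_EVENT = re.compile(r"audit: BPF prog-id=(\d+) op=(LOAD|UNLOAD)")

    def outstanding_bpf_prog_ids(log_text: str) -> list[int]:
        live = {}
        for prog_id, op in BPF_EVENT.findall(log_text):
            if op == "LOAD":
                live[int(prog_id)] = True
            else:
                live.pop(int(prog_id), None)
        return sorted(live)  # IDs loaded but never unloaded in this capture

    with open("node.log") as f:  # hypothetical export of this journal
        print(outstanding_bpf_prog_ids(f.read()))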
Dec 16 13:05:17.153000 audit: BPF prog-id=132 op=LOAD Dec 16 13:05:17.154000 audit: BPF prog-id=133 op=LOAD Dec 16 13:05:17.154000 audit[3844]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017e238 a2=98 a3=0 items=0 ppid=3722 pid=3844 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:17.154000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6661323336643166383432643330663461623138346638633662613832 Dec 16 13:05:17.154000 audit: BPF prog-id=133 op=UNLOAD Dec 16 13:05:17.154000 audit[3844]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3722 pid=3844 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:17.154000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6661323336643166383432643330663461623138346638633662613832 Dec 16 13:05:17.154000 audit: BPF prog-id=134 op=LOAD Dec 16 13:05:17.154000 audit[3844]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017e488 a2=98 a3=0 items=0 ppid=3722 pid=3844 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:17.154000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6661323336643166383432643330663461623138346638633662613832 Dec 16 13:05:17.154000 audit: BPF prog-id=135 op=LOAD Dec 16 13:05:17.154000 audit[3844]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017e218 a2=98 a3=0 items=0 ppid=3722 pid=3844 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:17.154000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6661323336643166383432643330663461623138346638633662613832 Dec 16 13:05:17.154000 audit: BPF prog-id=135 op=UNLOAD Dec 16 13:05:17.154000 audit[3844]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3722 pid=3844 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:17.154000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6661323336643166383432643330663461623138346638633662613832 Dec 16 13:05:17.154000 audit: BPF prog-id=134 op=UNLOAD Dec 16 13:05:17.154000 audit[3844]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3722 pid=3844 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:17.154000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6661323336643166383432643330663461623138346638633662613832 Dec 16 13:05:17.154000 audit: BPF prog-id=136 op=LOAD Dec 16 13:05:17.154000 audit[3844]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017e6e8 a2=98 a3=0 items=0 ppid=3722 pid=3844 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:17.154000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6661323336643166383432643330663461623138346638633662613832 Dec 16 13:05:17.158662 kubelet[3619]: E1216 13:05:17.158619 3619 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.4.43:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.4.43:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Dec 16 13:05:17.207920 containerd[2540]: time="2025-12-16T13:05:17.207889616Z" level=info msg="StartContainer for \"5b9e89460e486d75cc8929bdf7d629dcc7a77ab835d18d8608dee0d8d8e77d4e\" returns successfully" Dec 16 13:05:17.292888 containerd[2540]: time="2025-12-16T13:05:17.292848431Z" level=info msg="StartContainer for \"fa236d1f842d30f4ab184f8c6ba82af4172adc09c8ba75dab7ce439801c99dec\" returns successfully" Dec 16 13:05:17.301016 kubelet[3619]: E1216 13:05:17.300910 3619 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4515.1.0-a-5ae2bb3665\" not found" node="ci-4515.1.0-a-5ae2bb3665" Dec 16 13:05:17.304613 kubelet[3619]: E1216 13:05:17.304593 3619 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4515.1.0-a-5ae2bb3665\" not found" node="ci-4515.1.0-a-5ae2bb3665" Dec 16 13:05:18.310461 kubelet[3619]: E1216 13:05:18.310420 3619 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4515.1.0-a-5ae2bb3665\" not found" node="ci-4515.1.0-a-5ae2bb3665" Dec 16 13:05:18.312368 kubelet[3619]: E1216 13:05:18.311130 3619 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4515.1.0-a-5ae2bb3665\" not found" node="ci-4515.1.0-a-5ae2bb3665" Dec 16 13:05:18.364752 kubelet[3619]: I1216 13:05:18.364730 3619 kubelet_node_status.go:75] "Attempting to register node" node="ci-4515.1.0-a-5ae2bb3665" Dec 16 13:05:18.885436 kubelet[3619]: E1216 13:05:18.885382 3619 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4515.1.0-a-5ae2bb3665\" not found" node="ci-4515.1.0-a-5ae2bb3665" Dec 16 13:05:19.009121 kubelet[3619]: E1216 13:05:19.009003 3619 event.go:359] "Server 
rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4515.1.0-a-5ae2bb3665.1881b3e2b4483709 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4515.1.0-a-5ae2bb3665,UID:ci-4515.1.0-a-5ae2bb3665,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4515.1.0-a-5ae2bb3665,},FirstTimestamp:2025-12-16 13:05:15.174557449 +0000 UTC m=+0.412442822,LastTimestamp:2025-12-16 13:05:15.174557449 +0000 UTC m=+0.412442822,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4515.1.0-a-5ae2bb3665,}" Dec 16 13:05:19.079721 kubelet[3619]: I1216 13:05:19.079676 3619 kubelet_node_status.go:78] "Successfully registered node" node="ci-4515.1.0-a-5ae2bb3665" Dec 16 13:05:19.087914 kubelet[3619]: I1216 13:05:19.087883 3619 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4515.1.0-a-5ae2bb3665" Dec 16 13:05:19.108098 kubelet[3619]: E1216 13:05:19.107942 3619 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4515.1.0-a-5ae2bb3665\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4515.1.0-a-5ae2bb3665" Dec 16 13:05:19.108098 kubelet[3619]: I1216 13:05:19.107966 3619 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4515.1.0-a-5ae2bb3665" Dec 16 13:05:19.109994 kubelet[3619]: E1216 13:05:19.109841 3619 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4515.1.0-a-5ae2bb3665\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4515.1.0-a-5ae2bb3665" Dec 16 13:05:19.109994 kubelet[3619]: I1216 13:05:19.109865 3619 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4515.1.0-a-5ae2bb3665" Dec 16 13:05:19.111527 kubelet[3619]: E1216 13:05:19.111488 3619 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4515.1.0-a-5ae2bb3665\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4515.1.0-a-5ae2bb3665" Dec 16 13:05:19.170869 kubelet[3619]: I1216 13:05:19.170841 3619 apiserver.go:52] "Watching apiserver" Dec 16 13:05:19.191455 kubelet[3619]: I1216 13:05:19.191431 3619 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Dec 16 13:05:19.310495 kubelet[3619]: I1216 13:05:19.310477 3619 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4515.1.0-a-5ae2bb3665" Dec 16 13:05:19.312014 kubelet[3619]: E1216 13:05:19.311815 3619 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4515.1.0-a-5ae2bb3665\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4515.1.0-a-5ae2bb3665" Dec 16 13:05:20.311731 kubelet[3619]: I1216 13:05:20.311695 3619 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4515.1.0-a-5ae2bb3665" Dec 16 13:05:20.322709 kubelet[3619]: I1216 13:05:20.322523 3619 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Dec 16 
13:05:21.217510 systemd[1]: Reload requested from client PID 3901 ('systemctl') (unit session-9.scope)... Dec 16 13:05:21.217527 systemd[1]: Reloading... Dec 16 13:05:21.312388 zram_generator::config[3951]: No configuration found. Dec 16 13:05:21.520952 systemd[1]: Reloading finished in 303 ms. Dec 16 13:05:21.549044 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 13:05:21.560309 systemd[1]: kubelet.service: Deactivated successfully. Dec 16 13:05:21.560630 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 13:05:21.562304 kernel: kauditd_printk_skb: 146 callbacks suppressed Dec 16 13:05:21.562378 kernel: audit: type=1131 audit(1765890321.559:427): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:05:21.559000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:05:21.560699 systemd[1]: kubelet.service: Consumed 770ms CPU time, 125M memory peak. Dec 16 13:05:21.569534 kernel: audit: type=1334 audit(1765890321.561:428): prog-id=137 op=LOAD Dec 16 13:05:21.561000 audit: BPF prog-id=137 op=LOAD Dec 16 13:05:21.564480 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 13:05:21.571842 kernel: audit: type=1334 audit(1765890321.561:429): prog-id=94 op=UNLOAD Dec 16 13:05:21.561000 audit: BPF prog-id=94 op=UNLOAD Dec 16 13:05:21.573906 kernel: audit: type=1334 audit(1765890321.561:430): prog-id=138 op=LOAD Dec 16 13:05:21.561000 audit: BPF prog-id=138 op=LOAD Dec 16 13:05:21.561000 audit: BPF prog-id=139 op=LOAD Dec 16 13:05:21.561000 audit: BPF prog-id=95 op=UNLOAD Dec 16 13:05:21.561000 audit: BPF prog-id=96 op=UNLOAD Dec 16 13:05:21.561000 audit: BPF prog-id=140 op=LOAD Dec 16 13:05:21.561000 audit: BPF prog-id=104 op=UNLOAD Dec 16 13:05:21.566000 audit: BPF prog-id=141 op=LOAD Dec 16 13:05:21.566000 audit: BPF prog-id=87 op=UNLOAD Dec 16 13:05:21.566000 audit: BPF prog-id=142 op=LOAD Dec 16 13:05:21.566000 audit: BPF prog-id=143 op=LOAD Dec 16 13:05:21.566000 audit: BPF prog-id=88 op=UNLOAD Dec 16 13:05:21.566000 audit: BPF prog-id=89 op=UNLOAD Dec 16 13:05:21.566000 audit: BPF prog-id=144 op=LOAD Dec 16 13:05:21.566000 audit: BPF prog-id=97 op=UNLOAD Dec 16 13:05:21.566000 audit: BPF prog-id=145 op=LOAD Dec 16 13:05:21.566000 audit: BPF prog-id=146 op=LOAD Dec 16 13:05:21.574354 kernel: audit: type=1334 audit(1765890321.561:431): prog-id=139 op=LOAD Dec 16 13:05:21.574378 kernel: audit: type=1334 audit(1765890321.561:432): prog-id=95 op=UNLOAD Dec 16 13:05:21.574395 kernel: audit: type=1334 audit(1765890321.561:433): prog-id=96 op=UNLOAD Dec 16 13:05:21.574410 kernel: audit: type=1334 audit(1765890321.561:434): prog-id=140 op=LOAD Dec 16 13:05:21.574427 kernel: audit: type=1334 audit(1765890321.561:435): prog-id=104 op=UNLOAD Dec 16 13:05:21.574440 kernel: audit: type=1334 audit(1765890321.566:436): prog-id=141 op=LOAD Dec 16 13:05:21.566000 audit: BPF prog-id=98 op=UNLOAD Dec 16 13:05:21.566000 audit: BPF prog-id=99 op=UNLOAD Dec 16 13:05:21.569000 audit: BPF prog-id=147 op=LOAD Dec 16 13:05:21.569000 audit: BPF prog-id=148 op=LOAD Dec 16 13:05:21.569000 audit: BPF prog-id=105 op=UNLOAD Dec 16 13:05:21.569000 audit: BPF prog-id=106 op=UNLOAD Dec 16 13:05:21.570000 audit: BPF prog-id=149 
op=LOAD Dec 16 13:05:21.570000 audit: BPF prog-id=90 op=UNLOAD Dec 16 13:05:21.572000 audit: BPF prog-id=150 op=LOAD Dec 16 13:05:21.572000 audit: BPF prog-id=91 op=UNLOAD Dec 16 13:05:21.572000 audit: BPF prog-id=151 op=LOAD Dec 16 13:05:21.572000 audit: BPF prog-id=152 op=LOAD Dec 16 13:05:21.572000 audit: BPF prog-id=92 op=UNLOAD Dec 16 13:05:21.572000 audit: BPF prog-id=93 op=UNLOAD Dec 16 13:05:21.575000 audit: BPF prog-id=153 op=LOAD Dec 16 13:05:21.575000 audit: BPF prog-id=101 op=UNLOAD Dec 16 13:05:21.575000 audit: BPF prog-id=154 op=LOAD Dec 16 13:05:21.575000 audit: BPF prog-id=155 op=LOAD Dec 16 13:05:21.575000 audit: BPF prog-id=102 op=UNLOAD Dec 16 13:05:21.576000 audit: BPF prog-id=103 op=UNLOAD Dec 16 13:05:21.577000 audit: BPF prog-id=156 op=LOAD Dec 16 13:05:21.578000 audit: BPF prog-id=100 op=UNLOAD Dec 16 13:05:23.588603 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 13:05:23.587000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:05:23.597576 (kubelet)[4018]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 16 13:05:23.641827 kubelet[4018]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 16 13:05:23.641827 kubelet[4018]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 13:05:23.642056 kubelet[4018]: I1216 13:05:23.641858 4018 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 16 13:05:23.646684 kubelet[4018]: I1216 13:05:23.646658 4018 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Dec 16 13:05:23.646684 kubelet[4018]: I1216 13:05:23.646676 4018 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 16 13:05:23.646806 kubelet[4018]: I1216 13:05:23.646700 4018 watchdog_linux.go:95] "Systemd watchdog is not enabled" Dec 16 13:05:23.646806 kubelet[4018]: I1216 13:05:23.646707 4018 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Dec 16 13:05:23.646914 kubelet[4018]: I1216 13:05:23.646900 4018 server.go:956] "Client rotation is on, will bootstrap in background" Dec 16 13:05:23.647880 kubelet[4018]: I1216 13:05:23.647861 4018 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Dec 16 13:05:23.654019 kubelet[4018]: I1216 13:05:23.653989 4018 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 16 13:05:23.659577 kubelet[4018]: I1216 13:05:23.659558 4018 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 16 13:05:23.661444 kubelet[4018]: I1216 13:05:23.661430 4018 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Dec 16 13:05:23.661946 kubelet[4018]: I1216 13:05:23.661685 4018 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 16 13:05:23.661946 kubelet[4018]: I1216 13:05:23.661707 4018 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4515.1.0-a-5ae2bb3665","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 16 13:05:23.661946 kubelet[4018]: I1216 13:05:23.661821 4018 topology_manager.go:138] "Creating topology manager with none policy" Dec 16 13:05:23.661946 kubelet[4018]: I1216 13:05:23.661829 4018 container_manager_linux.go:306] "Creating device plugin manager" Dec 16 13:05:23.662096 kubelet[4018]: I1216 13:05:23.661847 4018 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Dec 16 13:05:23.662565 kubelet[4018]: I1216 13:05:23.662557 4018 state_mem.go:36] "Initialized new in-memory state store" Dec 16 13:05:23.662729 kubelet[4018]: I1216 13:05:23.662723 4018 kubelet.go:475] "Attempting to sync node with API server" Dec 16 13:05:23.662764 kubelet[4018]: I1216 13:05:23.662760 4018 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 16 13:05:23.662817 kubelet[4018]: I1216 13:05:23.662813 4018 kubelet.go:387] "Adding apiserver pod source" Dec 16 13:05:23.662864 kubelet[4018]: I1216 13:05:23.662860 4018 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 16 13:05:23.664643 kubelet[4018]: I1216 13:05:23.664621 4018 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Dec 16 13:05:23.665309 kubelet[4018]: I1216 13:05:23.665299 4018 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Dec 16 13:05:23.665417 kubelet[4018]: I1216 13:05:23.665410 4018 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Dec 16 
13:05:23.670593 kubelet[4018]: I1216 13:05:23.670579 4018 server.go:1262] "Started kubelet" Dec 16 13:05:23.672543 kubelet[4018]: I1216 13:05:23.672531 4018 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 16 13:05:23.681961 kubelet[4018]: I1216 13:05:23.681630 4018 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Dec 16 13:05:23.682740 kubelet[4018]: I1216 13:05:23.682723 4018 server.go:310] "Adding debug handlers to kubelet server" Dec 16 13:05:23.690642 kubelet[4018]: I1216 13:05:23.690494 4018 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 16 13:05:23.690642 kubelet[4018]: I1216 13:05:23.690548 4018 server_v1.go:49] "podresources" method="list" useActivePods=true Dec 16 13:05:23.690740 kubelet[4018]: I1216 13:05:23.690688 4018 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 16 13:05:23.690964 kubelet[4018]: I1216 13:05:23.690951 4018 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 16 13:05:23.692121 kubelet[4018]: I1216 13:05:23.692107 4018 volume_manager.go:313] "Starting Kubelet Volume Manager" Dec 16 13:05:23.692561 kubelet[4018]: E1216 13:05:23.692467 4018 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4515.1.0-a-5ae2bb3665\" not found" Dec 16 13:05:23.693497 kubelet[4018]: I1216 13:05:23.693483 4018 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Dec 16 13:05:23.693854 kubelet[4018]: I1216 13:05:23.693841 4018 reconciler.go:29] "Reconciler: start to sync state" Dec 16 13:05:23.697164 kubelet[4018]: I1216 13:05:23.697147 4018 factory.go:223] Registration of the systemd container factory successfully Dec 16 13:05:23.697536 kubelet[4018]: I1216 13:05:23.697496 4018 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 16 13:05:23.698534 kubelet[4018]: E1216 13:05:23.698517 4018 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 16 13:05:23.704393 kubelet[4018]: I1216 13:05:23.704368 4018 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Dec 16 13:05:23.708298 kubelet[4018]: I1216 13:05:23.707988 4018 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Dec 16 13:05:23.708298 kubelet[4018]: I1216 13:05:23.708008 4018 status_manager.go:244] "Starting to sync pod status with apiserver" Dec 16 13:05:23.708298 kubelet[4018]: I1216 13:05:23.708030 4018 kubelet.go:2427] "Starting kubelet main sync loop" Dec 16 13:05:23.708298 kubelet[4018]: E1216 13:05:23.708072 4018 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 16 13:05:23.711334 kubelet[4018]: I1216 13:05:23.711077 4018 factory.go:223] Registration of the containerd container factory successfully Dec 16 13:05:23.763887 kubelet[4018]: I1216 13:05:23.763871 4018 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 16 13:05:23.763961 kubelet[4018]: I1216 13:05:23.763953 4018 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 16 13:05:23.764016 kubelet[4018]: I1216 13:05:23.764011 4018 state_mem.go:36] "Initialized new in-memory state store" Dec 16 13:05:23.764178 kubelet[4018]: I1216 13:05:23.764171 4018 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 16 13:05:23.764225 kubelet[4018]: I1216 13:05:23.764208 4018 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 16 13:05:23.764259 kubelet[4018]: I1216 13:05:23.764254 4018 policy_none.go:49] "None policy: Start" Dec 16 13:05:23.765356 kubelet[4018]: I1216 13:05:23.764295 4018 memory_manager.go:187] "Starting memorymanager" policy="None" Dec 16 13:05:23.765356 kubelet[4018]: I1216 13:05:23.764306 4018 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Dec 16 13:05:23.765356 kubelet[4018]: I1216 13:05:23.764443 4018 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Dec 16 13:05:23.765356 kubelet[4018]: I1216 13:05:23.764452 4018 policy_none.go:47] "Start" Dec 16 13:05:23.771594 kubelet[4018]: E1216 13:05:23.771539 4018 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Dec 16 13:05:23.772295 kubelet[4018]: I1216 13:05:23.772284 4018 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 16 13:05:23.773530 kubelet[4018]: I1216 13:05:23.773499 4018 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 16 13:05:23.773866 kubelet[4018]: I1216 13:05:23.773838 4018 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 16 13:05:23.777311 kubelet[4018]: E1216 13:05:23.777296 4018 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Dec 16 13:05:23.809090 kubelet[4018]: I1216 13:05:23.809072 4018 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4515.1.0-a-5ae2bb3665" Dec 16 13:05:23.811054 kubelet[4018]: I1216 13:05:23.809278 4018 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4515.1.0-a-5ae2bb3665" Dec 16 13:05:23.811216 kubelet[4018]: I1216 13:05:23.809395 4018 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4515.1.0-a-5ae2bb3665" Dec 16 13:05:23.824191 kubelet[4018]: I1216 13:05:23.824174 4018 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Dec 16 13:05:23.824642 kubelet[4018]: I1216 13:05:23.824316 4018 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Dec 16 13:05:23.824922 kubelet[4018]: I1216 13:05:23.824356 4018 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Dec 16 13:05:23.825148 kubelet[4018]: E1216 13:05:23.825108 4018 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4515.1.0-a-5ae2bb3665\" already exists" pod="kube-system/kube-scheduler-ci-4515.1.0-a-5ae2bb3665" Dec 16 13:05:23.877412 kubelet[4018]: I1216 13:05:23.877321 4018 kubelet_node_status.go:75] "Attempting to register node" node="ci-4515.1.0-a-5ae2bb3665" Dec 16 13:05:23.891152 kubelet[4018]: I1216 13:05:23.891102 4018 kubelet_node_status.go:124] "Node was previously registered" node="ci-4515.1.0-a-5ae2bb3665" Dec 16 13:05:23.891541 kubelet[4018]: I1216 13:05:23.891388 4018 kubelet_node_status.go:78] "Successfully registered node" node="ci-4515.1.0-a-5ae2bb3665" Dec 16 13:05:23.894942 kubelet[4018]: I1216 13:05:23.894435 4018 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/53ce3863ad7cc35986c4ebdcb5f302a6-ca-certs\") pod \"kube-apiserver-ci-4515.1.0-a-5ae2bb3665\" (UID: \"53ce3863ad7cc35986c4ebdcb5f302a6\") " pod="kube-system/kube-apiserver-ci-4515.1.0-a-5ae2bb3665" Dec 16 13:05:23.894942 kubelet[4018]: I1216 13:05:23.894463 4018 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/53ce3863ad7cc35986c4ebdcb5f302a6-k8s-certs\") pod \"kube-apiserver-ci-4515.1.0-a-5ae2bb3665\" (UID: \"53ce3863ad7cc35986c4ebdcb5f302a6\") " pod="kube-system/kube-apiserver-ci-4515.1.0-a-5ae2bb3665" Dec 16 13:05:23.894942 kubelet[4018]: I1216 13:05:23.894612 4018 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4e775dd3b3e9fd6c3410e47e3470b159-flexvolume-dir\") pod \"kube-controller-manager-ci-4515.1.0-a-5ae2bb3665\" (UID: \"4e775dd3b3e9fd6c3410e47e3470b159\") " pod="kube-system/kube-controller-manager-ci-4515.1.0-a-5ae2bb3665" Dec 16 13:05:23.894942 kubelet[4018]: I1216 13:05:23.894630 4018 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4e775dd3b3e9fd6c3410e47e3470b159-k8s-certs\") pod 
\"kube-controller-manager-ci-4515.1.0-a-5ae2bb3665\" (UID: \"4e775dd3b3e9fd6c3410e47e3470b159\") " pod="kube-system/kube-controller-manager-ci-4515.1.0-a-5ae2bb3665" Dec 16 13:05:23.894942 kubelet[4018]: I1216 13:05:23.894762 4018 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4e775dd3b3e9fd6c3410e47e3470b159-kubeconfig\") pod \"kube-controller-manager-ci-4515.1.0-a-5ae2bb3665\" (UID: \"4e775dd3b3e9fd6c3410e47e3470b159\") " pod="kube-system/kube-controller-manager-ci-4515.1.0-a-5ae2bb3665" Dec 16 13:05:23.895100 kubelet[4018]: I1216 13:05:23.894779 4018 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4e775dd3b3e9fd6c3410e47e3470b159-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4515.1.0-a-5ae2bb3665\" (UID: \"4e775dd3b3e9fd6c3410e47e3470b159\") " pod="kube-system/kube-controller-manager-ci-4515.1.0-a-5ae2bb3665" Dec 16 13:05:23.895100 kubelet[4018]: I1216 13:05:23.894898 4018 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/53ce3863ad7cc35986c4ebdcb5f302a6-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4515.1.0-a-5ae2bb3665\" (UID: \"53ce3863ad7cc35986c4ebdcb5f302a6\") " pod="kube-system/kube-apiserver-ci-4515.1.0-a-5ae2bb3665" Dec 16 13:05:23.895100 kubelet[4018]: I1216 13:05:23.894916 4018 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4e775dd3b3e9fd6c3410e47e3470b159-ca-certs\") pod \"kube-controller-manager-ci-4515.1.0-a-5ae2bb3665\" (UID: \"4e775dd3b3e9fd6c3410e47e3470b159\") " pod="kube-system/kube-controller-manager-ci-4515.1.0-a-5ae2bb3665" Dec 16 13:05:23.895100 kubelet[4018]: I1216 13:05:23.894934 4018 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/01fc872861879a2de73d72ab6e88698d-kubeconfig\") pod \"kube-scheduler-ci-4515.1.0-a-5ae2bb3665\" (UID: \"01fc872861879a2de73d72ab6e88698d\") " pod="kube-system/kube-scheduler-ci-4515.1.0-a-5ae2bb3665" Dec 16 13:05:24.663583 kubelet[4018]: I1216 13:05:24.663538 4018 apiserver.go:52] "Watching apiserver" Dec 16 13:05:24.694205 kubelet[4018]: I1216 13:05:24.694172 4018 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Dec 16 13:05:24.746634 kubelet[4018]: I1216 13:05:24.746557 4018 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4515.1.0-a-5ae2bb3665" Dec 16 13:05:24.747787 kubelet[4018]: I1216 13:05:24.746828 4018 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4515.1.0-a-5ae2bb3665" Dec 16 13:05:24.759371 kubelet[4018]: I1216 13:05:24.759333 4018 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Dec 16 13:05:24.759469 kubelet[4018]: E1216 13:05:24.759409 4018 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4515.1.0-a-5ae2bb3665\" already exists" pod="kube-system/kube-apiserver-ci-4515.1.0-a-5ae2bb3665" Dec 16 13:05:24.764848 kubelet[4018]: I1216 13:05:24.764274 4018 warnings.go:110] "Warning: metadata.name: this is 
used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Dec 16 13:05:24.764848 kubelet[4018]: E1216 13:05:24.764633 4018 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4515.1.0-a-5ae2bb3665\" already exists" pod="kube-system/kube-scheduler-ci-4515.1.0-a-5ae2bb3665" Dec 16 13:05:24.765210 kubelet[4018]: I1216 13:05:24.765146 4018 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4515.1.0-a-5ae2bb3665" podStartSLOduration=1.765133297 podStartE2EDuration="1.765133297s" podCreationTimestamp="2025-12-16 13:05:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 13:05:24.764859933 +0000 UTC m=+1.163825642" watchObservedRunningTime="2025-12-16 13:05:24.765133297 +0000 UTC m=+1.164099007" Dec 16 13:05:24.835411 kubelet[4018]: I1216 13:05:24.835371 4018 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4515.1.0-a-5ae2bb3665" podStartSLOduration=4.835310883 podStartE2EDuration="4.835310883s" podCreationTimestamp="2025-12-16 13:05:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 13:05:24.776483266 +0000 UTC m=+1.175449000" watchObservedRunningTime="2025-12-16 13:05:24.835310883 +0000 UTC m=+1.234276588" Dec 16 13:05:24.860688 kubelet[4018]: I1216 13:05:24.860646 4018 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4515.1.0-a-5ae2bb3665" podStartSLOduration=1.86063281 podStartE2EDuration="1.86063281s" podCreationTimestamp="2025-12-16 13:05:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 13:05:24.836086839 +0000 UTC m=+1.235052546" watchObservedRunningTime="2025-12-16 13:05:24.86063281 +0000 UTC m=+1.259598684" Dec 16 13:05:26.773020 kubelet[4018]: I1216 13:05:26.772973 4018 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 16 13:05:26.773808 containerd[2540]: time="2025-12-16T13:05:26.773334021Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Dec 16 13:05:26.774050 kubelet[4018]: I1216 13:05:26.773987 4018 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 16 13:05:27.472690 systemd[1]: Created slice kubepods-besteffort-poddefd329e_536c_4773_85dc_9c888e247667.slice - libcontainer container kubepods-besteffort-poddefd329e_536c_4773_85dc_9c888e247667.slice. 
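The repeated "metadata.name ... must not contain dots" warnings come from the node name ci-4515.1.0-a-5ae2bb3665 being embedded in the static pod names, which then fail the RFC 1123 DNS-label check applied to pod hostnames. Roughly the same check, sketched in Python (not the kubelet's exact code path):

    import re

    # RFC 1123 label: lowercase alphanumerics and '-', alphanumeric at both ends, at most 63 chars.
    DNS_LABEL = re.compile(r"^[a-z0-9]([-a-z0-9]{0,61}[a-z0-9])?$")

    def is_dns_label(name: str) -> bool:
        return bool(DNS_LABEL.match(name))

    print(is_dns_label("kube-scheduler-ci-4515.1.0-a-5ae2bb3665"))  # False: contains dots
    print(is_dns_label("kube-proxy-lbh7g"))                         # True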
Dec 16 13:05:27.515556 kubelet[4018]: I1216 13:05:27.515507 4018 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/defd329e-536c-4773-85dc-9c888e247667-xtables-lock\") pod \"kube-proxy-lbh7g\" (UID: \"defd329e-536c-4773-85dc-9c888e247667\") " pod="kube-system/kube-proxy-lbh7g" Dec 16 13:05:27.515556 kubelet[4018]: I1216 13:05:27.515555 4018 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vh4sm\" (UniqueName: \"kubernetes.io/projected/defd329e-536c-4773-85dc-9c888e247667-kube-api-access-vh4sm\") pod \"kube-proxy-lbh7g\" (UID: \"defd329e-536c-4773-85dc-9c888e247667\") " pod="kube-system/kube-proxy-lbh7g" Dec 16 13:05:27.515728 kubelet[4018]: I1216 13:05:27.515574 4018 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/defd329e-536c-4773-85dc-9c888e247667-kube-proxy\") pod \"kube-proxy-lbh7g\" (UID: \"defd329e-536c-4773-85dc-9c888e247667\") " pod="kube-system/kube-proxy-lbh7g" Dec 16 13:05:27.515728 kubelet[4018]: I1216 13:05:27.515589 4018 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/defd329e-536c-4773-85dc-9c888e247667-lib-modules\") pod \"kube-proxy-lbh7g\" (UID: \"defd329e-536c-4773-85dc-9c888e247667\") " pod="kube-system/kube-proxy-lbh7g" Dec 16 13:05:27.788071 containerd[2540]: time="2025-12-16T13:05:27.787950387Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-lbh7g,Uid:defd329e-536c-4773-85dc-9c888e247667,Namespace:kube-system,Attempt:0,}" Dec 16 13:05:27.839243 containerd[2540]: time="2025-12-16T13:05:27.838883836Z" level=info msg="connecting to shim 89b12a65b7a7232dcebdc03adaf2a3e7523e6b07c986e35b28f6185f5a4ccfee" address="unix:///run/containerd/s/ed3c6e1024af67eaaaac563ca1e5d47a2353db501a661714b5fc690c055d30fc" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:05:27.863561 systemd[1]: Started cri-containerd-89b12a65b7a7232dcebdc03adaf2a3e7523e6b07c986e35b28f6185f5a4ccfee.scope - libcontainer container 89b12a65b7a7232dcebdc03adaf2a3e7523e6b07c986e35b28f6185f5a4ccfee. 
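The reconciler entries above enumerate the kube-proxy pod's volumes (xtables-lock, lib-modules, the kube-proxy ConfigMap, and the projected API token). A sketch, again against a hypothetical plain-text export of this journal, that groups the "VerifyControllerAttachedVolume started" records by pod:

    import re
    from collections import defaultdict

    # Matches the kubelet reconciler records; quotes appear escaped (\") in this log.
    VOLUME_RE = re.compile(
        r'started for volume \\?"(?P<volume>[^"\\]+)\\?" '
        r'\(UniqueName: \\?"(?P<unique>[^"\\]+)\\?"\) '
        r'pod \\?"(?P<pod>[^"\\]+)\\?"'
    )

    def volumes_by_pod(log_text: str) -> dict:
        grouped = defaultdict(list)
        for match in VOLUME_RE.finditer(log_text):
            grouped[match.group("pod")].append(match.group("volume"))
        return dict(grouped)

    with open("node.log") as f:  # hypothetical export of this journal
        print(volumes_by_pod(f.read()).get("kube-proxy-lbh7g"))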
Dec 16 13:05:27.874535 kernel: kauditd_printk_skb: 32 callbacks suppressed Dec 16 13:05:27.874641 kernel: audit: type=1334 audit(1765890327.870:469): prog-id=157 op=LOAD Dec 16 13:05:27.870000 audit: BPF prog-id=157 op=LOAD Dec 16 13:05:27.873000 audit: BPF prog-id=158 op=LOAD Dec 16 13:05:27.873000 audit[4086]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4075 pid=4086 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:27.882000 kernel: audit: type=1334 audit(1765890327.873:470): prog-id=158 op=LOAD Dec 16 13:05:27.882045 kernel: audit: type=1300 audit(1765890327.873:470): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4075 pid=4086 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:27.873000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3839623132613635623761373233326463656264633033616461663261 Dec 16 13:05:27.887563 kernel: audit: type=1327 audit(1765890327.873:470): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3839623132613635623761373233326463656264633033616461663261 Dec 16 13:05:27.873000 audit: BPF prog-id=158 op=UNLOAD Dec 16 13:05:27.889782 kernel: audit: type=1334 audit(1765890327.873:471): prog-id=158 op=UNLOAD Dec 16 13:05:27.873000 audit[4086]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4075 pid=4086 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:27.894491 kernel: audit: type=1300 audit(1765890327.873:471): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4075 pid=4086 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:27.873000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3839623132613635623761373233326463656264633033616461663261 Dec 16 13:05:27.900230 kernel: audit: type=1327 audit(1765890327.873:471): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3839623132613635623761373233326463656264633033616461663261 Dec 16 13:05:27.873000 audit: BPF prog-id=159 op=LOAD Dec 16 13:05:27.905372 kernel: audit: type=1334 audit(1765890327.873:472): prog-id=159 op=LOAD Dec 16 13:05:27.873000 audit[4086]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4075 pid=4086 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:27.873000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3839623132613635623761373233326463656264633033616461663261 Dec 16 13:05:27.915480 kernel: audit: type=1300 audit(1765890327.873:472): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4075 pid=4086 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:27.915546 kernel: audit: type=1327 audit(1765890327.873:472): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3839623132613635623761373233326463656264633033616461663261 Dec 16 13:05:27.873000 audit: BPF prog-id=160 op=LOAD Dec 16 13:05:27.873000 audit[4086]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=4075 pid=4086 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:27.873000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3839623132613635623761373233326463656264633033616461663261 Dec 16 13:05:27.874000 audit: BPF prog-id=160 op=UNLOAD Dec 16 13:05:27.874000 audit[4086]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4075 pid=4086 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:27.874000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3839623132613635623761373233326463656264633033616461663261 Dec 16 13:05:27.874000 audit: BPF prog-id=159 op=UNLOAD Dec 16 13:05:27.874000 audit[4086]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4075 pid=4086 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:27.874000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3839623132613635623761373233326463656264633033616461663261 Dec 16 13:05:27.874000 audit: BPF prog-id=161 op=LOAD Dec 16 13:05:27.874000 audit[4086]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=4075 pid=4086 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:27.874000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3839623132613635623761373233326463656264633033616461663261 Dec 16 13:05:27.921871 containerd[2540]: time="2025-12-16T13:05:27.921831608Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-lbh7g,Uid:defd329e-536c-4773-85dc-9c888e247667,Namespace:kube-system,Attempt:0,} returns sandbox id \"89b12a65b7a7232dcebdc03adaf2a3e7523e6b07c986e35b28f6185f5a4ccfee\"" Dec 16 13:05:27.936273 containerd[2540]: time="2025-12-16T13:05:27.936245549Z" level=info msg="CreateContainer within sandbox \"89b12a65b7a7232dcebdc03adaf2a3e7523e6b07c986e35b28f6185f5a4ccfee\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 16 13:05:27.964659 containerd[2540]: time="2025-12-16T13:05:27.964629284Z" level=info msg="Container 2a2869a3e8b7094bfb9dfeac362d47ec1859b0ca13b4a17ed13e2ca0bbbb88ce: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:05:27.988289 containerd[2540]: time="2025-12-16T13:05:27.988261459Z" level=info msg="CreateContainer within sandbox \"89b12a65b7a7232dcebdc03adaf2a3e7523e6b07c986e35b28f6185f5a4ccfee\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"2a2869a3e8b7094bfb9dfeac362d47ec1859b0ca13b4a17ed13e2ca0bbbb88ce\"" Dec 16 13:05:27.989874 containerd[2540]: time="2025-12-16T13:05:27.989701235Z" level=info msg="StartContainer for \"2a2869a3e8b7094bfb9dfeac362d47ec1859b0ca13b4a17ed13e2ca0bbbb88ce\"" Dec 16 13:05:27.991362 containerd[2540]: time="2025-12-16T13:05:27.991311518Z" level=info msg="connecting to shim 2a2869a3e8b7094bfb9dfeac362d47ec1859b0ca13b4a17ed13e2ca0bbbb88ce" address="unix:///run/containerd/s/ed3c6e1024af67eaaaac563ca1e5d47a2353db501a661714b5fc690c055d30fc" protocol=ttrpc version=3 Dec 16 13:05:28.016714 systemd[1]: Started cri-containerd-2a2869a3e8b7094bfb9dfeac362d47ec1859b0ca13b4a17ed13e2ca0bbbb88ce.scope - libcontainer container 2a2869a3e8b7094bfb9dfeac362d47ec1859b0ca13b4a17ed13e2ca0bbbb88ce. Dec 16 13:05:28.067140 systemd[1]: Created slice kubepods-besteffort-pod39933995_7e76_42bf_87ea_02075a0855b6.slice - libcontainer container kubepods-besteffort-pod39933995_7e76_42bf_87ea_02075a0855b6.slice. 
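The audit PROCTITLE fields in the records above are hex-encoded command lines with NUL bytes between arguments, and the kernel truncates the audited value, which is why the container IDs come out shortened; the runc records here decode to the shim invocation for the kube-proxy sandbox 89b12a65b7a7232dcebdc03adaf2a... returned by RunPodSandbox above. A minimal decoding sketch, assuming the hex value is copied verbatim from the log; decode_proctitle is an illustrative helper name, not an existing tool:

def decode_proctitle(hex_argv: str) -> str:
    # An audit PROCTITLE value is the process's argv, hex-encoded, with
    # NUL separators between arguments; replace the NULs to read it.
    return bytes.fromhex(hex_argv).replace(b"\x00", b" ").decode("utf-8", "replace")

# Leading portion of the PROCTITLE value from the runc records above:
print(decode_proctitle(
    "72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67"
))
# -> runc --root /run/containerd/runc/k8s.io --log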
Dec 16 13:05:28.068000 audit: BPF prog-id=162 op=LOAD Dec 16 13:05:28.068000 audit[4114]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000178488 a2=98 a3=0 items=0 ppid=4075 pid=4114 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:28.068000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3261323836396133653862373039346266623964666561633336326434 Dec 16 13:05:28.069000 audit: BPF prog-id=163 op=LOAD Dec 16 13:05:28.069000 audit[4114]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000178218 a2=98 a3=0 items=0 ppid=4075 pid=4114 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:28.069000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3261323836396133653862373039346266623964666561633336326434 Dec 16 13:05:28.069000 audit: BPF prog-id=163 op=UNLOAD Dec 16 13:05:28.069000 audit[4114]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4075 pid=4114 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:28.069000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3261323836396133653862373039346266623964666561633336326434 Dec 16 13:05:28.069000 audit: BPF prog-id=162 op=UNLOAD Dec 16 13:05:28.069000 audit[4114]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4075 pid=4114 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:28.069000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3261323836396133653862373039346266623964666561633336326434 Dec 16 13:05:28.069000 audit: BPF prog-id=164 op=LOAD Dec 16 13:05:28.069000 audit[4114]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001786e8 a2=98 a3=0 items=0 ppid=4075 pid=4114 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:28.069000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3261323836396133653862373039346266623964666561633336326434 Dec 16 13:05:28.107910 containerd[2540]: time="2025-12-16T13:05:28.107730214Z" level=info msg="StartContainer for 
\"2a2869a3e8b7094bfb9dfeac362d47ec1859b0ca13b4a17ed13e2ca0bbbb88ce\" returns successfully" Dec 16 13:05:28.119217 kubelet[4018]: I1216 13:05:28.119192 4018 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/39933995-7e76-42bf-87ea-02075a0855b6-var-lib-calico\") pod \"tigera-operator-65cdcdfd6d-qq9fd\" (UID: \"39933995-7e76-42bf-87ea-02075a0855b6\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-qq9fd" Dec 16 13:05:28.119622 kubelet[4018]: I1216 13:05:28.119589 4018 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b79xr\" (UniqueName: \"kubernetes.io/projected/39933995-7e76-42bf-87ea-02075a0855b6-kube-api-access-b79xr\") pod \"tigera-operator-65cdcdfd6d-qq9fd\" (UID: \"39933995-7e76-42bf-87ea-02075a0855b6\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-qq9fd" Dec 16 13:05:28.325000 audit[4182]: NETFILTER_CFG table=mangle:57 family=2 entries=1 op=nft_register_chain pid=4182 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:05:28.325000 audit[4182]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fffca40d030 a2=0 a3=7fffca40d01c items=0 ppid=4127 pid=4182 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:28.325000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Dec 16 13:05:28.326000 audit[4183]: NETFILTER_CFG table=mangle:58 family=10 entries=1 op=nft_register_chain pid=4183 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:05:28.326000 audit[4183]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffde488b700 a2=0 a3=7ffde488b6ec items=0 ppid=4127 pid=4183 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:28.326000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Dec 16 13:05:28.328000 audit[4186]: NETFILTER_CFG table=nat:59 family=2 entries=1 op=nft_register_chain pid=4186 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:05:28.328000 audit[4186]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe0bc6d9a0 a2=0 a3=7ffe0bc6d98c items=0 ppid=4127 pid=4186 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:28.328000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Dec 16 13:05:28.329000 audit[4188]: NETFILTER_CFG table=filter:60 family=2 entries=1 op=nft_register_chain pid=4188 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:05:28.329000 audit[4188]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffec8a31690 a2=0 a3=7ffec8a3167c items=0 ppid=4127 pid=4188 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:28.329000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Dec 16 13:05:28.332000 audit[4189]: NETFILTER_CFG table=nat:61 family=10 entries=1 op=nft_register_chain pid=4189 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:05:28.332000 audit[4189]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff811c8d70 a2=0 a3=7fff811c8d5c items=0 ppid=4127 pid=4189 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:28.332000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Dec 16 13:05:28.335000 audit[4191]: NETFILTER_CFG table=filter:62 family=10 entries=1 op=nft_register_chain pid=4191 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:05:28.335000 audit[4191]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe2e3c6980 a2=0 a3=7ffe2e3c696c items=0 ppid=4127 pid=4191 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:28.335000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Dec 16 13:05:28.380591 containerd[2540]: time="2025-12-16T13:05:28.380552491Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-qq9fd,Uid:39933995-7e76-42bf-87ea-02075a0855b6,Namespace:tigera-operator,Attempt:0,}" Dec 16 13:05:28.432000 audit[4196]: NETFILTER_CFG table=filter:63 family=2 entries=1 op=nft_register_chain pid=4196 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:05:28.432000 audit[4196]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7fff68d44640 a2=0 a3=7fff68d4462c items=0 ppid=4127 pid=4196 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:28.432000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Dec 16 13:05:28.436000 audit[4201]: NETFILTER_CFG table=filter:64 family=2 entries=1 op=nft_register_rule pid=4201 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:05:28.436000 audit[4201]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffd786d9210 a2=0 a3=7ffd786d91fc items=0 ppid=4127 pid=4201 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:28.436000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669636520706F7274616C73002D Dec 16 13:05:28.439049 containerd[2540]: time="2025-12-16T13:05:28.439009201Z" level=info msg="connecting to shim 2aea087d4e2138814b7856ae3e3655a4f37d6d7e494b5231f88f723d94c7b788" address="unix:///run/containerd/s/2a87f714ee43f3e9916f692c9a0975f5d7537f8589c884d4c6162e4ced62c0c8" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:05:28.441000 audit[4212]: NETFILTER_CFG table=filter:65 
family=2 entries=1 op=nft_register_rule pid=4212 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:05:28.441000 audit[4212]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffeb2462e80 a2=0 a3=7ffeb2462e6c items=0 ppid=4127 pid=4212 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:28.441000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669636520706F7274616C73 Dec 16 13:05:28.442000 audit[4216]: NETFILTER_CFG table=filter:66 family=2 entries=1 op=nft_register_chain pid=4216 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:05:28.442000 audit[4216]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd2b09c250 a2=0 a3=7ffd2b09c23c items=0 ppid=4127 pid=4216 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:28.442000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Dec 16 13:05:28.445000 audit[4219]: NETFILTER_CFG table=filter:67 family=2 entries=1 op=nft_register_rule pid=4219 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:05:28.445000 audit[4219]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffefd214300 a2=0 a3=7ffefd2142ec items=0 ppid=4127 pid=4219 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:28.445000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Dec 16 13:05:28.446000 audit[4222]: NETFILTER_CFG table=filter:68 family=2 entries=1 op=nft_register_chain pid=4222 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:05:28.446000 audit[4222]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff8261d230 a2=0 a3=7fff8261d21c items=0 ppid=4127 pid=4222 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:28.446000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D5345525649434553002D740066696C746572 Dec 16 13:05:28.449000 audit[4225]: NETFILTER_CFG table=filter:69 family=2 entries=1 op=nft_register_rule pid=4225 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:05:28.449000 audit[4225]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffe89cf6640 a2=0 a3=7ffe89cf662c items=0 ppid=4127 pid=4225 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:28.449000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 16 13:05:28.454000 audit[4233]: NETFILTER_CFG table=filter:70 family=2 entries=1 op=nft_register_rule pid=4233 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:05:28.454000 audit[4233]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffe9d3addb0 a2=0 a3=7ffe9d3add9c items=0 ppid=4127 pid=4233 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:28.454000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 16 13:05:28.455000 audit[4237]: NETFILTER_CFG table=filter:71 family=2 entries=1 op=nft_register_chain pid=4237 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:05:28.455000 audit[4237]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd629a5420 a2=0 a3=7ffd629a540c items=0 ppid=4127 pid=4237 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:28.455000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D464F5257415244002D740066696C746572 Dec 16 13:05:28.458000 audit[4241]: NETFILTER_CFG table=filter:72 family=2 entries=1 op=nft_register_rule pid=4241 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:05:28.458000 audit[4241]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fff5a7b1a20 a2=0 a3=7fff5a7b1a0c items=0 ppid=4127 pid=4241 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:28.458000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Dec 16 13:05:28.460000 audit[4242]: NETFILTER_CFG table=filter:73 family=2 entries=1 op=nft_register_chain pid=4242 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:05:28.460000 audit[4242]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd783f7da0 a2=0 a3=7ffd783f7d8c items=0 ppid=4127 pid=4242 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:28.460000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Dec 16 13:05:28.462000 audit[4244]: NETFILTER_CFG table=filter:74 family=2 entries=1 op=nft_register_rule pid=4244 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:05:28.462000 audit[4244]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe7b458ec0 a2=0 a3=7ffe7b458eac items=0 ppid=4127 pid=4244 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:28.462000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D50524F5859 Dec 16 13:05:28.467000 audit[4247]: NETFILTER_CFG table=filter:75 family=2 entries=1 op=nft_register_rule pid=4247 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:05:28.467000 audit[4247]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffd49ac7f90 a2=0 a3=7ffd49ac7f7c items=0 ppid=4127 pid=4247 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:28.467000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D50524F58 Dec 16 13:05:28.470592 systemd[1]: Started cri-containerd-2aea087d4e2138814b7856ae3e3655a4f37d6d7e494b5231f88f723d94c7b788.scope - libcontainer container 2aea087d4e2138814b7856ae3e3655a4f37d6d7e494b5231f88f723d94c7b788. Dec 16 13:05:28.472000 audit[4251]: NETFILTER_CFG table=filter:76 family=2 entries=1 op=nft_register_rule pid=4251 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:05:28.472000 audit[4251]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffd3c06d960 a2=0 a3=7ffd3c06d94c items=0 ppid=4127 pid=4251 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:28.472000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D50524F Dec 16 13:05:28.473000 audit[4253]: NETFILTER_CFG table=nat:77 family=2 entries=1 op=nft_register_chain pid=4253 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:05:28.473000 audit[4253]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fff99d3dd10 a2=0 a3=7fff99d3dcfc items=0 ppid=4127 pid=4253 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:28.473000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D5345525649434553002D74006E6174 Dec 16 13:05:28.476000 audit[4260]: NETFILTER_CFG table=nat:78 family=2 entries=1 op=nft_register_rule pid=4260 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:05:28.476000 audit[4260]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7fffa379e300 a2=0 a3=7fffa379e2ec items=0 ppid=4127 pid=4260 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:28.476000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 16 13:05:28.480000 audit[4263]: NETFILTER_CFG table=nat:79 family=2 entries=1 op=nft_register_rule pid=4263 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:05:28.480000 audit[4263]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffc15fa6ed0 a2=0 a3=7ffc15fa6ebc items=0 ppid=4127 pid=4263 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:28.480000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 16 13:05:28.481000 audit[4264]: NETFILTER_CFG table=nat:80 family=2 entries=1 op=nft_register_chain pid=4264 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:05:28.481000 audit[4264]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff9aa19f10 a2=0 a3=7fff9aa19efc items=0 ppid=4127 pid=4264 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:28.481000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Dec 16 13:05:28.483000 audit: BPF prog-id=165 op=LOAD Dec 16 13:05:28.484000 audit: BPF prog-id=166 op=LOAD Dec 16 13:05:28.484000 audit[4221]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001fe238 a2=98 a3=0 items=0 ppid=4203 pid=4221 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:28.484000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3261656130383764346532313338383134623738353661653365333635 Dec 16 13:05:28.484000 audit: BPF prog-id=166 op=UNLOAD Dec 16 13:05:28.484000 audit[4221]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4203 pid=4221 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:28.484000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3261656130383764346532313338383134623738353661653365333635 Dec 16 13:05:28.484000 audit[4266]: NETFILTER_CFG table=nat:81 family=2 entries=1 op=nft_register_rule pid=4266 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:05:28.484000 audit[4266]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7fff77b76f20 a2=0 a3=7fff77b76f0c items=0 ppid=4127 pid=4266 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 
13:05:28.484000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Dec 16 13:05:28.484000 audit: BPF prog-id=167 op=LOAD Dec 16 13:05:28.484000 audit[4221]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001fe488 a2=98 a3=0 items=0 ppid=4203 pid=4221 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:28.484000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3261656130383764346532313338383134623738353661653365333635 Dec 16 13:05:28.484000 audit: BPF prog-id=168 op=LOAD Dec 16 13:05:28.484000 audit[4221]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001fe218 a2=98 a3=0 items=0 ppid=4203 pid=4221 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:28.484000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3261656130383764346532313338383134623738353661653365333635 Dec 16 13:05:28.484000 audit: BPF prog-id=168 op=UNLOAD Dec 16 13:05:28.484000 audit[4221]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4203 pid=4221 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:28.484000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3261656130383764346532313338383134623738353661653365333635 Dec 16 13:05:28.484000 audit: BPF prog-id=167 op=UNLOAD Dec 16 13:05:28.484000 audit[4221]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4203 pid=4221 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:28.484000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3261656130383764346532313338383134623738353661653365333635 Dec 16 13:05:28.485000 audit: BPF prog-id=169 op=LOAD Dec 16 13:05:28.485000 audit[4221]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001fe6e8 a2=98 a3=0 items=0 ppid=4203 pid=4221 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:28.485000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3261656130383764346532313338383134623738353661653365333635 Dec 16 13:05:28.523250 containerd[2540]: time="2025-12-16T13:05:28.523212867Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-qq9fd,Uid:39933995-7e76-42bf-87ea-02075a0855b6,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"2aea087d4e2138814b7856ae3e3655a4f37d6d7e494b5231f88f723d94c7b788\"" Dec 16 13:05:28.525178 containerd[2540]: time="2025-12-16T13:05:28.525156413Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Dec 16 13:05:28.585000 audit[4272]: NETFILTER_CFG table=filter:82 family=2 entries=8 op=nft_register_rule pid=4272 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:05:28.585000 audit[4272]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffc2e57b840 a2=0 a3=7ffc2e57b82c items=0 ppid=4127 pid=4272 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:28.585000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:05:28.615000 audit[4272]: NETFILTER_CFG table=nat:83 family=2 entries=14 op=nft_register_chain pid=4272 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:05:28.615000 audit[4272]: SYSCALL arch=c000003e syscall=46 success=yes exit=5508 a0=3 a1=7ffc2e57b840 a2=0 a3=7ffc2e57b82c items=0 ppid=4127 pid=4272 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:28.615000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:05:28.617000 audit[4284]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=4284 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:05:28.617000 audit[4284]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffe9a391920 a2=0 a3=7ffe9a39190c items=0 ppid=4127 pid=4284 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:28.617000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Dec 16 13:05:28.619000 audit[4286]: NETFILTER_CFG table=filter:85 family=10 entries=2 op=nft_register_chain pid=4286 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:05:28.619000 audit[4286]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffc8b6c0650 a2=0 a3=7ffc8b6c063c items=0 ppid=4127 pid=4286 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:28.619000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669636520706F7274616C73 Dec 16 13:05:28.630000 audit[4289]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_rule pid=4289 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:05:28.630000 audit[4289]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffd110a6550 a2=0 a3=7ffd110a653c items=0 ppid=4127 pid=4289 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:28.630000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669636520706F7274616C Dec 16 13:05:28.631000 audit[4290]: NETFILTER_CFG table=filter:87 family=10 entries=1 op=nft_register_chain pid=4290 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:05:28.631000 audit[4290]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe6a817f50 a2=0 a3=7ffe6a817f3c items=0 ppid=4127 pid=4290 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:28.631000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Dec 16 13:05:28.633000 audit[4292]: NETFILTER_CFG table=filter:88 family=10 entries=1 op=nft_register_rule pid=4292 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:05:28.633000 audit[4292]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffc62752a80 a2=0 a3=7ffc62752a6c items=0 ppid=4127 pid=4292 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:28.633000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Dec 16 13:05:28.634000 audit[4293]: NETFILTER_CFG table=filter:89 family=10 entries=1 op=nft_register_chain pid=4293 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:05:28.634000 audit[4293]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffff3240460 a2=0 a3=7ffff324044c items=0 ppid=4127 pid=4293 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:28.634000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D5345525649434553002D740066696C746572 Dec 16 13:05:28.636000 audit[4295]: NETFILTER_CFG table=filter:90 family=10 entries=1 op=nft_register_rule pid=4295 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:05:28.636000 audit[4295]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffdef95c050 a2=0 a3=7ffdef95c03c items=0 ppid=4127 pid=4295 auid=4294967295 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:28.636000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 16 13:05:28.640000 audit[4298]: NETFILTER_CFG table=filter:91 family=10 entries=2 op=nft_register_chain pid=4298 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:05:28.640000 audit[4298]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7ffcd4833670 a2=0 a3=7ffcd483365c items=0 ppid=4127 pid=4298 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:28.640000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 16 13:05:28.641000 audit[4299]: NETFILTER_CFG table=filter:92 family=10 entries=1 op=nft_register_chain pid=4299 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:05:28.641000 audit[4299]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff6ec68d60 a2=0 a3=7fff6ec68d4c items=0 ppid=4127 pid=4299 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:28.641000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D464F5257415244002D740066696C746572 Dec 16 13:05:28.643000 audit[4301]: NETFILTER_CFG table=filter:93 family=10 entries=1 op=nft_register_rule pid=4301 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:05:28.643000 audit[4301]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffcea521b50 a2=0 a3=7ffcea521b3c items=0 ppid=4127 pid=4301 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:28.643000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Dec 16 13:05:28.644000 audit[4302]: NETFILTER_CFG table=filter:94 family=10 entries=1 op=nft_register_chain pid=4302 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:05:28.644000 audit[4302]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd163a0530 a2=0 a3=7ffd163a051c items=0 ppid=4127 pid=4302 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:28.644000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Dec 16 13:05:28.646000 audit[4304]: NETFILTER_CFG table=filter:95 family=10 entries=1 op=nft_register_rule pid=4304 subj=system_u:system_r:kernel_t:s0 
comm="ip6tables" Dec 16 13:05:28.646000 audit[4304]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffee2a53130 a2=0 a3=7ffee2a5311c items=0 ppid=4127 pid=4304 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:28.646000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D50524F58 Dec 16 13:05:28.649000 audit[4307]: NETFILTER_CFG table=filter:96 family=10 entries=1 op=nft_register_rule pid=4307 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:05:28.649000 audit[4307]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffd1dd80360 a2=0 a3=7ffd1dd8034c items=0 ppid=4127 pid=4307 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:28.649000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D50524F Dec 16 13:05:28.652000 audit[4310]: NETFILTER_CFG table=filter:97 family=10 entries=1 op=nft_register_rule pid=4310 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:05:28.652000 audit[4310]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffde80920a0 a2=0 a3=7ffde809208c items=0 ppid=4127 pid=4310 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:28.652000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D5052 Dec 16 13:05:28.653000 audit[4311]: NETFILTER_CFG table=nat:98 family=10 entries=1 op=nft_register_chain pid=4311 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:05:28.653000 audit[4311]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffe70cf2960 a2=0 a3=7ffe70cf294c items=0 ppid=4127 pid=4311 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:28.653000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D5345525649434553002D74006E6174 Dec 16 13:05:28.656000 audit[4313]: NETFILTER_CFG table=nat:99 family=10 entries=1 op=nft_register_rule pid=4313 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:05:28.656000 audit[4313]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffd2b59ac70 a2=0 a3=7ffd2b59ac5c items=0 ppid=4127 pid=4313 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:28.656000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 16 13:05:28.659000 audit[4316]: NETFILTER_CFG table=nat:100 family=10 entries=1 op=nft_register_rule pid=4316 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:05:28.659000 audit[4316]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffd8f95c6c0 a2=0 a3=7ffd8f95c6ac items=0 ppid=4127 pid=4316 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:28.659000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 16 13:05:28.660000 audit[4317]: NETFILTER_CFG table=nat:101 family=10 entries=1 op=nft_register_chain pid=4317 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:05:28.660000 audit[4317]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc1f194980 a2=0 a3=7ffc1f19496c items=0 ppid=4127 pid=4317 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:28.660000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Dec 16 13:05:28.662000 audit[4319]: NETFILTER_CFG table=nat:102 family=10 entries=2 op=nft_register_chain pid=4319 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:05:28.662000 audit[4319]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffddabd2b00 a2=0 a3=7ffddabd2aec items=0 ppid=4127 pid=4319 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:28.662000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Dec 16 13:05:28.663000 audit[4320]: NETFILTER_CFG table=filter:103 family=10 entries=1 op=nft_register_chain pid=4320 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:05:28.663000 audit[4320]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffd66a1610 a2=0 a3=7fffd66a15fc items=0 ppid=4127 pid=4320 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:28.663000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D4649524557414C4C002D740066696C746572 Dec 16 13:05:28.665000 audit[4322]: NETFILTER_CFG table=filter:104 family=10 entries=1 op=nft_register_rule pid=4322 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:05:28.665000 audit[4322]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffc18834190 a2=0 a3=7ffc1883417c items=0 ppid=4127 pid=4322 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:28.665000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 16 13:05:28.668000 audit[4325]: NETFILTER_CFG table=filter:105 family=10 entries=1 op=nft_register_rule pid=4325 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:05:28.668000 audit[4325]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7fff3904b070 a2=0 a3=7fff3904b05c items=0 ppid=4127 pid=4325 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:28.668000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 16 13:05:28.671000 audit[4327]: NETFILTER_CFG table=filter:106 family=10 entries=3 op=nft_register_rule pid=4327 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Dec 16 13:05:28.671000 audit[4327]: SYSCALL arch=c000003e syscall=46 success=yes exit=2088 a0=3 a1=7fffb44ebc70 a2=0 a3=7fffb44ebc5c items=0 ppid=4127 pid=4327 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:28.671000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:05:28.671000 audit[4327]: NETFILTER_CFG table=nat:107 family=10 entries=7 op=nft_register_chain pid=4327 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Dec 16 13:05:28.671000 audit[4327]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7fffb44ebc70 a2=0 a3=7fffb44ebc5c items=0 ppid=4127 pid=4327 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:28.671000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:05:29.781452 kubelet[4018]: I1216 13:05:29.781234 4018 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-lbh7g" podStartSLOduration=2.781211371 podStartE2EDuration="2.781211371s" podCreationTimestamp="2025-12-16 13:05:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 13:05:28.772005933 +0000 UTC m=+5.170971626" watchObservedRunningTime="2025-12-16 13:05:29.781211371 +0000 UTC m=+6.180177061" Dec 16 13:05:29.999723 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1543384463.mount: Deactivated successfully. 
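The long run of NETFILTER_CFG/SYSCALL pairs above is kube-proxy (the children of ppid 4127) creating its standard KUBE-* chains through the nftables backend: arch=c000003e is x86_64, so syscall=46 is sendmsg (the netlink messages sent by xtables-nft-multi), while the runc records use 321 (bpf) and 3 (close). The PROCTITLE values decode the same way as in the earlier sketch; below, the first canary-chain record (comm="iptables", pid 4182) is decoded. The helper and table names are illustrative, and the syscall table only covers the numbers that appear in these records:

# x86_64 syscall numbers for the values seen in the audit records above.
SYSCALL_NAMES_X86_64 = {3: "close", 46: "sendmsg", 321: "bpf"}

def decode_proctitle(hex_argv: str) -> str:
    # Same decoding as the earlier sketch, repeated so this block runs alone.
    return bytes.fromhex(hex_argv).replace(b"\x00", b" ").decode("utf-8", "replace")

# PROCTITLE from the first NETFILTER_CFG record (comm="iptables", pid 4182):
print(decode_proctitle(
    "69707461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65"
))
# -> iptables -w 5 -N KUBE-PROXY-CANARY -t mangle
print(SYSCALL_NAMES_X86_64[46])   # -> sendmsg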
Dec 16 13:05:30.736939 containerd[2540]: time="2025-12-16T13:05:30.736878986Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:05:30.741633 containerd[2540]: time="2025-12-16T13:05:30.741405561Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=23558205" Dec 16 13:05:30.748446 containerd[2540]: time="2025-12-16T13:05:30.748409885Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:05:30.757934 containerd[2540]: time="2025-12-16T13:05:30.757864686Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:05:30.758309 containerd[2540]: time="2025-12-16T13:05:30.758277315Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 2.233088245s" Dec 16 13:05:30.758772 containerd[2540]: time="2025-12-16T13:05:30.758309923Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Dec 16 13:05:30.767188 containerd[2540]: time="2025-12-16T13:05:30.767155047Z" level=info msg="CreateContainer within sandbox \"2aea087d4e2138814b7856ae3e3655a4f37d6d7e494b5231f88f723d94c7b788\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Dec 16 13:05:30.810544 containerd[2540]: time="2025-12-16T13:05:30.810514794Z" level=info msg="Container dea7de33a2eae6502b09c05ceb0407984ac072cef620e51f2e5045990fa28628: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:05:30.826532 containerd[2540]: time="2025-12-16T13:05:30.826505391Z" level=info msg="CreateContainer within sandbox \"2aea087d4e2138814b7856ae3e3655a4f37d6d7e494b5231f88f723d94c7b788\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"dea7de33a2eae6502b09c05ceb0407984ac072cef620e51f2e5045990fa28628\"" Dec 16 13:05:30.827063 containerd[2540]: time="2025-12-16T13:05:30.826911342Z" level=info msg="StartContainer for \"dea7de33a2eae6502b09c05ceb0407984ac072cef620e51f2e5045990fa28628\"" Dec 16 13:05:30.828504 containerd[2540]: time="2025-12-16T13:05:30.828475898Z" level=info msg="connecting to shim dea7de33a2eae6502b09c05ceb0407984ac072cef620e51f2e5045990fa28628" address="unix:///run/containerd/s/2a87f714ee43f3e9916f692c9a0975f5d7537f8589c884d4c6162e4ced62c0c8" protocol=ttrpc version=3 Dec 16 13:05:30.851534 systemd[1]: Started cri-containerd-dea7de33a2eae6502b09c05ceb0407984ac072cef620e51f2e5045990fa28628.scope - libcontainer container dea7de33a2eae6502b09c05ceb0407984ac072cef620e51f2e5045990fa28628. 
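Back-of-the-envelope check on the operator image pull above, assuming "bytes read=23558205" reflects the data actually transferred for this pull (blobs already present locally would not be counted):

# Pull throughput from the two containerd lines above.
bytes_read = 23_558_205        # "stop pulling image ... bytes read=23558205"
duration_s = 2.233088245       # "Pulled image ... in 2.233088245s"
print(f"{bytes_read / duration_s / 2**20:.1f} MiB/s")   # roughly 10 MiB/s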
Dec 16 13:05:30.860000 audit: BPF prog-id=170 op=LOAD Dec 16 13:05:30.860000 audit: BPF prog-id=171 op=LOAD Dec 16 13:05:30.860000 audit[4336]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000138238 a2=98 a3=0 items=0 ppid=4203 pid=4336 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:30.860000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6465613764653333613265616536353032623039633035636562303430 Dec 16 13:05:30.860000 audit: BPF prog-id=171 op=UNLOAD Dec 16 13:05:30.860000 audit[4336]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4203 pid=4336 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:30.860000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6465613764653333613265616536353032623039633035636562303430 Dec 16 13:05:30.860000 audit: BPF prog-id=172 op=LOAD Dec 16 13:05:30.860000 audit[4336]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000138488 a2=98 a3=0 items=0 ppid=4203 pid=4336 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:30.860000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6465613764653333613265616536353032623039633035636562303430 Dec 16 13:05:30.860000 audit: BPF prog-id=173 op=LOAD Dec 16 13:05:30.860000 audit[4336]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000138218 a2=98 a3=0 items=0 ppid=4203 pid=4336 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:30.860000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6465613764653333613265616536353032623039633035636562303430 Dec 16 13:05:30.860000 audit: BPF prog-id=173 op=UNLOAD Dec 16 13:05:30.860000 audit[4336]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4203 pid=4336 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:30.860000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6465613764653333613265616536353032623039633035636562303430 Dec 16 13:05:30.860000 audit: BPF prog-id=172 op=UNLOAD Dec 16 13:05:30.860000 audit[4336]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4203 pid=4336 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:30.860000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6465613764653333613265616536353032623039633035636562303430 Dec 16 13:05:30.860000 audit: BPF prog-id=174 op=LOAD Dec 16 13:05:30.860000 audit[4336]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001386e8 a2=98 a3=0 items=0 ppid=4203 pid=4336 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:30.860000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6465613764653333613265616536353032623039633035636562303430 Dec 16 13:05:30.894010 containerd[2540]: time="2025-12-16T13:05:30.893983761Z" level=info msg="StartContainer for \"dea7de33a2eae6502b09c05ceb0407984ac072cef620e51f2e5045990fa28628\" returns successfully" Dec 16 13:05:32.159183 kubelet[4018]: I1216 13:05:32.159111 4018 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-65cdcdfd6d-qq9fd" podStartSLOduration=2.924415556 podStartE2EDuration="5.159084899s" podCreationTimestamp="2025-12-16 13:05:27 +0000 UTC" firstStartedPulling="2025-12-16 13:05:28.5248101 +0000 UTC m=+4.923775798" lastFinishedPulling="2025-12-16 13:05:30.759479442 +0000 UTC m=+7.158445141" observedRunningTime="2025-12-16 13:05:31.775111647 +0000 UTC m=+8.174077368" watchObservedRunningTime="2025-12-16 13:05:32.159084899 +0000 UTC m=+8.558050611" Dec 16 13:05:34.048221 systemd[1]: cri-containerd-dea7de33a2eae6502b09c05ceb0407984ac072cef620e51f2e5045990fa28628.scope: Deactivated successfully. Dec 16 13:05:34.059797 kernel: kauditd_printk_skb: 224 callbacks suppressed Dec 16 13:05:34.059894 kernel: audit: type=1334 audit(1765890334.056:549): prog-id=170 op=UNLOAD Dec 16 13:05:34.056000 audit: BPF prog-id=170 op=UNLOAD Dec 16 13:05:34.060584 containerd[2540]: time="2025-12-16T13:05:34.060541982Z" level=info msg="received container exit event container_id:\"dea7de33a2eae6502b09c05ceb0407984ac072cef620e51f2e5045990fa28628\" id:\"dea7de33a2eae6502b09c05ceb0407984ac072cef620e51f2e5045990fa28628\" pid:4349 exit_status:1 exited_at:{seconds:1765890334 nanos:58084694}" Dec 16 13:05:34.056000 audit: BPF prog-id=174 op=UNLOAD Dec 16 13:05:34.064653 kernel: audit: type=1334 audit(1765890334.056:550): prog-id=174 op=UNLOAD Dec 16 13:05:34.093415 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dea7de33a2eae6502b09c05ceb0407984ac072cef620e51f2e5045990fa28628-rootfs.mount: Deactivated successfully. 
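Note on the pod_startup_latency_tracker entry above: the reported durations can be reproduced from the timestamps in that same line. podStartE2EDuration is observedRunningTime minus podCreationTimestamp, and podStartSLOduration appears to be that value minus the image-pull window (lastFinishedPulling minus firstStartedPulling). A minimal arithmetic sketch (Python; timestamps expressed as seconds past 13:05:00 UTC, values copied from the log line, so this is illustrative only):

    # Seconds past 13:05:00 UTC, taken from the kubelet log line above.
    created    = 27.0            # podCreationTimestamp  2025-12-16 13:05:27
    pull_start = 28.5248101      # firstStartedPulling
    pull_end   = 30.759479442    # lastFinishedPulling
    running    = 32.159084899    # observedRunningTime

    e2e = running - created                 # 5.159084899 s == podStartE2EDuration
    slo = e2e - (pull_end - pull_start)     # ~2.924415557 s, matching podStartSLOduration
                                            # (log reports 2.924415556; difference is rounding)
    print(f"E2E={e2e:.9f}s SLO={slo:.9f}s")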
Dec 16 13:05:35.779721 kubelet[4018]: I1216 13:05:35.779682 4018 scope.go:117] "RemoveContainer" containerID="dea7de33a2eae6502b09c05ceb0407984ac072cef620e51f2e5045990fa28628" Dec 16 13:05:35.783612 containerd[2540]: time="2025-12-16T13:05:35.782758269Z" level=info msg="CreateContainer within sandbox \"2aea087d4e2138814b7856ae3e3655a4f37d6d7e494b5231f88f723d94c7b788\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Dec 16 13:05:35.813997 containerd[2540]: time="2025-12-16T13:05:35.813957623Z" level=info msg="Container 0f5b0ac7b5be33b938c750c6e53438b9ed36bf7a191e54bf0806d68021508a52: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:05:35.837058 containerd[2540]: time="2025-12-16T13:05:35.837029486Z" level=info msg="CreateContainer within sandbox \"2aea087d4e2138814b7856ae3e3655a4f37d6d7e494b5231f88f723d94c7b788\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"0f5b0ac7b5be33b938c750c6e53438b9ed36bf7a191e54bf0806d68021508a52\"" Dec 16 13:05:35.837813 containerd[2540]: time="2025-12-16T13:05:35.837456494Z" level=info msg="StartContainer for \"0f5b0ac7b5be33b938c750c6e53438b9ed36bf7a191e54bf0806d68021508a52\"" Dec 16 13:05:35.838869 containerd[2540]: time="2025-12-16T13:05:35.838841669Z" level=info msg="connecting to shim 0f5b0ac7b5be33b938c750c6e53438b9ed36bf7a191e54bf0806d68021508a52" address="unix:///run/containerd/s/2a87f714ee43f3e9916f692c9a0975f5d7537f8589c884d4c6162e4ced62c0c8" protocol=ttrpc version=3 Dec 16 13:05:35.861554 systemd[1]: Started cri-containerd-0f5b0ac7b5be33b938c750c6e53438b9ed36bf7a191e54bf0806d68021508a52.scope - libcontainer container 0f5b0ac7b5be33b938c750c6e53438b9ed36bf7a191e54bf0806d68021508a52. Dec 16 13:05:35.871000 audit: BPF prog-id=175 op=LOAD Dec 16 13:05:35.873387 kernel: audit: type=1334 audit(1765890335.871:551): prog-id=175 op=LOAD Dec 16 13:05:35.872000 audit: BPF prog-id=176 op=LOAD Dec 16 13:05:35.872000 audit[4409]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178238 a2=98 a3=0 items=0 ppid=4203 pid=4409 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:35.878557 kernel: audit: type=1334 audit(1765890335.872:552): prog-id=176 op=LOAD Dec 16 13:05:35.878659 kernel: audit: type=1300 audit(1765890335.872:552): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178238 a2=98 a3=0 items=0 ppid=4203 pid=4409 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:35.884392 kernel: audit: type=1327 audit(1765890335.872:552): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3066356230616337623562653333623933386337353063366535333433 Dec 16 13:05:35.872000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3066356230616337623562653333623933386337353063366535333433 Dec 16 13:05:35.886072 kernel: audit: type=1334 audit(1765890335.872:553): prog-id=176 op=UNLOAD Dec 16 13:05:35.872000 audit: BPF prog-id=176 op=UNLOAD Dec 16 13:05:35.872000 audit[4409]: SYSCALL arch=c000003e syscall=3 
success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4203 pid=4409 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:35.896401 kernel: audit: type=1300 audit(1765890335.872:553): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4203 pid=4409 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:35.896462 kernel: audit: type=1327 audit(1765890335.872:553): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3066356230616337623562653333623933386337353063366535333433 Dec 16 13:05:35.872000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3066356230616337623562653333623933386337353063366535333433 Dec 16 13:05:35.872000 audit: BPF prog-id=177 op=LOAD Dec 16 13:05:35.899483 kernel: audit: type=1334 audit(1765890335.872:554): prog-id=177 op=LOAD Dec 16 13:05:35.872000 audit[4409]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178488 a2=98 a3=0 items=0 ppid=4203 pid=4409 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:35.872000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3066356230616337623562653333623933386337353063366535333433 Dec 16 13:05:35.873000 audit: BPF prog-id=178 op=LOAD Dec 16 13:05:35.873000 audit[4409]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000178218 a2=98 a3=0 items=0 ppid=4203 pid=4409 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:35.873000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3066356230616337623562653333623933386337353063366535333433 Dec 16 13:05:35.873000 audit: BPF prog-id=178 op=UNLOAD Dec 16 13:05:35.873000 audit[4409]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4203 pid=4409 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:35.873000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3066356230616337623562653333623933386337353063366535333433 Dec 16 13:05:35.873000 audit: BPF prog-id=177 op=UNLOAD Dec 16 13:05:35.873000 audit[4409]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4203 
pid=4409 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:35.873000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3066356230616337623562653333623933386337353063366535333433 Dec 16 13:05:35.873000 audit: BPF prog-id=179 op=LOAD Dec 16 13:05:35.873000 audit[4409]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001786e8 a2=98 a3=0 items=0 ppid=4203 pid=4409 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:35.873000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3066356230616337623562653333623933386337353063366535333433 Dec 16 13:05:35.916276 containerd[2540]: time="2025-12-16T13:05:35.916234912Z" level=info msg="StartContainer for \"0f5b0ac7b5be33b938c750c6e53438b9ed36bf7a191e54bf0806d68021508a52\" returns successfully" Dec 16 13:05:37.409404 sudo[2990]: pam_unix(sudo:session): session closed for user root Dec 16 13:05:37.409000 audit[2990]: USER_END pid=2990 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 13:05:37.409000 audit[2990]: CRED_DISP pid=2990 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 13:05:37.504451 sshd[2989]: Connection closed by 10.200.16.10 port 43342 Dec 16 13:05:37.505036 sshd-session[2986]: pam_unix(sshd:session): session closed for user core Dec 16 13:05:37.508000 audit[2986]: USER_END pid=2986 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:05:37.508000 audit[2986]: CRED_DISP pid=2986 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:05:37.513317 systemd[1]: sshd@6-10.200.4.43:22-10.200.16.10:43342.service: Deactivated successfully. Dec 16 13:05:37.513000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.200.4.43:22-10.200.16.10:43342 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:05:37.517413 systemd[1]: session-9.scope: Deactivated successfully. Dec 16 13:05:37.517995 systemd[1]: session-9.scope: Consumed 3.541s CPU time, 234.2M memory peak. Dec 16 13:05:37.522690 systemd-logind[2506]: Session 9 logged out. Waiting for processes to exit. Dec 16 13:05:37.524168 systemd-logind[2506]: Removed session 9. 
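Note on the audit PROCTITLE records above and below: the proctitle field is the process argv, hex-encoded with NUL bytes separating the arguments. A minimal decoder (plain Python; the hex prefix below is copied verbatim from the start of one of the runc records, the remainder of that record's value is left untouched in the log):

    def decode_proctitle(hexstr: str) -> list[str]:
        """Split an audit PROCTITLE hex payload into its NUL-separated argv."""
        return bytes.fromhex(hexstr).decode("utf-8", errors="replace").split("\x00")

    # First bytes of the runc PROCTITLE value logged above:
    print(decode_proctitle("72756E63002D2D726F6F74"))   # ['runc', '--root']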
Dec 16 13:05:40.200630 kernel: kauditd_printk_skb: 19 callbacks suppressed Dec 16 13:05:40.200799 kernel: audit: type=1325 audit(1765890340.195:564): table=filter:108 family=2 entries=15 op=nft_register_rule pid=4464 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:05:40.195000 audit[4464]: NETFILTER_CFG table=filter:108 family=2 entries=15 op=nft_register_rule pid=4464 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:05:40.195000 audit[4464]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7fff7bb3dfd0 a2=0 a3=7fff7bb3dfbc items=0 ppid=4127 pid=4464 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:40.210358 kernel: audit: type=1300 audit(1765890340.195:564): arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7fff7bb3dfd0 a2=0 a3=7fff7bb3dfbc items=0 ppid=4127 pid=4464 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:40.195000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:05:40.215351 kernel: audit: type=1327 audit(1765890340.195:564): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:05:40.203000 audit[4464]: NETFILTER_CFG table=nat:109 family=2 entries=12 op=nft_register_rule pid=4464 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:05:40.219356 kernel: audit: type=1325 audit(1765890340.203:565): table=nat:109 family=2 entries=12 op=nft_register_rule pid=4464 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:05:40.203000 audit[4464]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff7bb3dfd0 a2=0 a3=0 items=0 ppid=4127 pid=4464 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:40.226476 kernel: audit: type=1300 audit(1765890340.203:565): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff7bb3dfd0 a2=0 a3=0 items=0 ppid=4127 pid=4464 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:40.203000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:05:40.232354 kernel: audit: type=1327 audit(1765890340.203:565): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:05:41.229000 audit[4466]: NETFILTER_CFG table=filter:110 family=2 entries=16 op=nft_register_rule pid=4466 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:05:41.233495 kernel: audit: type=1325 audit(1765890341.229:566): table=filter:110 family=2 entries=16 op=nft_register_rule pid=4466 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:05:41.229000 audit[4466]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffccffb9fc0 a2=0 a3=7ffccffb9fac items=0 ppid=4127 pid=4466 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:41.240367 kernel: audit: type=1300 audit(1765890341.229:566): arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffccffb9fc0 a2=0 a3=7ffccffb9fac items=0 ppid=4127 pid=4466 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:41.229000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:05:41.247358 kernel: audit: type=1327 audit(1765890341.229:566): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:05:41.241000 audit[4466]: NETFILTER_CFG table=nat:111 family=2 entries=12 op=nft_register_rule pid=4466 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:05:41.251361 kernel: audit: type=1325 audit(1765890341.241:567): table=nat:111 family=2 entries=12 op=nft_register_rule pid=4466 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:05:41.241000 audit[4466]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffccffb9fc0 a2=0 a3=0 items=0 ppid=4127 pid=4466 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:41.241000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:05:42.425000 audit[4468]: NETFILTER_CFG table=filter:112 family=2 entries=17 op=nft_register_rule pid=4468 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:05:42.425000 audit[4468]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffe0d7dd3b0 a2=0 a3=7ffe0d7dd39c items=0 ppid=4127 pid=4468 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:42.425000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:05:42.429000 audit[4468]: NETFILTER_CFG table=nat:113 family=2 entries=12 op=nft_register_rule pid=4468 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:05:42.429000 audit[4468]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe0d7dd3b0 a2=0 a3=0 items=0 ppid=4127 pid=4468 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:42.429000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:05:43.473000 audit[4470]: NETFILTER_CFG table=filter:114 family=2 entries=19 op=nft_register_rule pid=4470 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:05:43.473000 audit[4470]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffca143f3d0 a2=0 a3=7ffca143f3bc items=0 ppid=4127 pid=4470 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:43.473000 audit: 
PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:05:43.476000 audit[4470]: NETFILTER_CFG table=nat:115 family=2 entries=12 op=nft_register_rule pid=4470 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:05:43.476000 audit[4470]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffca143f3d0 a2=0 a3=0 items=0 ppid=4127 pid=4470 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:43.476000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:05:44.214852 systemd[1]: Created slice kubepods-besteffort-pod13210662_e215_4a49_984b_c07c7344d05e.slice - libcontainer container kubepods-besteffort-pod13210662_e215_4a49_984b_c07c7344d05e.slice. Dec 16 13:05:44.232299 kubelet[4018]: I1216 13:05:44.232254 4018 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/13210662-e215-4a49-984b-c07c7344d05e-typha-certs\") pod \"calico-typha-6957547d5b-bnp7w\" (UID: \"13210662-e215-4a49-984b-c07c7344d05e\") " pod="calico-system/calico-typha-6957547d5b-bnp7w" Dec 16 13:05:44.232647 kubelet[4018]: I1216 13:05:44.232478 4018 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-smzhc\" (UniqueName: \"kubernetes.io/projected/13210662-e215-4a49-984b-c07c7344d05e-kube-api-access-smzhc\") pod \"calico-typha-6957547d5b-bnp7w\" (UID: \"13210662-e215-4a49-984b-c07c7344d05e\") " pod="calico-system/calico-typha-6957547d5b-bnp7w" Dec 16 13:05:44.232647 kubelet[4018]: I1216 13:05:44.232514 4018 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/13210662-e215-4a49-984b-c07c7344d05e-tigera-ca-bundle\") pod \"calico-typha-6957547d5b-bnp7w\" (UID: \"13210662-e215-4a49-984b-c07c7344d05e\") " pod="calico-system/calico-typha-6957547d5b-bnp7w" Dec 16 13:05:44.476406 systemd[1]: Created slice kubepods-besteffort-pod16527c30_b58e_44b4_88b6_f2e5201aa95e.slice - libcontainer container kubepods-besteffort-pod16527c30_b58e_44b4_88b6_f2e5201aa95e.slice. 
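Applying the same decoding to the PROCTITLE value carried by the NETFILTER_CFG records in this section (value copied verbatim from the log) shows the exact command run by the child processes of pid 4127 each time the filter and nat tables are re-registered; nothing here is assumed beyond standard hex decoding:

    payload = ("69707461626C65732D726573746F7265002D770035"
               "002D2D6E6F666C757368002D2D636F756E74657273")
    print(bytes.fromhex(payload).decode().split("\x00"))
    # ['iptables-restore', '-w', '5', '--noflush', '--counters']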
Dec 16 13:05:44.492000 audit[4475]: NETFILTER_CFG table=filter:116 family=2 entries=21 op=nft_register_rule pid=4475 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:05:44.492000 audit[4475]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffd55edeac0 a2=0 a3=7ffd55edeaac items=0 ppid=4127 pid=4475 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:44.492000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:05:44.502000 audit[4475]: NETFILTER_CFG table=nat:117 family=2 entries=12 op=nft_register_rule pid=4475 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:05:44.502000 audit[4475]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd55edeac0 a2=0 a3=0 items=0 ppid=4127 pid=4475 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:44.502000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:05:44.534523 kubelet[4018]: I1216 13:05:44.534493 4018 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/16527c30-b58e-44b4-88b6-f2e5201aa95e-cni-net-dir\") pod \"calico-node-s9njd\" (UID: \"16527c30-b58e-44b4-88b6-f2e5201aa95e\") " pod="calico-system/calico-node-s9njd" Dec 16 13:05:44.534621 kubelet[4018]: I1216 13:05:44.534527 4018 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/16527c30-b58e-44b4-88b6-f2e5201aa95e-tigera-ca-bundle\") pod \"calico-node-s9njd\" (UID: \"16527c30-b58e-44b4-88b6-f2e5201aa95e\") " pod="calico-system/calico-node-s9njd" Dec 16 13:05:44.534621 kubelet[4018]: I1216 13:05:44.534545 4018 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/16527c30-b58e-44b4-88b6-f2e5201aa95e-xtables-lock\") pod \"calico-node-s9njd\" (UID: \"16527c30-b58e-44b4-88b6-f2e5201aa95e\") " pod="calico-system/calico-node-s9njd" Dec 16 13:05:44.534621 kubelet[4018]: I1216 13:05:44.534562 4018 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/16527c30-b58e-44b4-88b6-f2e5201aa95e-lib-modules\") pod \"calico-node-s9njd\" (UID: \"16527c30-b58e-44b4-88b6-f2e5201aa95e\") " pod="calico-system/calico-node-s9njd" Dec 16 13:05:44.534621 kubelet[4018]: I1216 13:05:44.534596 4018 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/16527c30-b58e-44b4-88b6-f2e5201aa95e-cni-bin-dir\") pod \"calico-node-s9njd\" (UID: \"16527c30-b58e-44b4-88b6-f2e5201aa95e\") " pod="calico-system/calico-node-s9njd" Dec 16 13:05:44.534621 kubelet[4018]: I1216 13:05:44.534614 4018 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/16527c30-b58e-44b4-88b6-f2e5201aa95e-flexvol-driver-host\") pod \"calico-node-s9njd\" 
(UID: \"16527c30-b58e-44b4-88b6-f2e5201aa95e\") " pod="calico-system/calico-node-s9njd" Dec 16 13:05:44.534747 kubelet[4018]: I1216 13:05:44.534633 4018 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/16527c30-b58e-44b4-88b6-f2e5201aa95e-var-lib-calico\") pod \"calico-node-s9njd\" (UID: \"16527c30-b58e-44b4-88b6-f2e5201aa95e\") " pod="calico-system/calico-node-s9njd" Dec 16 13:05:44.534747 kubelet[4018]: I1216 13:05:44.534653 4018 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/16527c30-b58e-44b4-88b6-f2e5201aa95e-node-certs\") pod \"calico-node-s9njd\" (UID: \"16527c30-b58e-44b4-88b6-f2e5201aa95e\") " pod="calico-system/calico-node-s9njd" Dec 16 13:05:44.534747 kubelet[4018]: I1216 13:05:44.534670 4018 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/16527c30-b58e-44b4-88b6-f2e5201aa95e-policysync\") pod \"calico-node-s9njd\" (UID: \"16527c30-b58e-44b4-88b6-f2e5201aa95e\") " pod="calico-system/calico-node-s9njd" Dec 16 13:05:44.534747 kubelet[4018]: I1216 13:05:44.534690 4018 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/16527c30-b58e-44b4-88b6-f2e5201aa95e-var-run-calico\") pod \"calico-node-s9njd\" (UID: \"16527c30-b58e-44b4-88b6-f2e5201aa95e\") " pod="calico-system/calico-node-s9njd" Dec 16 13:05:44.534747 kubelet[4018]: I1216 13:05:44.534711 4018 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/16527c30-b58e-44b4-88b6-f2e5201aa95e-cni-log-dir\") pod \"calico-node-s9njd\" (UID: \"16527c30-b58e-44b4-88b6-f2e5201aa95e\") " pod="calico-system/calico-node-s9njd" Dec 16 13:05:44.534868 kubelet[4018]: I1216 13:05:44.534731 4018 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njwdn\" (UniqueName: \"kubernetes.io/projected/16527c30-b58e-44b4-88b6-f2e5201aa95e-kube-api-access-njwdn\") pod \"calico-node-s9njd\" (UID: \"16527c30-b58e-44b4-88b6-f2e5201aa95e\") " pod="calico-system/calico-node-s9njd" Dec 16 13:05:44.536135 containerd[2540]: time="2025-12-16T13:05:44.536096119Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6957547d5b-bnp7w,Uid:13210662-e215-4a49-984b-c07c7344d05e,Namespace:calico-system,Attempt:0,}" Dec 16 13:05:44.630149 containerd[2540]: time="2025-12-16T13:05:44.630108799Z" level=info msg="connecting to shim 423f35e941ec8b4afcb2cae190266b8f2077452f38c27c74eaa1a057280d96fc" address="unix:///run/containerd/s/e21642adc8aadf8246f90624d3276e5ca1d53110a1e1a57a1002cea55717bff4" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:05:44.647445 kubelet[4018]: E1216 13:05:44.647371 4018 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:44.647445 kubelet[4018]: W1216 13:05:44.647396 4018 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:44.647445 kubelet[4018]: E1216 13:05:44.647419 4018 plugins.go:697] "Error dynamically probing plugins" err="error creating 
Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:44.660807 kubelet[4018]: E1216 13:05:44.660728 4018 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:44.660807 kubelet[4018]: W1216 13:05:44.660761 4018 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:44.660807 kubelet[4018]: E1216 13:05:44.660780 4018 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:44.667460 kubelet[4018]: E1216 13:05:44.667391 4018 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:44.667460 kubelet[4018]: W1216 13:05:44.667411 4018 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:44.667799 kubelet[4018]: E1216 13:05:44.667440 4018 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:44.677554 systemd[1]: Started cri-containerd-423f35e941ec8b4afcb2cae190266b8f2077452f38c27c74eaa1a057280d96fc.scope - libcontainer container 423f35e941ec8b4afcb2cae190266b8f2077452f38c27c74eaa1a057280d96fc. Dec 16 13:05:44.696313 kubelet[4018]: E1216 13:05:44.696167 4018 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ctchn" podUID="940a093b-83dc-454c-8522-5e1b1f40521f" Dec 16 13:05:44.707628 kubelet[4018]: E1216 13:05:44.707608 4018 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:44.707730 kubelet[4018]: W1216 13:05:44.707718 4018 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:44.707782 kubelet[4018]: E1216 13:05:44.707773 4018 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:44.711079 kubelet[4018]: E1216 13:05:44.710356 4018 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:44.711079 kubelet[4018]: W1216 13:05:44.710374 4018 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:44.711079 kubelet[4018]: E1216 13:05:44.710390 4018 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:05:44.711079 kubelet[4018]: E1216 13:05:44.710574 4018 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:44.711079 kubelet[4018]: W1216 13:05:44.710580 4018 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:44.711079 kubelet[4018]: E1216 13:05:44.710589 4018 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:44.712513 kubelet[4018]: E1216 13:05:44.712484 4018 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:44.712513 kubelet[4018]: W1216 13:05:44.712499 4018 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:44.712513 kubelet[4018]: E1216 13:05:44.712513 4018 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:44.712689 kubelet[4018]: E1216 13:05:44.712673 4018 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:44.712689 kubelet[4018]: W1216 13:05:44.712679 4018 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:44.712689 kubelet[4018]: E1216 13:05:44.712687 4018 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:44.713097 kubelet[4018]: E1216 13:05:44.713081 4018 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:44.713097 kubelet[4018]: W1216 13:05:44.713091 4018 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:44.713171 kubelet[4018]: E1216 13:05:44.713101 4018 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:44.713284 kubelet[4018]: E1216 13:05:44.713265 4018 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:44.713284 kubelet[4018]: W1216 13:05:44.713274 4018 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:44.713284 kubelet[4018]: E1216 13:05:44.713282 4018 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:05:44.713456 kubelet[4018]: E1216 13:05:44.713405 4018 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:44.713456 kubelet[4018]: W1216 13:05:44.713411 4018 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:44.713456 kubelet[4018]: E1216 13:05:44.713419 4018 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:44.713807 kubelet[4018]: E1216 13:05:44.713791 4018 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:44.713807 kubelet[4018]: W1216 13:05:44.713806 4018 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:44.713878 kubelet[4018]: E1216 13:05:44.713815 4018 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:44.714153 kubelet[4018]: E1216 13:05:44.714141 4018 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:44.714208 kubelet[4018]: W1216 13:05:44.714153 4018 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:44.714208 kubelet[4018]: E1216 13:05:44.714178 4018 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:44.714396 kubelet[4018]: E1216 13:05:44.714385 4018 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:44.714438 kubelet[4018]: W1216 13:05:44.714409 4018 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:44.714438 kubelet[4018]: E1216 13:05:44.714418 4018 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:44.714575 kubelet[4018]: E1216 13:05:44.714563 4018 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:44.714575 kubelet[4018]: W1216 13:05:44.714573 4018 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:44.714624 kubelet[4018]: E1216 13:05:44.714581 4018 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:05:44.714721 kubelet[4018]: E1216 13:05:44.714712 4018 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:44.714748 kubelet[4018]: W1216 13:05:44.714721 4018 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:44.714748 kubelet[4018]: E1216 13:05:44.714741 4018 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:44.714863 kubelet[4018]: E1216 13:05:44.714852 4018 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:44.714863 kubelet[4018]: W1216 13:05:44.714860 4018 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:44.714913 kubelet[4018]: E1216 13:05:44.714867 4018 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:44.714994 kubelet[4018]: E1216 13:05:44.714986 4018 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:44.715018 kubelet[4018]: W1216 13:05:44.714994 4018 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:44.715018 kubelet[4018]: E1216 13:05:44.715001 4018 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:44.715115 kubelet[4018]: E1216 13:05:44.715106 4018 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:44.715115 kubelet[4018]: W1216 13:05:44.715114 4018 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:44.715162 kubelet[4018]: E1216 13:05:44.715128 4018 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:44.715283 kubelet[4018]: E1216 13:05:44.715274 4018 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:44.715314 kubelet[4018]: W1216 13:05:44.715283 4018 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:44.715314 kubelet[4018]: E1216 13:05:44.715292 4018 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:05:44.715465 kubelet[4018]: E1216 13:05:44.715452 4018 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:44.715465 kubelet[4018]: W1216 13:05:44.715461 4018 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:44.715515 kubelet[4018]: E1216 13:05:44.715469 4018 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:44.715593 kubelet[4018]: E1216 13:05:44.715585 4018 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:44.715616 kubelet[4018]: W1216 13:05:44.715594 4018 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:44.715616 kubelet[4018]: E1216 13:05:44.715601 4018 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:44.715732 kubelet[4018]: E1216 13:05:44.715719 4018 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:44.715732 kubelet[4018]: W1216 13:05:44.715728 4018 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:44.715777 kubelet[4018]: E1216 13:05:44.715735 4018 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:05:44.723000 audit: BPF prog-id=180 op=LOAD Dec 16 13:05:44.724000 audit: BPF prog-id=181 op=LOAD Dec 16 13:05:44.724000 audit[4495]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4484 pid=4495 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:44.724000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3432336633356539343165633862346166636232636165313930323636 Dec 16 13:05:44.725000 audit: BPF prog-id=181 op=UNLOAD Dec 16 13:05:44.725000 audit[4495]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4484 pid=4495 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:44.725000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3432336633356539343165633862346166636232636165313930323636 Dec 16 13:05:44.725000 audit: BPF prog-id=182 op=LOAD Dec 16 13:05:44.725000 audit[4495]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4484 pid=4495 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:44.725000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3432336633356539343165633862346166636232636165313930323636 Dec 16 13:05:44.725000 audit: BPF prog-id=183 op=LOAD Dec 16 13:05:44.725000 audit[4495]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=4484 pid=4495 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:44.725000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3432336633356539343165633862346166636232636165313930323636 Dec 16 13:05:44.725000 audit: BPF prog-id=183 op=UNLOAD Dec 16 13:05:44.725000 audit[4495]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4484 pid=4495 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:44.725000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3432336633356539343165633862346166636232636165313930323636 Dec 16 13:05:44.725000 audit: BPF prog-id=182 op=UNLOAD Dec 16 
13:05:44.725000 audit[4495]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4484 pid=4495 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:44.725000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3432336633356539343165633862346166636232636165313930323636 Dec 16 13:05:44.725000 audit: BPF prog-id=184 op=LOAD Dec 16 13:05:44.725000 audit[4495]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=4484 pid=4495 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:44.725000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3432336633356539343165633862346166636232636165313930323636 Dec 16 13:05:44.742893 kubelet[4018]: E1216 13:05:44.742876 4018 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:44.742893 kubelet[4018]: W1216 13:05:44.742891 4018 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:44.742984 kubelet[4018]: E1216 13:05:44.742903 4018 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:44.742984 kubelet[4018]: I1216 13:05:44.742924 4018 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/940a093b-83dc-454c-8522-5e1b1f40521f-varrun\") pod \"csi-node-driver-ctchn\" (UID: \"940a093b-83dc-454c-8522-5e1b1f40521f\") " pod="calico-system/csi-node-driver-ctchn" Dec 16 13:05:44.743087 kubelet[4018]: E1216 13:05:44.743077 4018 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:44.743112 kubelet[4018]: W1216 13:05:44.743088 4018 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:44.743112 kubelet[4018]: E1216 13:05:44.743096 4018 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:05:44.744351 kubelet[4018]: I1216 13:05:44.743182 4018 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4t2s\" (UniqueName: \"kubernetes.io/projected/940a093b-83dc-454c-8522-5e1b1f40521f-kube-api-access-f4t2s\") pod \"csi-node-driver-ctchn\" (UID: \"940a093b-83dc-454c-8522-5e1b1f40521f\") " pod="calico-system/csi-node-driver-ctchn" Dec 16 13:05:44.744351 kubelet[4018]: E1216 13:05:44.743223 4018 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:44.744351 kubelet[4018]: W1216 13:05:44.743228 4018 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:44.744351 kubelet[4018]: E1216 13:05:44.743235 4018 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:44.744351 kubelet[4018]: E1216 13:05:44.743318 4018 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:44.744351 kubelet[4018]: W1216 13:05:44.743323 4018 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:44.744351 kubelet[4018]: E1216 13:05:44.743328 4018 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:44.744351 kubelet[4018]: E1216 13:05:44.743418 4018 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:44.744351 kubelet[4018]: W1216 13:05:44.743423 4018 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:44.744559 kubelet[4018]: E1216 13:05:44.743430 4018 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:44.744559 kubelet[4018]: I1216 13:05:44.743443 4018 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/940a093b-83dc-454c-8522-5e1b1f40521f-kubelet-dir\") pod \"csi-node-driver-ctchn\" (UID: \"940a093b-83dc-454c-8522-5e1b1f40521f\") " pod="calico-system/csi-node-driver-ctchn" Dec 16 13:05:44.744559 kubelet[4018]: E1216 13:05:44.743541 4018 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:44.744559 kubelet[4018]: W1216 13:05:44.743545 4018 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:44.744559 kubelet[4018]: E1216 13:05:44.743550 4018 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:05:44.744559 kubelet[4018]: I1216 13:05:44.743561 4018 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/940a093b-83dc-454c-8522-5e1b1f40521f-registration-dir\") pod \"csi-node-driver-ctchn\" (UID: \"940a093b-83dc-454c-8522-5e1b1f40521f\") " pod="calico-system/csi-node-driver-ctchn" Dec 16 13:05:44.744559 kubelet[4018]: E1216 13:05:44.743667 4018 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:44.744559 kubelet[4018]: W1216 13:05:44.743671 4018 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:44.744736 kubelet[4018]: E1216 13:05:44.743676 4018 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:44.744736 kubelet[4018]: I1216 13:05:44.743695 4018 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/940a093b-83dc-454c-8522-5e1b1f40521f-socket-dir\") pod \"csi-node-driver-ctchn\" (UID: \"940a093b-83dc-454c-8522-5e1b1f40521f\") " pod="calico-system/csi-node-driver-ctchn" Dec 16 13:05:44.744736 kubelet[4018]: E1216 13:05:44.743813 4018 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:44.744736 kubelet[4018]: W1216 13:05:44.743817 4018 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:44.744736 kubelet[4018]: E1216 13:05:44.743823 4018 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:44.744736 kubelet[4018]: E1216 13:05:44.743914 4018 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:44.744736 kubelet[4018]: W1216 13:05:44.743918 4018 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:44.744736 kubelet[4018]: E1216 13:05:44.743925 4018 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:44.744736 kubelet[4018]: E1216 13:05:44.744025 4018 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:44.744897 kubelet[4018]: W1216 13:05:44.744030 4018 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:44.744897 kubelet[4018]: E1216 13:05:44.744036 4018 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:05:44.744897 kubelet[4018]: E1216 13:05:44.744125 4018 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:44.744897 kubelet[4018]: W1216 13:05:44.744129 4018 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:44.744897 kubelet[4018]: E1216 13:05:44.744134 4018 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:44.744897 kubelet[4018]: E1216 13:05:44.744218 4018 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:44.744897 kubelet[4018]: W1216 13:05:44.744222 4018 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:44.744897 kubelet[4018]: E1216 13:05:44.744227 4018 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:44.744897 kubelet[4018]: E1216 13:05:44.744322 4018 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:44.744897 kubelet[4018]: W1216 13:05:44.744326 4018 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:44.745101 kubelet[4018]: E1216 13:05:44.744331 4018 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:44.745101 kubelet[4018]: E1216 13:05:44.744429 4018 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:44.745101 kubelet[4018]: W1216 13:05:44.744434 4018 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:44.745101 kubelet[4018]: E1216 13:05:44.744439 4018 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:44.745101 kubelet[4018]: E1216 13:05:44.744519 4018 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:44.745101 kubelet[4018]: W1216 13:05:44.744523 4018 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:44.745101 kubelet[4018]: E1216 13:05:44.744529 4018 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:05:44.767503 containerd[2540]: time="2025-12-16T13:05:44.767467702Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6957547d5b-bnp7w,Uid:13210662-e215-4a49-984b-c07c7344d05e,Namespace:calico-system,Attempt:0,} returns sandbox id \"423f35e941ec8b4afcb2cae190266b8f2077452f38c27c74eaa1a057280d96fc\"" Dec 16 13:05:44.769073 containerd[2540]: time="2025-12-16T13:05:44.769049293Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Dec 16 13:05:44.791785 containerd[2540]: time="2025-12-16T13:05:44.791759062Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-s9njd,Uid:16527c30-b58e-44b4-88b6-f2e5201aa95e,Namespace:calico-system,Attempt:0,}" Dec 16 13:05:44.844990 kubelet[4018]: E1216 13:05:44.844968 4018 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:44.844990 kubelet[4018]: W1216 13:05:44.844988 4018 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:44.845202 kubelet[4018]: E1216 13:05:44.845005 4018 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:44.845202 kubelet[4018]: E1216 13:05:44.845147 4018 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:44.845202 kubelet[4018]: W1216 13:05:44.845153 4018 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:44.845202 kubelet[4018]: E1216 13:05:44.845160 4018 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:44.845328 kubelet[4018]: E1216 13:05:44.845308 4018 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:44.845328 kubelet[4018]: W1216 13:05:44.845316 4018 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:44.845423 kubelet[4018]: E1216 13:05:44.845335 4018 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:44.845530 kubelet[4018]: E1216 13:05:44.845497 4018 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:44.845530 kubelet[4018]: W1216 13:05:44.845504 4018 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:44.845530 kubelet[4018]: E1216 13:05:44.845512 4018 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:05:44.845772 kubelet[4018]: E1216 13:05:44.845647 4018 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:44.845772 kubelet[4018]: W1216 13:05:44.845653 4018 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:44.845772 kubelet[4018]: E1216 13:05:44.845660 4018 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:44.845878 kubelet[4018]: E1216 13:05:44.845864 4018 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:44.845878 kubelet[4018]: W1216 13:05:44.845876 4018 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:44.845940 kubelet[4018]: E1216 13:05:44.845885 4018 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:44.846011 kubelet[4018]: E1216 13:05:44.846001 4018 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:44.846011 kubelet[4018]: W1216 13:05:44.846009 4018 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:44.846075 kubelet[4018]: E1216 13:05:44.846016 4018 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:44.846124 kubelet[4018]: E1216 13:05:44.846112 4018 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:44.846124 kubelet[4018]: W1216 13:05:44.846120 4018 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:44.846189 kubelet[4018]: E1216 13:05:44.846127 4018 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:44.846310 kubelet[4018]: E1216 13:05:44.846299 4018 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:44.846310 kubelet[4018]: W1216 13:05:44.846309 4018 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:44.846386 kubelet[4018]: E1216 13:05:44.846316 4018 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:05:44.846476 kubelet[4018]: E1216 13:05:44.846465 4018 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:44.846476 kubelet[4018]: W1216 13:05:44.846475 4018 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:44.846524 kubelet[4018]: E1216 13:05:44.846482 4018 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:44.846579 kubelet[4018]: E1216 13:05:44.846569 4018 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:44.846579 kubelet[4018]: W1216 13:05:44.846576 4018 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:44.846651 kubelet[4018]: E1216 13:05:44.846582 4018 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:44.846682 kubelet[4018]: E1216 13:05:44.846671 4018 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:44.846682 kubelet[4018]: W1216 13:05:44.846676 4018 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:44.846736 kubelet[4018]: E1216 13:05:44.846682 4018 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:44.846879 kubelet[4018]: E1216 13:05:44.846856 4018 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:44.846879 kubelet[4018]: W1216 13:05:44.846865 4018 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:44.846879 kubelet[4018]: E1216 13:05:44.846873 4018 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:44.847027 kubelet[4018]: E1216 13:05:44.846991 4018 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:44.847027 kubelet[4018]: W1216 13:05:44.846997 4018 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:44.847027 kubelet[4018]: E1216 13:05:44.847004 4018 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:05:44.847169 kubelet[4018]: E1216 13:05:44.847155 4018 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:44.847169 kubelet[4018]: W1216 13:05:44.847166 4018 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:44.847252 kubelet[4018]: E1216 13:05:44.847175 4018 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:44.847314 kubelet[4018]: E1216 13:05:44.847302 4018 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:44.847314 kubelet[4018]: W1216 13:05:44.847308 4018 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:44.847454 kubelet[4018]: E1216 13:05:44.847315 4018 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:44.847518 kubelet[4018]: E1216 13:05:44.847506 4018 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:44.847518 kubelet[4018]: W1216 13:05:44.847517 4018 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:44.847576 kubelet[4018]: E1216 13:05:44.847525 4018 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:44.847644 kubelet[4018]: E1216 13:05:44.847632 4018 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:44.847644 kubelet[4018]: W1216 13:05:44.847639 4018 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:44.847690 kubelet[4018]: E1216 13:05:44.847646 4018 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:44.847806 kubelet[4018]: E1216 13:05:44.847795 4018 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:44.847806 kubelet[4018]: W1216 13:05:44.847804 4018 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:44.847855 kubelet[4018]: E1216 13:05:44.847811 4018 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:05:44.847947 kubelet[4018]: E1216 13:05:44.847937 4018 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:44.847947 kubelet[4018]: W1216 13:05:44.847945 4018 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:44.847996 kubelet[4018]: E1216 13:05:44.847951 4018 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:44.848078 kubelet[4018]: E1216 13:05:44.848066 4018 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:44.848078 kubelet[4018]: W1216 13:05:44.848075 4018 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:44.848134 kubelet[4018]: E1216 13:05:44.848082 4018 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:44.848243 kubelet[4018]: E1216 13:05:44.848234 4018 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:44.848243 kubelet[4018]: W1216 13:05:44.848242 4018 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:44.848298 kubelet[4018]: E1216 13:05:44.848249 4018 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:44.848404 kubelet[4018]: E1216 13:05:44.848392 4018 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:44.848404 kubelet[4018]: W1216 13:05:44.848401 4018 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:44.848467 kubelet[4018]: E1216 13:05:44.848408 4018 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:44.848685 kubelet[4018]: E1216 13:05:44.848672 4018 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:44.848685 kubelet[4018]: W1216 13:05:44.848685 4018 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:44.848752 kubelet[4018]: E1216 13:05:44.848694 4018 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:05:44.848926 kubelet[4018]: E1216 13:05:44.848915 4018 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:44.848962 kubelet[4018]: W1216 13:05:44.848927 4018 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:44.848962 kubelet[4018]: E1216 13:05:44.848935 4018 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:44.855758 kubelet[4018]: E1216 13:05:44.855726 4018 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:44.855930 kubelet[4018]: W1216 13:05:44.855858 4018 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:44.855930 kubelet[4018]: E1216 13:05:44.855877 4018 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:44.863048 containerd[2540]: time="2025-12-16T13:05:44.863001654Z" level=info msg="connecting to shim a9bf4258a59d2691c327c6bfa2ed84410b663bbe3142c5e898da5587a6eaf1b1" address="unix:///run/containerd/s/30bc6743cbb383c683f3fce89c5b19d52e404fe88f60846b755da225539a8068" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:05:44.888552 systemd[1]: Started cri-containerd-a9bf4258a59d2691c327c6bfa2ed84410b663bbe3142c5e898da5587a6eaf1b1.scope - libcontainer container a9bf4258a59d2691c327c6bfa2ed84410b663bbe3142c5e898da5587a6eaf1b1. 
Dec 16 13:05:44.896000 audit: BPF prog-id=185 op=LOAD Dec 16 13:05:44.897000 audit: BPF prog-id=186 op=LOAD Dec 16 13:05:44.897000 audit[4619]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4607 pid=4619 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:44.897000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6139626634323538613539643236393163333237633662666132656438 Dec 16 13:05:44.897000 audit: BPF prog-id=186 op=UNLOAD Dec 16 13:05:44.897000 audit[4619]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4607 pid=4619 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:44.897000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6139626634323538613539643236393163333237633662666132656438 Dec 16 13:05:44.897000 audit: BPF prog-id=187 op=LOAD Dec 16 13:05:44.897000 audit[4619]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4607 pid=4619 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:44.897000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6139626634323538613539643236393163333237633662666132656438 Dec 16 13:05:44.897000 audit: BPF prog-id=188 op=LOAD Dec 16 13:05:44.897000 audit[4619]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=4607 pid=4619 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:44.897000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6139626634323538613539643236393163333237633662666132656438 Dec 16 13:05:44.897000 audit: BPF prog-id=188 op=UNLOAD Dec 16 13:05:44.897000 audit[4619]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4607 pid=4619 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:44.897000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6139626634323538613539643236393163333237633662666132656438 Dec 16 13:05:44.897000 audit: BPF prog-id=187 op=UNLOAD Dec 16 13:05:44.897000 audit[4619]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4607 pid=4619 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:44.897000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6139626634323538613539643236393163333237633662666132656438 Dec 16 13:05:44.897000 audit: BPF prog-id=189 op=LOAD Dec 16 13:05:44.897000 audit[4619]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=4607 pid=4619 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:44.897000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6139626634323538613539643236393163333237633662666132656438 Dec 16 13:05:44.915693 containerd[2540]: time="2025-12-16T13:05:44.915653313Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-s9njd,Uid:16527c30-b58e-44b4-88b6-f2e5201aa95e,Namespace:calico-system,Attempt:0,} returns sandbox id \"a9bf4258a59d2691c327c6bfa2ed84410b663bbe3142c5e898da5587a6eaf1b1\"" Dec 16 13:05:46.000284 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2291584347.mount: Deactivated successfully. Dec 16 13:05:46.576944 containerd[2540]: time="2025-12-16T13:05:46.576892361Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:05:46.581055 containerd[2540]: time="2025-12-16T13:05:46.580927669Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33735893" Dec 16 13:05:46.590601 containerd[2540]: time="2025-12-16T13:05:46.590552176Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:05:46.598138 containerd[2540]: time="2025-12-16T13:05:46.598099735Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:05:46.598864 containerd[2540]: time="2025-12-16T13:05:46.598585239Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 1.829503456s" Dec 16 13:05:46.598864 containerd[2540]: time="2025-12-16T13:05:46.598616793Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Dec 16 13:05:46.599732 containerd[2540]: time="2025-12-16T13:05:46.599701237Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Dec 16 13:05:46.622800 containerd[2540]: 
time="2025-12-16T13:05:46.622770110Z" level=info msg="CreateContainer within sandbox \"423f35e941ec8b4afcb2cae190266b8f2077452f38c27c74eaa1a057280d96fc\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Dec 16 13:05:46.652318 containerd[2540]: time="2025-12-16T13:05:46.652287223Z" level=info msg="Container 141731614710ab816742c2b5dd282611618a2228319f0200d5014639e4a5ac6c: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:05:46.676073 containerd[2540]: time="2025-12-16T13:05:46.676042409Z" level=info msg="CreateContainer within sandbox \"423f35e941ec8b4afcb2cae190266b8f2077452f38c27c74eaa1a057280d96fc\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"141731614710ab816742c2b5dd282611618a2228319f0200d5014639e4a5ac6c\"" Dec 16 13:05:46.677625 containerd[2540]: time="2025-12-16T13:05:46.677596817Z" level=info msg="StartContainer for \"141731614710ab816742c2b5dd282611618a2228319f0200d5014639e4a5ac6c\"" Dec 16 13:05:46.678528 containerd[2540]: time="2025-12-16T13:05:46.678502526Z" level=info msg="connecting to shim 141731614710ab816742c2b5dd282611618a2228319f0200d5014639e4a5ac6c" address="unix:///run/containerd/s/e21642adc8aadf8246f90624d3276e5ca1d53110a1e1a57a1002cea55717bff4" protocol=ttrpc version=3 Dec 16 13:05:46.700527 systemd[1]: Started cri-containerd-141731614710ab816742c2b5dd282611618a2228319f0200d5014639e4a5ac6c.scope - libcontainer container 141731614710ab816742c2b5dd282611618a2228319f0200d5014639e4a5ac6c. Dec 16 13:05:46.708796 kubelet[4018]: E1216 13:05:46.708750 4018 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ctchn" podUID="940a093b-83dc-454c-8522-5e1b1f40521f" Dec 16 13:05:46.716000 audit: BPF prog-id=190 op=LOAD Dec 16 13:05:46.718862 kernel: kauditd_printk_skb: 64 callbacks suppressed Dec 16 13:05:46.718929 kernel: audit: type=1334 audit(1765890346.716:590): prog-id=190 op=LOAD Dec 16 13:05:46.716000 audit: BPF prog-id=191 op=LOAD Dec 16 13:05:46.716000 audit[4658]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4484 pid=4658 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:46.728578 kernel: audit: type=1334 audit(1765890346.716:591): prog-id=191 op=LOAD Dec 16 13:05:46.728653 kernel: audit: type=1300 audit(1765890346.716:591): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4484 pid=4658 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:46.716000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3134313733313631343731306162383136373432633262356464323832 Dec 16 13:05:46.742356 kernel: audit: type=1327 audit(1765890346.716:591): 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3134313733313631343731306162383136373432633262356464323832 Dec 16 13:05:46.751365 kernel: audit: type=1334 audit(1765890346.716:592): prog-id=191 op=UNLOAD Dec 16 13:05:46.716000 audit: BPF prog-id=191 op=UNLOAD Dec 16 13:05:46.716000 audit[4658]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4484 pid=4658 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:46.760375 kernel: audit: type=1300 audit(1765890346.716:592): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4484 pid=4658 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:46.716000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3134313733313631343731306162383136373432633262356464323832 Dec 16 13:05:46.774423 kernel: audit: type=1327 audit(1765890346.716:592): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3134313733313631343731306162383136373432633262356464323832 Dec 16 13:05:46.716000 audit: BPF prog-id=192 op=LOAD Dec 16 13:05:46.783356 kernel: audit: type=1334 audit(1765890346.716:593): prog-id=192 op=LOAD Dec 16 13:05:46.716000 audit[4658]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4484 pid=4658 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:46.794364 kernel: audit: type=1300 audit(1765890346.716:593): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4484 pid=4658 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:46.716000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3134313733313631343731306162383136373432633262356464323832 Dec 16 13:05:46.716000 audit: BPF prog-id=193 op=LOAD Dec 16 13:05:46.716000 audit[4658]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=4484 pid=4658 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:46.716000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3134313733313631343731306162383136373432633262356464323832 Dec 16 
13:05:46.716000 audit: BPF prog-id=193 op=UNLOAD Dec 16 13:05:46.716000 audit[4658]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4484 pid=4658 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:46.805462 kernel: audit: type=1327 audit(1765890346.716:593): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3134313733313631343731306162383136373432633262356464323832 Dec 16 13:05:46.716000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3134313733313631343731306162383136373432633262356464323832 Dec 16 13:05:46.716000 audit: BPF prog-id=192 op=UNLOAD Dec 16 13:05:46.716000 audit[4658]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4484 pid=4658 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:46.716000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3134313733313631343731306162383136373432633262356464323832 Dec 16 13:05:46.717000 audit: BPF prog-id=194 op=LOAD Dec 16 13:05:46.717000 audit[4658]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=4484 pid=4658 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:46.717000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3134313733313631343731306162383136373432633262356464323832 Dec 16 13:05:46.807157 containerd[2540]: time="2025-12-16T13:05:46.807086997Z" level=info msg="StartContainer for \"141731614710ab816742c2b5dd282611618a2228319f0200d5014639e4a5ac6c\" returns successfully" Dec 16 13:05:46.833367 kubelet[4018]: E1216 13:05:46.833031 4018 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:46.833367 kubelet[4018]: W1216 13:05:46.833055 4018 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:46.833367 kubelet[4018]: E1216 13:05:46.833079 4018 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:05:46.834823 kubelet[4018]: E1216 13:05:46.834583 4018 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:46.834823 kubelet[4018]: W1216 13:05:46.834601 4018 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:46.834823 kubelet[4018]: E1216 13:05:46.834622 4018 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:46.834902 kubelet[4018]: E1216 13:05:46.834885 4018 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:46.834932 kubelet[4018]: W1216 13:05:46.834918 4018 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:46.835167 kubelet[4018]: E1216 13:05:46.835055 4018 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:46.835768 kubelet[4018]: E1216 13:05:46.835750 4018 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:46.835768 kubelet[4018]: W1216 13:05:46.835766 4018 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:46.835875 kubelet[4018]: E1216 13:05:46.835781 4018 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:46.836467 kubelet[4018]: E1216 13:05:46.836446 4018 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:46.836467 kubelet[4018]: W1216 13:05:46.836464 4018 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:46.836573 kubelet[4018]: E1216 13:05:46.836479 4018 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:46.836964 kubelet[4018]: E1216 13:05:46.836947 4018 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:46.836964 kubelet[4018]: W1216 13:05:46.836963 4018 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:46.837035 kubelet[4018]: E1216 13:05:46.836975 4018 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:05:46.837513 kubelet[4018]: E1216 13:05:46.837496 4018 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:46.837513 kubelet[4018]: W1216 13:05:46.837512 4018 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:46.837641 kubelet[4018]: E1216 13:05:46.837524 4018 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:46.838153 kubelet[4018]: E1216 13:05:46.838093 4018 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:46.838741 kubelet[4018]: W1216 13:05:46.838149 4018 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:46.838805 kubelet[4018]: E1216 13:05:46.838747 4018 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:46.838970 kubelet[4018]: E1216 13:05:46.838954 4018 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:46.838970 kubelet[4018]: W1216 13:05:46.838965 4018 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:46.839056 kubelet[4018]: E1216 13:05:46.838974 4018 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:46.839912 kubelet[4018]: E1216 13:05:46.839806 4018 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:46.839912 kubelet[4018]: W1216 13:05:46.839821 4018 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:46.839912 kubelet[4018]: E1216 13:05:46.839835 4018 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:46.840292 kubelet[4018]: E1216 13:05:46.840264 4018 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:46.840663 kubelet[4018]: W1216 13:05:46.840530 4018 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:46.840663 kubelet[4018]: E1216 13:05:46.840548 4018 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:05:46.841308 kubelet[4018]: E1216 13:05:46.841165 4018 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:46.841433 kubelet[4018]: W1216 13:05:46.841421 4018 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:46.841490 kubelet[4018]: E1216 13:05:46.841482 4018 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:46.842208 kubelet[4018]: E1216 13:05:46.842094 4018 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:46.842208 kubelet[4018]: W1216 13:05:46.842112 4018 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:46.842208 kubelet[4018]: E1216 13:05:46.842126 4018 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:46.842381 kubelet[4018]: E1216 13:05:46.842373 4018 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:46.842422 kubelet[4018]: W1216 13:05:46.842414 4018 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:46.842528 kubelet[4018]: E1216 13:05:46.842464 4018 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:46.842611 kubelet[4018]: E1216 13:05:46.842587 4018 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:46.842663 kubelet[4018]: W1216 13:05:46.842657 4018 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:46.842718 kubelet[4018]: E1216 13:05:46.842688 4018 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:46.863475 kubelet[4018]: E1216 13:05:46.863453 4018 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:46.863475 kubelet[4018]: W1216 13:05:46.863475 4018 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:46.863573 kubelet[4018]: E1216 13:05:46.863490 4018 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:05:46.864407 kubelet[4018]: E1216 13:05:46.864379 4018 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:46.864407 kubelet[4018]: W1216 13:05:46.864397 4018 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:46.864407 kubelet[4018]: E1216 13:05:46.864409 4018 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:46.865062 kubelet[4018]: E1216 13:05:46.865043 4018 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:46.865062 kubelet[4018]: W1216 13:05:46.865063 4018 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:46.865141 kubelet[4018]: E1216 13:05:46.865076 4018 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:46.865734 kubelet[4018]: E1216 13:05:46.865712 4018 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:46.865734 kubelet[4018]: W1216 13:05:46.865732 4018 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:46.866207 kubelet[4018]: E1216 13:05:46.866187 4018 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:46.866563 kubelet[4018]: E1216 13:05:46.866546 4018 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:46.866563 kubelet[4018]: W1216 13:05:46.866562 4018 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:46.866679 kubelet[4018]: E1216 13:05:46.866573 4018 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:46.867243 kubelet[4018]: E1216 13:05:46.867229 4018 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:46.867243 kubelet[4018]: W1216 13:05:46.867243 4018 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:46.867353 kubelet[4018]: E1216 13:05:46.867255 4018 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:05:46.867942 kubelet[4018]: E1216 13:05:46.867925 4018 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:46.867942 kubelet[4018]: W1216 13:05:46.867938 4018 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:46.868030 kubelet[4018]: E1216 13:05:46.867950 4018 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[… the same driver-call.go:262 / driver-call.go:149 / plugins.go:697 triple repeats verbatim, differing only in timestamps, through Dec 16 13:05:46.872843 …]
Dec 16 13:05:47.813247 kubelet[4018]: I1216 13:05:47.813211 4018 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Dec 16 13:05:47.850072 kubelet[4018]: E1216 13:05:47.850040 4018 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:47.850072 kubelet[4018]: W1216 13:05:47.850061 4018 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:47.850330 kubelet[4018]: E1216 13:05:47.850082 4018 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[… the triple repeats verbatim again, timestamps Dec 16 13:05:47.850198 through Dec 16 13:05:47.875447 …]
Dec 16 13:05:47.875608 kubelet[4018]: E1216 13:05:47.875579 4018 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:47.875608 kubelet[4018]: W1216 13:05:47.875605 4018 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:47.875708 kubelet[4018]: E1216 13:05:47.875612 4018 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:05:48.146392 containerd[2540]: time="2025-12-16T13:05:48.144746349Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:05:48.148258 containerd[2540]: time="2025-12-16T13:05:48.148227113Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=0" Dec 16 13:05:48.152335 containerd[2540]: time="2025-12-16T13:05:48.152284697Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:05:48.157834 containerd[2540]: time="2025-12-16T13:05:48.157779987Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:05:48.158295 containerd[2540]: time="2025-12-16T13:05:48.158162841Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.558427857s" Dec 16 13:05:48.158295 containerd[2540]: time="2025-12-16T13:05:48.158195810Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Dec 16 13:05:48.166913 containerd[2540]: time="2025-12-16T13:05:48.166880992Z" level=info msg="CreateContainer within sandbox \"a9bf4258a59d2691c327c6bfa2ed84410b663bbe3142c5e898da5587a6eaf1b1\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Dec 16 13:05:48.196315 containerd[2540]: time="2025-12-16T13:05:48.194819039Z" level=info msg="Container 07fc15a55c6518a1aefedf553a97896db1ffb0db94554c583ae906559ec1b9de: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:05:48.224354 containerd[2540]: time="2025-12-16T13:05:48.224311670Z" level=info msg="CreateContainer within sandbox \"a9bf4258a59d2691c327c6bfa2ed84410b663bbe3142c5e898da5587a6eaf1b1\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"07fc15a55c6518a1aefedf553a97896db1ffb0db94554c583ae906559ec1b9de\"" Dec 16 13:05:48.224791 containerd[2540]: time="2025-12-16T13:05:48.224755929Z" level=info msg="StartContainer for \"07fc15a55c6518a1aefedf553a97896db1ffb0db94554c583ae906559ec1b9de\"" Dec 16 13:05:48.226520 containerd[2540]: time="2025-12-16T13:05:48.226490233Z" level=info msg="connecting to shim 07fc15a55c6518a1aefedf553a97896db1ffb0db94554c583ae906559ec1b9de" address="unix:///run/containerd/s/30bc6743cbb383c683f3fce89c5b19d52e404fe88f60846b755da225539a8068" protocol=ttrpc version=3 Dec 16 13:05:48.246901 systemd[1]: Started cri-containerd-07fc15a55c6518a1aefedf553a97896db1ffb0db94554c583ae906559ec1b9de.scope - libcontainer container 07fc15a55c6518a1aefedf553a97896db1ffb0db94554c583ae906559ec1b9de. 
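The repeated driver-call.go/plugins.go errors above are the kubelet's FlexVolume probe: it executes /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the single argument init and tries to unmarshal whatever the binary prints as JSON; since the binary is not there yet, stdout is empty and decoding fails with "unexpected end of JSON input". Judging by the image and directory names (this linkage is an inference, not something the log states), the flexvol-driver container created from pod2daemon-flexvol above is what installs that uds binary. As a reference point, here is a minimal Go sketch of the init contract the probe is looking for; the field names follow the general FlexVolume convention and are an assumption, not Calico's actual driver output:

```go
// Minimal sketch (not Calico's uds driver) of the FlexVolume "init" contract
// the kubelet's driver-call.go is probing for above: the driver binary is
// invoked as `<driver> init` and must print a JSON status object on stdout.
// An empty stdout is exactly what yields "unexpected end of JSON input".
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

type driverStatus struct {
	Status       string          `json:"status"`                 // "Success", "Failure" or "Not supported"
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"` // e.g. {"attach": false}
}

func main() {
	if len(os.Args) < 2 {
		fmt.Fprintln(os.Stderr, "usage: driver <init|mount|unmount|...> [args]")
		os.Exit(1)
	}
	switch os.Args[1] {
	case "init":
		// Report success and declare that no controller-side attach is needed.
		out, _ := json.Marshal(driverStatus{Status: "Success", Capabilities: map[string]bool{"attach": false}})
		fmt.Println(string(out))
	default:
		out, _ := json.Marshal(driverStatus{Status: "Not supported"})
		fmt.Println(string(out))
	}
}
```

Once a binary that answers init this way exists at the probed path, the probe errors stop; until then they recur on every plugin re-probe.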
Dec 16 13:05:48.275000 audit: BPF prog-id=195 op=LOAD Dec 16 13:05:48.275000 audit[4766]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000178488 a2=98 a3=0 items=0 ppid=4607 pid=4766 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:48.275000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3037666331356135356336353138613161656665646635353361393738 Dec 16 13:05:48.276000 audit: BPF prog-id=196 op=LOAD Dec 16 13:05:48.276000 audit[4766]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000178218 a2=98 a3=0 items=0 ppid=4607 pid=4766 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:48.276000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3037666331356135356336353138613161656665646635353361393738 Dec 16 13:05:48.276000 audit: BPF prog-id=196 op=UNLOAD Dec 16 13:05:48.276000 audit[4766]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4607 pid=4766 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:48.276000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3037666331356135356336353138613161656665646635353361393738 Dec 16 13:05:48.276000 audit: BPF prog-id=195 op=UNLOAD Dec 16 13:05:48.276000 audit[4766]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4607 pid=4766 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:48.276000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3037666331356135356336353138613161656665646635353361393738 Dec 16 13:05:48.276000 audit: BPF prog-id=197 op=LOAD Dec 16 13:05:48.276000 audit[4766]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001786e8 a2=98 a3=0 items=0 ppid=4607 pid=4766 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:48.276000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3037666331356135356336353138613161656665646635353361393738 Dec 16 13:05:48.313900 containerd[2540]: time="2025-12-16T13:05:48.313736070Z" level=info msg="StartContainer for 
\"07fc15a55c6518a1aefedf553a97896db1ffb0db94554c583ae906559ec1b9de\" returns successfully" Dec 16 13:05:48.316026 systemd[1]: cri-containerd-07fc15a55c6518a1aefedf553a97896db1ffb0db94554c583ae906559ec1b9de.scope: Deactivated successfully. Dec 16 13:05:48.320911 containerd[2540]: time="2025-12-16T13:05:48.320491854Z" level=info msg="received container exit event container_id:\"07fc15a55c6518a1aefedf553a97896db1ffb0db94554c583ae906559ec1b9de\" id:\"07fc15a55c6518a1aefedf553a97896db1ffb0db94554c583ae906559ec1b9de\" pid:4778 exited_at:{seconds:1765890348 nanos:319084383}" Dec 16 13:05:48.319000 audit: BPF prog-id=197 op=UNLOAD Dec 16 13:05:48.344895 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-07fc15a55c6518a1aefedf553a97896db1ffb0db94554c583ae906559ec1b9de-rootfs.mount: Deactivated successfully. Dec 16 13:05:48.708546 kubelet[4018]: E1216 13:05:48.708472 4018 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ctchn" podUID="940a093b-83dc-454c-8522-5e1b1f40521f" Dec 16 13:05:48.838250 kubelet[4018]: I1216 13:05:48.838181 4018 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6957547d5b-bnp7w" podStartSLOduration=3.007148009 podStartE2EDuration="4.838154758s" podCreationTimestamp="2025-12-16 13:05:44 +0000 UTC" firstStartedPulling="2025-12-16 13:05:44.768492562 +0000 UTC m=+21.167458261" lastFinishedPulling="2025-12-16 13:05:46.599499301 +0000 UTC m=+22.998465010" observedRunningTime="2025-12-16 13:05:46.832306398 +0000 UTC m=+23.231272112" watchObservedRunningTime="2025-12-16 13:05:48.838154758 +0000 UTC m=+25.237120462" Dec 16 13:05:49.824792 containerd[2540]: time="2025-12-16T13:05:49.824741011Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Dec 16 13:05:50.040203 kubelet[4018]: I1216 13:05:50.040039 4018 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 16 13:05:50.067000 audit[4817]: NETFILTER_CFG table=filter:118 family=2 entries=21 op=nft_register_rule pid=4817 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:05:50.067000 audit[4817]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffd2f6de920 a2=0 a3=7ffd2f6de90c items=0 ppid=4127 pid=4817 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:50.067000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:05:50.072000 audit[4817]: NETFILTER_CFG table=nat:119 family=2 entries=19 op=nft_register_chain pid=4817 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:05:50.072000 audit[4817]: SYSCALL arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffd2f6de920 a2=0 a3=7ffd2f6de90c items=0 ppid=4127 pid=4817 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:50.072000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:05:50.709404 kubelet[4018]: E1216 13:05:50.709266 4018 pod_workers.go:1324] "Error syncing pod, 
skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ctchn" podUID="940a093b-83dc-454c-8522-5e1b1f40521f" Dec 16 13:05:52.708639 kubelet[4018]: E1216 13:05:52.708580 4018 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ctchn" podUID="940a093b-83dc-454c-8522-5e1b1f40521f" Dec 16 13:05:53.671788 containerd[2540]: time="2025-12-16T13:05:53.671731149Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:05:53.674846 containerd[2540]: time="2025-12-16T13:05:53.674812499Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70442291" Dec 16 13:05:53.683269 containerd[2540]: time="2025-12-16T13:05:53.683234361Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:05:53.688205 containerd[2540]: time="2025-12-16T13:05:53.688131223Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:05:53.688964 containerd[2540]: time="2025-12-16T13:05:53.688623226Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 3.863825478s" Dec 16 13:05:53.688964 containerd[2540]: time="2025-12-16T13:05:53.688656018Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Dec 16 13:05:53.696957 containerd[2540]: time="2025-12-16T13:05:53.696925287Z" level=info msg="CreateContainer within sandbox \"a9bf4258a59d2691c327c6bfa2ed84410b663bbe3142c5e898da5587a6eaf1b1\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Dec 16 13:05:53.725362 containerd[2540]: time="2025-12-16T13:05:53.725199966Z" level=info msg="Container e469dd3a24c304b1dfa722ca5904a774f6e17212354e54e9d3f2775067bf00ad: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:05:53.752503 containerd[2540]: time="2025-12-16T13:05:53.752477256Z" level=info msg="CreateContainer within sandbox \"a9bf4258a59d2691c327c6bfa2ed84410b663bbe3142c5e898da5587a6eaf1b1\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"e469dd3a24c304b1dfa722ca5904a774f6e17212354e54e9d3f2775067bf00ad\"" Dec 16 13:05:53.753056 containerd[2540]: time="2025-12-16T13:05:53.753016705Z" level=info msg="StartContainer for \"e469dd3a24c304b1dfa722ca5904a774f6e17212354e54e9d3f2775067bf00ad\"" Dec 16 13:05:53.754428 containerd[2540]: time="2025-12-16T13:05:53.754400001Z" level=info msg="connecting to shim e469dd3a24c304b1dfa722ca5904a774f6e17212354e54e9d3f2775067bf00ad" 
address="unix:///run/containerd/s/30bc6743cbb383c683f3fce89c5b19d52e404fe88f60846b755da225539a8068" protocol=ttrpc version=3 Dec 16 13:05:53.782549 systemd[1]: Started cri-containerd-e469dd3a24c304b1dfa722ca5904a774f6e17212354e54e9d3f2775067bf00ad.scope - libcontainer container e469dd3a24c304b1dfa722ca5904a774f6e17212354e54e9d3f2775067bf00ad. Dec 16 13:05:53.829000 audit: BPF prog-id=198 op=LOAD Dec 16 13:05:53.834058 kernel: kauditd_printk_skb: 34 callbacks suppressed Dec 16 13:05:53.834138 kernel: audit: type=1334 audit(1765890353.829:606): prog-id=198 op=LOAD Dec 16 13:05:53.838751 kernel: audit: type=1300 audit(1765890353.829:606): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000178488 a2=98 a3=0 items=0 ppid=4607 pid=4826 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:53.829000 audit[4826]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000178488 a2=98 a3=0 items=0 ppid=4607 pid=4826 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:53.829000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6534363964643361323463333034623164666137323263613539303461 Dec 16 13:05:53.845909 kernel: audit: type=1327 audit(1765890353.829:606): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6534363964643361323463333034623164666137323263613539303461 Dec 16 13:05:53.845989 kernel: audit: type=1334 audit(1765890353.829:607): prog-id=199 op=LOAD Dec 16 13:05:53.829000 audit: BPF prog-id=199 op=LOAD Dec 16 13:05:53.829000 audit[4826]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000178218 a2=98 a3=0 items=0 ppid=4607 pid=4826 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:53.853456 kernel: audit: type=1300 audit(1765890353.829:607): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000178218 a2=98 a3=0 items=0 ppid=4607 pid=4826 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:53.854335 kernel: audit: type=1327 audit(1765890353.829:607): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6534363964643361323463333034623164666137323263613539303461 Dec 16 13:05:53.829000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6534363964643361323463333034623164666137323263613539303461 Dec 16 13:05:53.829000 audit: BPF prog-id=199 op=UNLOAD Dec 16 13:05:53.860938 kernel: audit: type=1334 audit(1765890353.829:608): prog-id=199 op=UNLOAD Dec 16 13:05:53.861150 kernel: 
audit: type=1300 audit(1765890353.829:608): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4607 pid=4826 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:53.829000 audit[4826]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4607 pid=4826 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:53.829000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6534363964643361323463333034623164666137323263613539303461 Dec 16 13:05:53.866189 kernel: audit: type=1327 audit(1765890353.829:608): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6534363964643361323463333034623164666137323263613539303461 Dec 16 13:05:53.829000 audit: BPF prog-id=198 op=UNLOAD Dec 16 13:05:53.868475 kernel: audit: type=1334 audit(1765890353.829:609): prog-id=198 op=UNLOAD Dec 16 13:05:53.829000 audit[4826]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4607 pid=4826 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:53.829000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6534363964643361323463333034623164666137323263613539303461 Dec 16 13:05:53.829000 audit: BPF prog-id=200 op=LOAD Dec 16 13:05:53.829000 audit[4826]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001786e8 a2=98 a3=0 items=0 ppid=4607 pid=4826 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:53.829000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6534363964643361323463333034623164666137323263613539303461 Dec 16 13:05:53.909535 containerd[2540]: time="2025-12-16T13:05:53.909419623Z" level=info msg="StartContainer for \"e469dd3a24c304b1dfa722ca5904a774f6e17212354e54e9d3f2775067bf00ad\" returns successfully" Dec 16 13:05:54.708917 kubelet[4018]: E1216 13:05:54.708702 4018 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ctchn" podUID="940a093b-83dc-454c-8522-5e1b1f40521f" Dec 16 13:05:55.156918 systemd[1]: cri-containerd-e469dd3a24c304b1dfa722ca5904a774f6e17212354e54e9d3f2775067bf00ad.scope: Deactivated successfully. 
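The audit records around each container start (type=1334 BPF LOAD/UNLOAD, type=1300 SYSCALL, type=1327 PROCTITLE) all belong to runc setting the container up; the PROCTITLE value is the process command line, hex-encoded with NUL-separated arguments. A small decoding sketch, with the hex prefix copied verbatim from the records above:

```go
// Illustrative helper for reading the PROCTITLE audit records above:
// the value is the command line, hex-encoded, arguments separated by NUL bytes.
package main

import (
	"encoding/hex"
	"fmt"
	"strings"
)

func decodeProctitle(h string) ([]string, error) {
	raw, err := hex.DecodeString(h)
	if err != nil {
		return nil, err
	}
	return strings.Split(strings.TrimRight(string(raw), "\x00"), "\x00"), nil
}

func main() {
	// Prefix of one PROCTITLE value from the log; it decodes to
	// ["runc", "--root", "/run/containerd/runc/k8s.io"].
	args, err := decodeProctitle("72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F")
	if err != nil {
		panic(err)
	}
	fmt.Println(args)
}
```

Decoding the full (truncated-by-auditd) value recovers runc --root /run/containerd/runc/k8s.io --log /run/containerd/io.containerd.runtime.v2.task/k8s.io/<container-id prefix>…, which is what ties each audit burst to the container being started.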
Dec 16 13:05:55.157419 systemd[1]: cri-containerd-e469dd3a24c304b1dfa722ca5904a774f6e17212354e54e9d3f2775067bf00ad.scope: Consumed 448ms CPU time, 190.4M memory peak, 171.3M written to disk. Dec 16 13:05:55.159377 containerd[2540]: time="2025-12-16T13:05:55.159292410Z" level=info msg="received container exit event container_id:\"e469dd3a24c304b1dfa722ca5904a774f6e17212354e54e9d3f2775067bf00ad\" id:\"e469dd3a24c304b1dfa722ca5904a774f6e17212354e54e9d3f2775067bf00ad\" pid:4839 exited_at:{seconds:1765890355 nanos:159029216}" Dec 16 13:05:55.159000 audit: BPF prog-id=200 op=UNLOAD Dec 16 13:05:55.182214 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e469dd3a24c304b1dfa722ca5904a774f6e17212354e54e9d3f2775067bf00ad-rootfs.mount: Deactivated successfully. Dec 16 13:05:55.228152 kubelet[4018]: I1216 13:05:55.228130 4018 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Dec 16 13:05:55.657106 systemd[1]: Created slice kubepods-burstable-podc46c1cd1_ee4e_4f44_a196_4c7989633db4.slice - libcontainer container kubepods-burstable-podc46c1cd1_ee4e_4f44_a196_4c7989633db4.slice. Dec 16 13:05:55.722955 kubelet[4018]: I1216 13:05:55.722912 4018 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5ld2s\" (UniqueName: \"kubernetes.io/projected/c46c1cd1-ee4e-4f44-a196-4c7989633db4-kube-api-access-5ld2s\") pod \"coredns-66bc5c9577-22fpt\" (UID: \"c46c1cd1-ee4e-4f44-a196-4c7989633db4\") " pod="kube-system/coredns-66bc5c9577-22fpt" Dec 16 13:05:55.722955 kubelet[4018]: I1216 13:05:55.722957 4018 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c46c1cd1-ee4e-4f44-a196-4c7989633db4-config-volume\") pod \"coredns-66bc5c9577-22fpt\" (UID: \"c46c1cd1-ee4e-4f44-a196-4c7989633db4\") " pod="kube-system/coredns-66bc5c9577-22fpt" Dec 16 13:05:55.947225 systemd[1]: Created slice kubepods-burstable-pode4c0041a_7552_4196_b88d_cb0c0c25e0f7.slice - libcontainer container kubepods-burstable-pode4c0041a_7552_4196_b88d_cb0c0c25e0f7.slice. Dec 16 13:05:56.025049 kubelet[4018]: I1216 13:05:56.025016 4018 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e4c0041a-7552-4196-b88d-cb0c0c25e0f7-config-volume\") pod \"coredns-66bc5c9577-wcczd\" (UID: \"e4c0041a-7552-4196-b88d-cb0c0c25e0f7\") " pod="kube-system/coredns-66bc5c9577-wcczd" Dec 16 13:05:56.025049 kubelet[4018]: I1216 13:05:56.025051 4018 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cnttn\" (UniqueName: \"kubernetes.io/projected/e4c0041a-7552-4196-b88d-cb0c0c25e0f7-kube-api-access-cnttn\") pod \"coredns-66bc5c9577-wcczd\" (UID: \"e4c0041a-7552-4196-b88d-cb0c0c25e0f7\") " pod="kube-system/coredns-66bc5c9577-wcczd" Dec 16 13:05:56.050409 containerd[2540]: time="2025-12-16T13:05:56.050362609Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-22fpt,Uid:c46c1cd1-ee4e-4f44-a196-4c7989633db4,Namespace:kube-system,Attempt:0,}" Dec 16 13:05:56.061727 systemd[1]: Created slice kubepods-besteffort-pod021bd40b_8387_4f81_8ec5_64b895deb3c2.slice - libcontainer container kubepods-besteffort-pod021bd40b_8387_4f81_8ec5_64b895deb3c2.slice. 
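The kubelet's recurring "NetworkReady=false … cni plugin not initialized" condition appears to clear right after the install-cni container above exits (the node status flips to ready at 13:05:55.228): containerd's CRI plugin reports the network as ready once a CNI network config appears in its config directory. Both the conventional path /etc/cni/net.d and the assumption that install-cni is what writes the config there are inferences from the image name, not stated in the log. A quick hand-check, as a sketch:

```go
// Sketch: check whether a CNI network config exists where containerd's CRI
// plugin conventionally looks for one (/etc/cni/net.d is an assumed default;
// confirm against the containerd config on this host).
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	dir := "/etc/cni/net.d"
	entries, err := os.ReadDir(dir)
	if err != nil {
		fmt.Printf("no CNI config dir yet: %v\n", err)
		return
	}
	found := false
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			fmt.Println("CNI config present:", filepath.Join(dir, e.Name()))
			found = true
		}
	}
	if !found {
		fmt.Println("config dir exists but holds no network config; NetworkReady will stay false")
	}
}
```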
Dec 16 13:05:56.080085 systemd[1]: Created slice kubepods-besteffort-podec8a48b8_a266_4333_abae_c471e3ab42b1.slice - libcontainer container kubepods-besteffort-podec8a48b8_a266_4333_abae_c471e3ab42b1.slice. Dec 16 13:05:56.090573 systemd[1]: Created slice kubepods-besteffort-podd35c67aa_255b_42a2_83b2_79e30256e265.slice - libcontainer container kubepods-besteffort-podd35c67aa_255b_42a2_83b2_79e30256e265.slice. Dec 16 13:05:56.106524 systemd[1]: Created slice kubepods-besteffort-podfa544c8c_af21_41f2_8ffb_1fe7c36b0bfb.slice - libcontainer container kubepods-besteffort-podfa544c8c_af21_41f2_8ffb_1fe7c36b0bfb.slice. Dec 16 13:05:56.121749 systemd[1]: Created slice kubepods-besteffort-pod02113441_a531_45ff_9a40_51f9ff37eeb2.slice - libcontainer container kubepods-besteffort-pod02113441_a531_45ff_9a40_51f9ff37eeb2.slice. Dec 16 13:05:56.126353 kubelet[4018]: I1216 13:05:56.126272 4018 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/02113441-a531-45ff-9a40-51f9ff37eeb2-goldmane-key-pair\") pod \"goldmane-7c778bb748-kffxh\" (UID: \"02113441-a531-45ff-9a40-51f9ff37eeb2\") " pod="calico-system/goldmane-7c778bb748-kffxh" Dec 16 13:05:56.126455 kubelet[4018]: I1216 13:05:56.126327 4018 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/fa544c8c-af21-41f2-8ffb-1fe7c36b0bfb-calico-apiserver-certs\") pod \"calico-apiserver-7798f6444b-zjrsf\" (UID: \"fa544c8c-af21-41f2-8ffb-1fe7c36b0bfb\") " pod="calico-apiserver/calico-apiserver-7798f6444b-zjrsf" Dec 16 13:05:56.127049 kubelet[4018]: I1216 13:05:56.126512 4018 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r26j4\" (UniqueName: \"kubernetes.io/projected/02113441-a531-45ff-9a40-51f9ff37eeb2-kube-api-access-r26j4\") pod \"goldmane-7c778bb748-kffxh\" (UID: \"02113441-a531-45ff-9a40-51f9ff37eeb2\") " pod="calico-system/goldmane-7c778bb748-kffxh" Dec 16 13:05:56.127049 kubelet[4018]: I1216 13:05:56.126535 4018 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2msg6\" (UniqueName: \"kubernetes.io/projected/d35c67aa-255b-42a2-83b2-79e30256e265-kube-api-access-2msg6\") pod \"calico-apiserver-7798f6444b-p9dhf\" (UID: \"d35c67aa-255b-42a2-83b2-79e30256e265\") " pod="calico-apiserver/calico-apiserver-7798f6444b-p9dhf" Dec 16 13:05:56.127049 kubelet[4018]: I1216 13:05:56.126573 4018 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fbzdx\" (UniqueName: \"kubernetes.io/projected/021bd40b-8387-4f81-8ec5-64b895deb3c2-kube-api-access-fbzdx\") pod \"calico-kube-controllers-8c4454f6d-fzx24\" (UID: \"021bd40b-8387-4f81-8ec5-64b895deb3c2\") " pod="calico-system/calico-kube-controllers-8c4454f6d-fzx24" Dec 16 13:05:56.127049 kubelet[4018]: I1216 13:05:56.126687 4018 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02113441-a531-45ff-9a40-51f9ff37eeb2-config\") pod \"goldmane-7c778bb748-kffxh\" (UID: \"02113441-a531-45ff-9a40-51f9ff37eeb2\") " pod="calico-system/goldmane-7c778bb748-kffxh" Dec 16 13:05:56.127049 kubelet[4018]: I1216 13:05:56.126714 4018 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/02113441-a531-45ff-9a40-51f9ff37eeb2-goldmane-ca-bundle\") pod \"goldmane-7c778bb748-kffxh\" (UID: \"02113441-a531-45ff-9a40-51f9ff37eeb2\") " pod="calico-system/goldmane-7c778bb748-kffxh" Dec 16 13:05:56.127221 kubelet[4018]: I1216 13:05:56.127107 4018 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kk98v\" (UniqueName: \"kubernetes.io/projected/fa544c8c-af21-41f2-8ffb-1fe7c36b0bfb-kube-api-access-kk98v\") pod \"calico-apiserver-7798f6444b-zjrsf\" (UID: \"fa544c8c-af21-41f2-8ffb-1fe7c36b0bfb\") " pod="calico-apiserver/calico-apiserver-7798f6444b-zjrsf" Dec 16 13:05:56.127221 kubelet[4018]: I1216 13:05:56.127139 4018 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/d35c67aa-255b-42a2-83b2-79e30256e265-calico-apiserver-certs\") pod \"calico-apiserver-7798f6444b-p9dhf\" (UID: \"d35c67aa-255b-42a2-83b2-79e30256e265\") " pod="calico-apiserver/calico-apiserver-7798f6444b-p9dhf" Dec 16 13:05:56.127289 kubelet[4018]: I1216 13:05:56.127275 4018 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/021bd40b-8387-4f81-8ec5-64b895deb3c2-tigera-ca-bundle\") pod \"calico-kube-controllers-8c4454f6d-fzx24\" (UID: \"021bd40b-8387-4f81-8ec5-64b895deb3c2\") " pod="calico-system/calico-kube-controllers-8c4454f6d-fzx24" Dec 16 13:05:56.127317 kubelet[4018]: I1216 13:05:56.127300 4018 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/ec8a48b8-a266-4333-abae-c471e3ab42b1-whisker-backend-key-pair\") pod \"whisker-b8f45d798-6gv9n\" (UID: \"ec8a48b8-a266-4333-abae-c471e3ab42b1\") " pod="calico-system/whisker-b8f45d798-6gv9n" Dec 16 13:05:56.128467 kubelet[4018]: I1216 13:05:56.128440 4018 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ec8a48b8-a266-4333-abae-c471e3ab42b1-whisker-ca-bundle\") pod \"whisker-b8f45d798-6gv9n\" (UID: \"ec8a48b8-a266-4333-abae-c471e3ab42b1\") " pod="calico-system/whisker-b8f45d798-6gv9n" Dec 16 13:05:56.128641 kubelet[4018]: I1216 13:05:56.128620 4018 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2j6z4\" (UniqueName: \"kubernetes.io/projected/ec8a48b8-a266-4333-abae-c471e3ab42b1-kube-api-access-2j6z4\") pod \"whisker-b8f45d798-6gv9n\" (UID: \"ec8a48b8-a266-4333-abae-c471e3ab42b1\") " pod="calico-system/whisker-b8f45d798-6gv9n" Dec 16 13:05:56.173409 containerd[2540]: time="2025-12-16T13:05:56.173324603Z" level=error msg="Failed to destroy network for sandbox \"46efbe17cce0e26c286086c678d18a0ae265820b2f2128d0d62f914a5909f740\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:05:56.175957 systemd[1]: run-netns-cni\x2dfc070b3f\x2d98fa\x2d8c94\x2d4fac\x2d9de7f7c6bdca.mount: Deactivated successfully. 
Dec 16 13:05:56.189578 containerd[2540]: time="2025-12-16T13:05:56.189535152Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-22fpt,Uid:c46c1cd1-ee4e-4f44-a196-4c7989633db4,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"46efbe17cce0e26c286086c678d18a0ae265820b2f2128d0d62f914a5909f740\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:05:56.189819 kubelet[4018]: E1216 13:05:56.189784 4018 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"46efbe17cce0e26c286086c678d18a0ae265820b2f2128d0d62f914a5909f740\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:05:56.189863 kubelet[4018]: E1216 13:05:56.189852 4018 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"46efbe17cce0e26c286086c678d18a0ae265820b2f2128d0d62f914a5909f740\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-22fpt" Dec 16 13:05:56.189899 kubelet[4018]: E1216 13:05:56.189872 4018 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"46efbe17cce0e26c286086c678d18a0ae265820b2f2128d0d62f914a5909f740\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-22fpt" Dec 16 13:05:56.189952 kubelet[4018]: E1216 13:05:56.189930 4018 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-22fpt_kube-system(c46c1cd1-ee4e-4f44-a196-4c7989633db4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-22fpt_kube-system(c46c1cd1-ee4e-4f44-a196-4c7989633db4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"46efbe17cce0e26c286086c678d18a0ae265820b2f2128d0d62f914a5909f740\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-22fpt" podUID="c46c1cd1-ee4e-4f44-a196-4c7989633db4" Dec 16 13:05:56.274779 containerd[2540]: time="2025-12-16T13:05:56.274544053Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-wcczd,Uid:e4c0041a-7552-4196-b88d-cb0c0c25e0f7,Namespace:kube-system,Attempt:0,}" Dec 16 13:05:56.323203 containerd[2540]: time="2025-12-16T13:05:56.323162025Z" level=error msg="Failed to destroy network for sandbox \"a2af74c73f2644c30670c03881f0841101e009386b227ca294a47a8a06b4419c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:05:56.331575 containerd[2540]: time="2025-12-16T13:05:56.331533826Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-66bc5c9577-wcczd,Uid:e4c0041a-7552-4196-b88d-cb0c0c25e0f7,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a2af74c73f2644c30670c03881f0841101e009386b227ca294a47a8a06b4419c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:05:56.331755 kubelet[4018]: E1216 13:05:56.331721 4018 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a2af74c73f2644c30670c03881f0841101e009386b227ca294a47a8a06b4419c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:05:56.331804 kubelet[4018]: E1216 13:05:56.331775 4018 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a2af74c73f2644c30670c03881f0841101e009386b227ca294a47a8a06b4419c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-wcczd" Dec 16 13:05:56.331804 kubelet[4018]: E1216 13:05:56.331794 4018 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a2af74c73f2644c30670c03881f0841101e009386b227ca294a47a8a06b4419c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-wcczd" Dec 16 13:05:56.331879 kubelet[4018]: E1216 13:05:56.331856 4018 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-wcczd_kube-system(e4c0041a-7552-4196-b88d-cb0c0c25e0f7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-wcczd_kube-system(e4c0041a-7552-4196-b88d-cb0c0c25e0f7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a2af74c73f2644c30670c03881f0841101e009386b227ca294a47a8a06b4419c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-wcczd" podUID="e4c0041a-7552-4196-b88d-cb0c0c25e0f7" Dec 16 13:05:56.378219 containerd[2540]: time="2025-12-16T13:05:56.378190218Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8c4454f6d-fzx24,Uid:021bd40b-8387-4f81-8ec5-64b895deb3c2,Namespace:calico-system,Attempt:0,}" Dec 16 13:05:56.403380 containerd[2540]: time="2025-12-16T13:05:56.402989920Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7798f6444b-p9dhf,Uid:d35c67aa-255b-42a2-83b2-79e30256e265,Namespace:calico-apiserver,Attempt:0,}" Dec 16 13:05:56.409025 containerd[2540]: time="2025-12-16T13:05:56.408998894Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-b8f45d798-6gv9n,Uid:ec8a48b8-a266-4333-abae-c471e3ab42b1,Namespace:calico-system,Attempt:0,}" Dec 16 13:05:56.426924 containerd[2540]: time="2025-12-16T13:05:56.426877613Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-7798f6444b-zjrsf,Uid:fa544c8c-af21-41f2-8ffb-1fe7c36b0bfb,Namespace:calico-apiserver,Attempt:0,}" Dec 16 13:05:56.437199 containerd[2540]: time="2025-12-16T13:05:56.437174402Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-kffxh,Uid:02113441-a531-45ff-9a40-51f9ff37eeb2,Namespace:calico-system,Attempt:0,}" Dec 16 13:05:56.440496 containerd[2540]: time="2025-12-16T13:05:56.440447797Z" level=error msg="Failed to destroy network for sandbox \"1a8ddc604baca58807d20d888d9c92054985bda10c9187f06a74e186e3755795\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:05:56.498543 containerd[2540]: time="2025-12-16T13:05:56.498409813Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8c4454f6d-fzx24,Uid:021bd40b-8387-4f81-8ec5-64b895deb3c2,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1a8ddc604baca58807d20d888d9c92054985bda10c9187f06a74e186e3755795\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:05:56.498727 kubelet[4018]: E1216 13:05:56.498655 4018 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1a8ddc604baca58807d20d888d9c92054985bda10c9187f06a74e186e3755795\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:05:56.498727 kubelet[4018]: E1216 13:05:56.498711 4018 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1a8ddc604baca58807d20d888d9c92054985bda10c9187f06a74e186e3755795\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-8c4454f6d-fzx24" Dec 16 13:05:56.498807 kubelet[4018]: E1216 13:05:56.498731 4018 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1a8ddc604baca58807d20d888d9c92054985bda10c9187f06a74e186e3755795\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-8c4454f6d-fzx24" Dec 16 13:05:56.498807 kubelet[4018]: E1216 13:05:56.498787 4018 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-8c4454f6d-fzx24_calico-system(021bd40b-8387-4f81-8ec5-64b895deb3c2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-8c4454f6d-fzx24_calico-system(021bd40b-8387-4f81-8ec5-64b895deb3c2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1a8ddc604baca58807d20d888d9c92054985bda10c9187f06a74e186e3755795\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="calico-system/calico-kube-controllers-8c4454f6d-fzx24" podUID="021bd40b-8387-4f81-8ec5-64b895deb3c2" Dec 16 13:05:56.515319 containerd[2540]: time="2025-12-16T13:05:56.515226486Z" level=error msg="Failed to destroy network for sandbox \"22c096590be182a842027bc23b0a614839e8b2e0cefddc6eec7815b2e4a6d837\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:05:56.522850 containerd[2540]: time="2025-12-16T13:05:56.522748298Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7798f6444b-p9dhf,Uid:d35c67aa-255b-42a2-83b2-79e30256e265,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"22c096590be182a842027bc23b0a614839e8b2e0cefddc6eec7815b2e4a6d837\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:05:56.522974 kubelet[4018]: E1216 13:05:56.522945 4018 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"22c096590be182a842027bc23b0a614839e8b2e0cefddc6eec7815b2e4a6d837\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:05:56.523022 kubelet[4018]: E1216 13:05:56.522989 4018 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"22c096590be182a842027bc23b0a614839e8b2e0cefddc6eec7815b2e4a6d837\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7798f6444b-p9dhf" Dec 16 13:05:56.523052 kubelet[4018]: E1216 13:05:56.523007 4018 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"22c096590be182a842027bc23b0a614839e8b2e0cefddc6eec7815b2e4a6d837\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7798f6444b-p9dhf" Dec 16 13:05:56.523353 kubelet[4018]: E1216 13:05:56.523072 4018 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7798f6444b-p9dhf_calico-apiserver(d35c67aa-255b-42a2-83b2-79e30256e265)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7798f6444b-p9dhf_calico-apiserver(d35c67aa-255b-42a2-83b2-79e30256e265)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"22c096590be182a842027bc23b0a614839e8b2e0cefddc6eec7815b2e4a6d837\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7798f6444b-p9dhf" podUID="d35c67aa-255b-42a2-83b2-79e30256e265" Dec 16 13:05:56.551504 containerd[2540]: time="2025-12-16T13:05:56.550721553Z" level=error msg="Failed to destroy network for sandbox 
\"01310c8d9cd862bc26f0a34d6dd65deaa716217d083074ae7320d171791d289b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:05:56.563851 containerd[2540]: time="2025-12-16T13:05:56.563813311Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-kffxh,Uid:02113441-a531-45ff-9a40-51f9ff37eeb2,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"01310c8d9cd862bc26f0a34d6dd65deaa716217d083074ae7320d171791d289b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:05:56.564144 kubelet[4018]: E1216 13:05:56.564120 4018 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"01310c8d9cd862bc26f0a34d6dd65deaa716217d083074ae7320d171791d289b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:05:56.564209 kubelet[4018]: E1216 13:05:56.564164 4018 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"01310c8d9cd862bc26f0a34d6dd65deaa716217d083074ae7320d171791d289b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-kffxh" Dec 16 13:05:56.564209 kubelet[4018]: E1216 13:05:56.564182 4018 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"01310c8d9cd862bc26f0a34d6dd65deaa716217d083074ae7320d171791d289b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-kffxh" Dec 16 13:05:56.564258 kubelet[4018]: E1216 13:05:56.564236 4018 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7c778bb748-kffxh_calico-system(02113441-a531-45ff-9a40-51f9ff37eeb2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7c778bb748-kffxh_calico-system(02113441-a531-45ff-9a40-51f9ff37eeb2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"01310c8d9cd862bc26f0a34d6dd65deaa716217d083074ae7320d171791d289b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-kffxh" podUID="02113441-a531-45ff-9a40-51f9ff37eeb2" Dec 16 13:05:56.564962 containerd[2540]: time="2025-12-16T13:05:56.564861736Z" level=error msg="Failed to destroy network for sandbox \"95673f9306fbc995f67824f17f5dd2ac993e0aaf52534b6c103bf6bf2b554446\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:05:56.572524 containerd[2540]: time="2025-12-16T13:05:56.572430576Z" level=error 
msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-b8f45d798-6gv9n,Uid:ec8a48b8-a266-4333-abae-c471e3ab42b1,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"95673f9306fbc995f67824f17f5dd2ac993e0aaf52534b6c103bf6bf2b554446\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:05:56.572673 kubelet[4018]: E1216 13:05:56.572620 4018 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"95673f9306fbc995f67824f17f5dd2ac993e0aaf52534b6c103bf6bf2b554446\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:05:56.572673 kubelet[4018]: E1216 13:05:56.572662 4018 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"95673f9306fbc995f67824f17f5dd2ac993e0aaf52534b6c103bf6bf2b554446\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-b8f45d798-6gv9n" Dec 16 13:05:56.572761 kubelet[4018]: E1216 13:05:56.572681 4018 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"95673f9306fbc995f67824f17f5dd2ac993e0aaf52534b6c103bf6bf2b554446\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-b8f45d798-6gv9n" Dec 16 13:05:56.572761 kubelet[4018]: E1216 13:05:56.572733 4018 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-b8f45d798-6gv9n_calico-system(ec8a48b8-a266-4333-abae-c471e3ab42b1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-b8f45d798-6gv9n_calico-system(ec8a48b8-a266-4333-abae-c471e3ab42b1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"95673f9306fbc995f67824f17f5dd2ac993e0aaf52534b6c103bf6bf2b554446\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-b8f45d798-6gv9n" podUID="ec8a48b8-a266-4333-abae-c471e3ab42b1" Dec 16 13:05:56.582617 containerd[2540]: time="2025-12-16T13:05:56.582591051Z" level=error msg="Failed to destroy network for sandbox \"730440ce58caca2230d957e669c533194c4793046503c449cb2b4d04b28acca8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:05:56.593128 containerd[2540]: time="2025-12-16T13:05:56.593096924Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7798f6444b-zjrsf,Uid:fa544c8c-af21-41f2-8ffb-1fe7c36b0bfb,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"730440ce58caca2230d957e669c533194c4793046503c449cb2b4d04b28acca8\": plugin type=\"calico\" failed (add): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:05:56.593281 kubelet[4018]: E1216 13:05:56.593254 4018 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"730440ce58caca2230d957e669c533194c4793046503c449cb2b4d04b28acca8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:05:56.593327 kubelet[4018]: E1216 13:05:56.593299 4018 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"730440ce58caca2230d957e669c533194c4793046503c449cb2b4d04b28acca8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7798f6444b-zjrsf" Dec 16 13:05:56.593390 kubelet[4018]: E1216 13:05:56.593329 4018 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"730440ce58caca2230d957e669c533194c4793046503c449cb2b4d04b28acca8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7798f6444b-zjrsf" Dec 16 13:05:56.593414 kubelet[4018]: E1216 13:05:56.593398 4018 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7798f6444b-zjrsf_calico-apiserver(fa544c8c-af21-41f2-8ffb-1fe7c36b0bfb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7798f6444b-zjrsf_calico-apiserver(fa544c8c-af21-41f2-8ffb-1fe7c36b0bfb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"730440ce58caca2230d957e669c533194c4793046503c449cb2b4d04b28acca8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7798f6444b-zjrsf" podUID="fa544c8c-af21-41f2-8ffb-1fe7c36b0bfb" Dec 16 13:05:56.713952 systemd[1]: Created slice kubepods-besteffort-pod940a093b_83dc_454c_8522_5e1b1f40521f.slice - libcontainer container kubepods-besteffort-pod940a093b_83dc_454c_8522_5e1b1f40521f.slice. 
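By this point the same CreatePodSandbox failure has been logged for every pending workload (both coredns replicas, calico-kube-controllers, both calico-apiserver replicas, goldmane and whisker), differing only in pod name and sandbox ID. When triaging a burst like this it can help to collapse the noise to one entry per pod; the sketch below is only an illustration, and its regular expressions are assumptions tuned to the exact kubelet formatting seen in these lines:

    import re
    from collections import OrderedDict

    # Matches the trailing fields kubelet appends to "Error syncing pod, skipping"
    # lines above, e.g. pod="kube-system/coredns-66bc5c9577-22fpt" podUID="c46c..."
    POD_RE = re.compile(r'pod="(?P<pod>[^"]+)" podUID="(?P<uid>[^"]+)"')
    # Sandbox IDs appear as 64-character hex strings inside the error text.
    SANDBOX_RE = re.compile(r'[0-9a-f]{64}')

    def summarize_sandbox_failures(lines):
        """Collapse repeated CreatePodSandbox errors to one record per pod."""
        failures = OrderedDict()
        for line in lines:
            if "Error syncing pod, skipping" not in line:
                continue
            pod = POD_RE.search(line)
            if not pod:
                continue
            sandbox = SANDBOX_RE.search(line)
            failures[pod.group("pod")] = {
                "uid": pod.group("uid"),
                "sandbox": sandbox.group(0) if sandbox else None,
            }
        return failures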
Dec 16 13:05:56.724854 containerd[2540]: time="2025-12-16T13:05:56.724825732Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ctchn,Uid:940a093b-83dc-454c-8522-5e1b1f40521f,Namespace:calico-system,Attempt:0,}" Dec 16 13:05:56.782733 containerd[2540]: time="2025-12-16T13:05:56.782690992Z" level=error msg="Failed to destroy network for sandbox \"14cac750007201cf3d8d0332c99849118f548a8ee8e3d1b8ab9259acb7eacff6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:05:56.797784 containerd[2540]: time="2025-12-16T13:05:56.797746463Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ctchn,Uid:940a093b-83dc-454c-8522-5e1b1f40521f,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"14cac750007201cf3d8d0332c99849118f548a8ee8e3d1b8ab9259acb7eacff6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:05:56.797976 kubelet[4018]: E1216 13:05:56.797934 4018 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"14cac750007201cf3d8d0332c99849118f548a8ee8e3d1b8ab9259acb7eacff6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:05:56.798286 kubelet[4018]: E1216 13:05:56.797994 4018 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"14cac750007201cf3d8d0332c99849118f548a8ee8e3d1b8ab9259acb7eacff6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ctchn" Dec 16 13:05:56.798286 kubelet[4018]: E1216 13:05:56.798022 4018 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"14cac750007201cf3d8d0332c99849118f548a8ee8e3d1b8ab9259acb7eacff6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ctchn" Dec 16 13:05:56.798286 kubelet[4018]: E1216 13:05:56.798075 4018 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-ctchn_calico-system(940a093b-83dc-454c-8522-5e1b1f40521f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-ctchn_calico-system(940a093b-83dc-454c-8522-5e1b1f40521f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"14cac750007201cf3d8d0332c99849118f548a8ee8e3d1b8ab9259acb7eacff6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-ctchn" podUID="940a093b-83dc-454c-8522-5e1b1f40521f" Dec 16 13:05:56.852238 containerd[2540]: time="2025-12-16T13:05:56.852107307Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/node:v3.30.4\"" Dec 16 13:06:01.036322 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2198038806.mount: Deactivated successfully. Dec 16 13:06:01.078567 containerd[2540]: time="2025-12-16T13:06:01.078504540Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:06:01.082822 containerd[2540]: time="2025-12-16T13:06:01.082778157Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156880025" Dec 16 13:06:01.086038 containerd[2540]: time="2025-12-16T13:06:01.085990711Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:06:01.090051 containerd[2540]: time="2025-12-16T13:06:01.089991746Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:06:01.090613 containerd[2540]: time="2025-12-16T13:06:01.090296212Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 4.238152083s" Dec 16 13:06:01.090613 containerd[2540]: time="2025-12-16T13:06:01.090331792Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Dec 16 13:06:01.110907 containerd[2540]: time="2025-12-16T13:06:01.110869743Z" level=info msg="CreateContainer within sandbox \"a9bf4258a59d2691c327c6bfa2ed84410b663bbe3142c5e898da5587a6eaf1b1\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Dec 16 13:06:01.137685 containerd[2540]: time="2025-12-16T13:06:01.137647003Z" level=info msg="Container d502ac9fc44f7abed50066ce38fa91facf3b57ef712e2fa93a1e0f310a3026e1: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:06:01.162783 containerd[2540]: time="2025-12-16T13:06:01.162752696Z" level=info msg="CreateContainer within sandbox \"a9bf4258a59d2691c327c6bfa2ed84410b663bbe3142c5e898da5587a6eaf1b1\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"d502ac9fc44f7abed50066ce38fa91facf3b57ef712e2fa93a1e0f310a3026e1\"" Dec 16 13:06:01.164400 containerd[2540]: time="2025-12-16T13:06:01.163217732Z" level=info msg="StartContainer for \"d502ac9fc44f7abed50066ce38fa91facf3b57ef712e2fa93a1e0f310a3026e1\"" Dec 16 13:06:01.164746 containerd[2540]: time="2025-12-16T13:06:01.164703563Z" level=info msg="connecting to shim d502ac9fc44f7abed50066ce38fa91facf3b57ef712e2fa93a1e0f310a3026e1" address="unix:///run/containerd/s/30bc6743cbb383c683f3fce89c5b19d52e404fe88f60846b755da225539a8068" protocol=ttrpc version=3 Dec 16 13:06:01.184550 systemd[1]: Started cri-containerd-d502ac9fc44f7abed50066ce38fa91facf3b57ef712e2fa93a1e0f310a3026e1.scope - libcontainer container d502ac9fc44f7abed50066ce38fa91facf3b57ef712e2fa93a1e0f310a3026e1. 
Dec 16 13:06:01.239000 audit: BPF prog-id=201 op=LOAD Dec 16 13:06:01.240995 kernel: kauditd_printk_skb: 6 callbacks suppressed Dec 16 13:06:01.241065 kernel: audit: type=1334 audit(1765890361.239:612): prog-id=201 op=LOAD Dec 16 13:06:01.239000 audit[5102]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=4607 pid=5102 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:06:01.254878 kernel: audit: type=1300 audit(1765890361.239:612): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=4607 pid=5102 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:06:01.254959 kernel: audit: type=1327 audit(1765890361.239:612): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6435303261633966633434663761626564353030363663653338666139 Dec 16 13:06:01.239000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6435303261633966633434663761626564353030363663653338666139 Dec 16 13:06:01.257068 kernel: audit: type=1334 audit(1765890361.240:613): prog-id=202 op=LOAD Dec 16 13:06:01.240000 audit: BPF prog-id=202 op=LOAD Dec 16 13:06:01.240000 audit[5102]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=4607 pid=5102 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:06:01.268399 kernel: audit: type=1300 audit(1765890361.240:613): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=4607 pid=5102 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:06:01.268483 kernel: audit: type=1327 audit(1765890361.240:613): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6435303261633966633434663761626564353030363663653338666139 Dec 16 13:06:01.240000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6435303261633966633434663761626564353030363663653338666139 Dec 16 13:06:01.278369 kernel: audit: type=1334 audit(1765890361.240:614): prog-id=202 op=UNLOAD Dec 16 13:06:01.278438 kernel: audit: type=1300 audit(1765890361.240:614): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4607 pid=5102 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:06:01.240000 audit: BPF prog-id=202 op=UNLOAD Dec 16 13:06:01.240000 audit[5102]: 
SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4607 pid=5102 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:06:01.286613 kernel: audit: type=1327 audit(1765890361.240:614): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6435303261633966633434663761626564353030363663653338666139 Dec 16 13:06:01.240000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6435303261633966633434663761626564353030363663653338666139 Dec 16 13:06:01.240000 audit: BPF prog-id=201 op=UNLOAD Dec 16 13:06:01.290386 kernel: audit: type=1334 audit(1765890361.240:615): prog-id=201 op=UNLOAD Dec 16 13:06:01.240000 audit[5102]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4607 pid=5102 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:06:01.240000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6435303261633966633434663761626564353030363663653338666139 Dec 16 13:06:01.240000 audit: BPF prog-id=203 op=LOAD Dec 16 13:06:01.240000 audit[5102]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=4607 pid=5102 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:06:01.240000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6435303261633966633434663761626564353030363663653338666139 Dec 16 13:06:01.306660 containerd[2540]: time="2025-12-16T13:06:01.306620164Z" level=info msg="StartContainer for \"d502ac9fc44f7abed50066ce38fa91facf3b57ef712e2fa93a1e0f310a3026e1\" returns successfully" Dec 16 13:06:01.582020 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Dec 16 13:06:01.582160 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
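The audit PROCTITLE fields in the records above are the triggering command line, hex-encoded with NUL-separated arguments. Decoding the value recorded for pid 5102 shows it is runc starting the calico-node task (the container ID is cut short by the kernel's proctitle length limit). A small decoder using only the standard library; the variable names are illustrative:

    def decode_proctitle(hex_value: str) -> str:
        """Decode an audit PROCTITLE hex blob; arguments are NUL-separated."""
        raw = bytes.fromhex(hex_value)
        return " ".join(p.decode("utf-8", "replace") for p in raw.split(b"\x00") if p)

    # Value copied from the audit record for pid 5102 above, split for readability.
    hex_value = (
        "72756E63" "00" "2D2D726F6F74" "00"
        "2F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F" "00"
        "2D2D6C6F67" "00"
        "2F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F"
        "6435303261633966633434663761626564353030363663653338666139"
    )
    print(decode_proctitle(hex_value))
    # runc --root /run/containerd/runc/k8s.io --log /run/containerd/io.containerd.runtime.v2.task/k8s.io/d502ac9fc44f7abed50066ce38fa9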
Dec 16 13:06:01.767101 kubelet[4018]: I1216 13:06:01.766485 4018 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2j6z4\" (UniqueName: \"kubernetes.io/projected/ec8a48b8-a266-4333-abae-c471e3ab42b1-kube-api-access-2j6z4\") pod \"ec8a48b8-a266-4333-abae-c471e3ab42b1\" (UID: \"ec8a48b8-a266-4333-abae-c471e3ab42b1\") " Dec 16 13:06:01.767101 kubelet[4018]: I1216 13:06:01.766532 4018 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/ec8a48b8-a266-4333-abae-c471e3ab42b1-whisker-backend-key-pair\") pod \"ec8a48b8-a266-4333-abae-c471e3ab42b1\" (UID: \"ec8a48b8-a266-4333-abae-c471e3ab42b1\") " Dec 16 13:06:01.767101 kubelet[4018]: I1216 13:06:01.766551 4018 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ec8a48b8-a266-4333-abae-c471e3ab42b1-whisker-ca-bundle\") pod \"ec8a48b8-a266-4333-abae-c471e3ab42b1\" (UID: \"ec8a48b8-a266-4333-abae-c471e3ab42b1\") " Dec 16 13:06:01.767101 kubelet[4018]: I1216 13:06:01.766877 4018 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec8a48b8-a266-4333-abae-c471e3ab42b1-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "ec8a48b8-a266-4333-abae-c471e3ab42b1" (UID: "ec8a48b8-a266-4333-abae-c471e3ab42b1"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 16 13:06:01.772783 kubelet[4018]: I1216 13:06:01.772752 4018 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec8a48b8-a266-4333-abae-c471e3ab42b1-kube-api-access-2j6z4" (OuterVolumeSpecName: "kube-api-access-2j6z4") pod "ec8a48b8-a266-4333-abae-c471e3ab42b1" (UID: "ec8a48b8-a266-4333-abae-c471e3ab42b1"). InnerVolumeSpecName "kube-api-access-2j6z4". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 16 13:06:01.773353 kubelet[4018]: I1216 13:06:01.773194 4018 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec8a48b8-a266-4333-abae-c471e3ab42b1-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "ec8a48b8-a266-4333-abae-c471e3ab42b1" (UID: "ec8a48b8-a266-4333-abae-c471e3ab42b1"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 16 13:06:01.870689 kubelet[4018]: I1216 13:06:01.870660 4018 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/ec8a48b8-a266-4333-abae-c471e3ab42b1-whisker-backend-key-pair\") on node \"ci-4515.1.0-a-5ae2bb3665\" DevicePath \"\"" Dec 16 13:06:01.870689 kubelet[4018]: I1216 13:06:01.870689 4018 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ec8a48b8-a266-4333-abae-c471e3ab42b1-whisker-ca-bundle\") on node \"ci-4515.1.0-a-5ae2bb3665\" DevicePath \"\"" Dec 16 13:06:01.870840 kubelet[4018]: I1216 13:06:01.870698 4018 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-2j6z4\" (UniqueName: \"kubernetes.io/projected/ec8a48b8-a266-4333-abae-c471e3ab42b1-kube-api-access-2j6z4\") on node \"ci-4515.1.0-a-5ae2bb3665\" DevicePath \"\"" Dec 16 13:06:01.873972 systemd[1]: Removed slice kubepods-besteffort-podec8a48b8_a266_4333_abae_c471e3ab42b1.slice - libcontainer container kubepods-besteffort-podec8a48b8_a266_4333_abae_c471e3ab42b1.slice. Dec 16 13:06:01.924825 kubelet[4018]: I1216 13:06:01.924406 4018 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-s9njd" podStartSLOduration=1.750019945 podStartE2EDuration="17.924387208s" podCreationTimestamp="2025-12-16 13:05:44 +0000 UTC" firstStartedPulling="2025-12-16 13:05:44.917043924 +0000 UTC m=+21.316009627" lastFinishedPulling="2025-12-16 13:06:01.0914112 +0000 UTC m=+37.490376890" observedRunningTime="2025-12-16 13:06:01.906032073 +0000 UTC m=+38.304997783" watchObservedRunningTime="2025-12-16 13:06:01.924387208 +0000 UTC m=+38.323353086" Dec 16 13:06:02.004028 systemd[1]: Created slice kubepods-besteffort-pod7c146d92_4a81_4948_9e2f_1093c61dcd5c.slice - libcontainer container kubepods-besteffort-pod7c146d92_4a81_4948_9e2f_1093c61dcd5c.slice. Dec 16 13:06:02.035184 systemd[1]: var-lib-kubelet-pods-ec8a48b8\x2da266\x2d4333\x2dabae\x2dc471e3ab42b1-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d2j6z4.mount: Deactivated successfully. Dec 16 13:06:02.035932 systemd[1]: var-lib-kubelet-pods-ec8a48b8\x2da266\x2d4333\x2dabae\x2dc471e3ab42b1-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Dec 16 13:06:02.073794 kubelet[4018]: I1216 13:06:02.073762 4018 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/7c146d92-4a81-4948-9e2f-1093c61dcd5c-whisker-backend-key-pair\") pod \"whisker-66f67cd584-8rhwp\" (UID: \"7c146d92-4a81-4948-9e2f-1093c61dcd5c\") " pod="calico-system/whisker-66f67cd584-8rhwp" Dec 16 13:06:02.073996 kubelet[4018]: I1216 13:06:02.073809 4018 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7c146d92-4a81-4948-9e2f-1093c61dcd5c-whisker-ca-bundle\") pod \"whisker-66f67cd584-8rhwp\" (UID: \"7c146d92-4a81-4948-9e2f-1093c61dcd5c\") " pod="calico-system/whisker-66f67cd584-8rhwp" Dec 16 13:06:02.073996 kubelet[4018]: I1216 13:06:02.073830 4018 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rjcj\" (UniqueName: \"kubernetes.io/projected/7c146d92-4a81-4948-9e2f-1093c61dcd5c-kube-api-access-7rjcj\") pod \"whisker-66f67cd584-8rhwp\" (UID: \"7c146d92-4a81-4948-9e2f-1093c61dcd5c\") " pod="calico-system/whisker-66f67cd584-8rhwp" Dec 16 13:06:02.320296 containerd[2540]: time="2025-12-16T13:06:02.319971626Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-66f67cd584-8rhwp,Uid:7c146d92-4a81-4948-9e2f-1093c61dcd5c,Namespace:calico-system,Attempt:0,}" Dec 16 13:06:02.442355 systemd-networkd[2150]: caliacfd055ee67: Link UP Dec 16 13:06:02.442834 systemd-networkd[2150]: caliacfd055ee67: Gained carrier Dec 16 13:06:02.458982 containerd[2540]: 2025-12-16 13:06:02.364 [INFO][5168] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 16 13:06:02.458982 containerd[2540]: 2025-12-16 13:06:02.374 [INFO][5168] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4515.1.0--a--5ae2bb3665-k8s-whisker--66f67cd584--8rhwp-eth0 whisker-66f67cd584- calico-system 7c146d92-4a81-4948-9e2f-1093c61dcd5c 925 0 2025-12-16 13:06:01 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:66f67cd584 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4515.1.0-a-5ae2bb3665 whisker-66f67cd584-8rhwp eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] caliacfd055ee67 [] [] }} ContainerID="69c35c45bb34e56a98c60ce4ab8c9ec5edf662bb1f6e1d2ff22b536ccd39743a" Namespace="calico-system" Pod="whisker-66f67cd584-8rhwp" WorkloadEndpoint="ci--4515.1.0--a--5ae2bb3665-k8s-whisker--66f67cd584--8rhwp-" Dec 16 13:06:02.458982 containerd[2540]: 2025-12-16 13:06:02.374 [INFO][5168] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="69c35c45bb34e56a98c60ce4ab8c9ec5edf662bb1f6e1d2ff22b536ccd39743a" Namespace="calico-system" Pod="whisker-66f67cd584-8rhwp" WorkloadEndpoint="ci--4515.1.0--a--5ae2bb3665-k8s-whisker--66f67cd584--8rhwp-eth0" Dec 16 13:06:02.458982 containerd[2540]: 2025-12-16 13:06:02.398 [INFO][5180] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="69c35c45bb34e56a98c60ce4ab8c9ec5edf662bb1f6e1d2ff22b536ccd39743a" HandleID="k8s-pod-network.69c35c45bb34e56a98c60ce4ab8c9ec5edf662bb1f6e1d2ff22b536ccd39743a" Workload="ci--4515.1.0--a--5ae2bb3665-k8s-whisker--66f67cd584--8rhwp-eth0" Dec 16 13:06:02.459237 containerd[2540]: 2025-12-16 13:06:02.398 [INFO][5180] ipam/ipam_plugin.go 275: 
Auto assigning IP ContainerID="69c35c45bb34e56a98c60ce4ab8c9ec5edf662bb1f6e1d2ff22b536ccd39743a" HandleID="k8s-pod-network.69c35c45bb34e56a98c60ce4ab8c9ec5edf662bb1f6e1d2ff22b536ccd39743a" Workload="ci--4515.1.0--a--5ae2bb3665-k8s-whisker--66f67cd584--8rhwp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5710), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4515.1.0-a-5ae2bb3665", "pod":"whisker-66f67cd584-8rhwp", "timestamp":"2025-12-16 13:06:02.398695269 +0000 UTC"}, Hostname:"ci-4515.1.0-a-5ae2bb3665", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 13:06:02.459237 containerd[2540]: 2025-12-16 13:06:02.398 [INFO][5180] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 13:06:02.459237 containerd[2540]: 2025-12-16 13:06:02.398 [INFO][5180] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 13:06:02.459237 containerd[2540]: 2025-12-16 13:06:02.398 [INFO][5180] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4515.1.0-a-5ae2bb3665' Dec 16 13:06:02.459237 containerd[2540]: 2025-12-16 13:06:02.404 [INFO][5180] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.69c35c45bb34e56a98c60ce4ab8c9ec5edf662bb1f6e1d2ff22b536ccd39743a" host="ci-4515.1.0-a-5ae2bb3665" Dec 16 13:06:02.459237 containerd[2540]: 2025-12-16 13:06:02.408 [INFO][5180] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4515.1.0-a-5ae2bb3665" Dec 16 13:06:02.459237 containerd[2540]: 2025-12-16 13:06:02.412 [INFO][5180] ipam/ipam.go 511: Trying affinity for 192.168.5.64/26 host="ci-4515.1.0-a-5ae2bb3665" Dec 16 13:06:02.459237 containerd[2540]: 2025-12-16 13:06:02.413 [INFO][5180] ipam/ipam.go 158: Attempting to load block cidr=192.168.5.64/26 host="ci-4515.1.0-a-5ae2bb3665" Dec 16 13:06:02.459237 containerd[2540]: 2025-12-16 13:06:02.415 [INFO][5180] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.5.64/26 host="ci-4515.1.0-a-5ae2bb3665" Dec 16 13:06:02.459493 containerd[2540]: 2025-12-16 13:06:02.415 [INFO][5180] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.5.64/26 handle="k8s-pod-network.69c35c45bb34e56a98c60ce4ab8c9ec5edf662bb1f6e1d2ff22b536ccd39743a" host="ci-4515.1.0-a-5ae2bb3665" Dec 16 13:06:02.459493 containerd[2540]: 2025-12-16 13:06:02.416 [INFO][5180] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.69c35c45bb34e56a98c60ce4ab8c9ec5edf662bb1f6e1d2ff22b536ccd39743a Dec 16 13:06:02.459493 containerd[2540]: 2025-12-16 13:06:02.420 [INFO][5180] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.5.64/26 handle="k8s-pod-network.69c35c45bb34e56a98c60ce4ab8c9ec5edf662bb1f6e1d2ff22b536ccd39743a" host="ci-4515.1.0-a-5ae2bb3665" Dec 16 13:06:02.459493 containerd[2540]: 2025-12-16 13:06:02.431 [INFO][5180] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.5.65/26] block=192.168.5.64/26 handle="k8s-pod-network.69c35c45bb34e56a98c60ce4ab8c9ec5edf662bb1f6e1d2ff22b536ccd39743a" host="ci-4515.1.0-a-5ae2bb3665" Dec 16 13:06:02.459493 containerd[2540]: 2025-12-16 13:06:02.431 [INFO][5180] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.5.65/26] handle="k8s-pod-network.69c35c45bb34e56a98c60ce4ab8c9ec5edf662bb1f6e1d2ff22b536ccd39743a" host="ci-4515.1.0-a-5ae2bb3665" Dec 16 13:06:02.459493 containerd[2540]: 2025-12-16 13:06:02.431 [INFO][5180] 
ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 13:06:02.459493 containerd[2540]: 2025-12-16 13:06:02.431 [INFO][5180] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.5.65/26] IPv6=[] ContainerID="69c35c45bb34e56a98c60ce4ab8c9ec5edf662bb1f6e1d2ff22b536ccd39743a" HandleID="k8s-pod-network.69c35c45bb34e56a98c60ce4ab8c9ec5edf662bb1f6e1d2ff22b536ccd39743a" Workload="ci--4515.1.0--a--5ae2bb3665-k8s-whisker--66f67cd584--8rhwp-eth0" Dec 16 13:06:02.459662 containerd[2540]: 2025-12-16 13:06:02.435 [INFO][5168] cni-plugin/k8s.go 418: Populated endpoint ContainerID="69c35c45bb34e56a98c60ce4ab8c9ec5edf662bb1f6e1d2ff22b536ccd39743a" Namespace="calico-system" Pod="whisker-66f67cd584-8rhwp" WorkloadEndpoint="ci--4515.1.0--a--5ae2bb3665-k8s-whisker--66f67cd584--8rhwp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--a--5ae2bb3665-k8s-whisker--66f67cd584--8rhwp-eth0", GenerateName:"whisker-66f67cd584-", Namespace:"calico-system", SelfLink:"", UID:"7c146d92-4a81-4948-9e2f-1093c61dcd5c", ResourceVersion:"925", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 6, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"66f67cd584", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-a-5ae2bb3665", ContainerID:"", Pod:"whisker-66f67cd584-8rhwp", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.5.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"caliacfd055ee67", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:06:02.459662 containerd[2540]: 2025-12-16 13:06:02.435 [INFO][5168] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.5.65/32] ContainerID="69c35c45bb34e56a98c60ce4ab8c9ec5edf662bb1f6e1d2ff22b536ccd39743a" Namespace="calico-system" Pod="whisker-66f67cd584-8rhwp" WorkloadEndpoint="ci--4515.1.0--a--5ae2bb3665-k8s-whisker--66f67cd584--8rhwp-eth0" Dec 16 13:06:02.459774 containerd[2540]: 2025-12-16 13:06:02.435 [INFO][5168] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliacfd055ee67 ContainerID="69c35c45bb34e56a98c60ce4ab8c9ec5edf662bb1f6e1d2ff22b536ccd39743a" Namespace="calico-system" Pod="whisker-66f67cd584-8rhwp" WorkloadEndpoint="ci--4515.1.0--a--5ae2bb3665-k8s-whisker--66f67cd584--8rhwp-eth0" Dec 16 13:06:02.459774 containerd[2540]: 2025-12-16 13:06:02.442 [INFO][5168] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="69c35c45bb34e56a98c60ce4ab8c9ec5edf662bb1f6e1d2ff22b536ccd39743a" Namespace="calico-system" Pod="whisker-66f67cd584-8rhwp" WorkloadEndpoint="ci--4515.1.0--a--5ae2bb3665-k8s-whisker--66f67cd584--8rhwp-eth0" Dec 16 13:06:02.459834 containerd[2540]: 2025-12-16 13:06:02.443 [INFO][5168] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="69c35c45bb34e56a98c60ce4ab8c9ec5edf662bb1f6e1d2ff22b536ccd39743a" Namespace="calico-system" Pod="whisker-66f67cd584-8rhwp" WorkloadEndpoint="ci--4515.1.0--a--5ae2bb3665-k8s-whisker--66f67cd584--8rhwp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--a--5ae2bb3665-k8s-whisker--66f67cd584--8rhwp-eth0", GenerateName:"whisker-66f67cd584-", Namespace:"calico-system", SelfLink:"", UID:"7c146d92-4a81-4948-9e2f-1093c61dcd5c", ResourceVersion:"925", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 6, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"66f67cd584", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-a-5ae2bb3665", ContainerID:"69c35c45bb34e56a98c60ce4ab8c9ec5edf662bb1f6e1d2ff22b536ccd39743a", Pod:"whisker-66f67cd584-8rhwp", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.5.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"caliacfd055ee67", MAC:"46:c8:2d:72:10:83", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:06:02.459901 containerd[2540]: 2025-12-16 13:06:02.455 [INFO][5168] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="69c35c45bb34e56a98c60ce4ab8c9ec5edf662bb1f6e1d2ff22b536ccd39743a" Namespace="calico-system" Pod="whisker-66f67cd584-8rhwp" WorkloadEndpoint="ci--4515.1.0--a--5ae2bb3665-k8s-whisker--66f67cd584--8rhwp-eth0" Dec 16 13:06:02.508364 containerd[2540]: time="2025-12-16T13:06:02.508297917Z" level=info msg="connecting to shim 69c35c45bb34e56a98c60ce4ab8c9ec5edf662bb1f6e1d2ff22b536ccd39743a" address="unix:///run/containerd/s/28a7abfd9ec3b996e72bd8d5f1542cd6e60d0bb918ec42c0c7eee9bd374e2386" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:06:02.529601 systemd[1]: Started cri-containerd-69c35c45bb34e56a98c60ce4ab8c9ec5edf662bb1f6e1d2ff22b536ccd39743a.scope - libcontainer container 69c35c45bb34e56a98c60ce4ab8c9ec5edf662bb1f6e1d2ff22b536ccd39743a. 
Dec 16 13:06:02.538000 audit: BPF prog-id=204 op=LOAD Dec 16 13:06:02.538000 audit: BPF prog-id=205 op=LOAD Dec 16 13:06:02.538000 audit[5214]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=5202 pid=5214 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:06:02.538000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3639633335633435626233346535366139386336306365346162386339 Dec 16 13:06:02.538000 audit: BPF prog-id=205 op=UNLOAD Dec 16 13:06:02.538000 audit[5214]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=5202 pid=5214 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:06:02.538000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3639633335633435626233346535366139386336306365346162386339 Dec 16 13:06:02.538000 audit: BPF prog-id=206 op=LOAD Dec 16 13:06:02.538000 audit[5214]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=5202 pid=5214 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:06:02.538000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3639633335633435626233346535366139386336306365346162386339 Dec 16 13:06:02.539000 audit: BPF prog-id=207 op=LOAD Dec 16 13:06:02.539000 audit[5214]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=5202 pid=5214 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:06:02.539000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3639633335633435626233346535366139386336306365346162386339 Dec 16 13:06:02.539000 audit: BPF prog-id=207 op=UNLOAD Dec 16 13:06:02.539000 audit[5214]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=5202 pid=5214 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:06:02.539000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3639633335633435626233346535366139386336306365346162386339 Dec 16 13:06:02.539000 audit: BPF prog-id=206 op=UNLOAD Dec 16 13:06:02.539000 audit[5214]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=5202 pid=5214 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:06:02.539000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3639633335633435626233346535366139386336306365346162386339 Dec 16 13:06:02.539000 audit: BPF prog-id=208 op=LOAD Dec 16 13:06:02.539000 audit[5214]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=5202 pid=5214 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:06:02.539000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3639633335633435626233346535366139386336306365346162386339 Dec 16 13:06:02.574075 containerd[2540]: time="2025-12-16T13:06:02.573974451Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-66f67cd584-8rhwp,Uid:7c146d92-4a81-4948-9e2f-1093c61dcd5c,Namespace:calico-system,Attempt:0,} returns sandbox id \"69c35c45bb34e56a98c60ce4ab8c9ec5edf662bb1f6e1d2ff22b536ccd39743a\"" Dec 16 13:06:02.577447 containerd[2540]: time="2025-12-16T13:06:02.577409054Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 16 13:06:02.885714 containerd[2540]: time="2025-12-16T13:06:02.885446686Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:06:02.890317 containerd[2540]: time="2025-12-16T13:06:02.890278063Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 16 13:06:02.890499 containerd[2540]: time="2025-12-16T13:06:02.890319087Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 16 13:06:02.890711 kubelet[4018]: E1216 13:06:02.890678 4018 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 13:06:02.891095 kubelet[4018]: E1216 13:06:02.890729 4018 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 13:06:02.891095 kubelet[4018]: E1216 13:06:02.890822 4018 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-66f67cd584-8rhwp_calico-system(7c146d92-4a81-4948-9e2f-1093c61dcd5c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not 
found" logger="UnhandledError" Dec 16 13:06:02.892724 containerd[2540]: time="2025-12-16T13:06:02.892687105Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 16 13:06:03.193230 containerd[2540]: time="2025-12-16T13:06:03.193169000Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:06:03.201463 containerd[2540]: time="2025-12-16T13:06:03.201305574Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 16 13:06:03.201463 containerd[2540]: time="2025-12-16T13:06:03.201356649Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 16 13:06:03.201684 kubelet[4018]: E1216 13:06:03.201635 4018 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 13:06:03.201753 kubelet[4018]: E1216 13:06:03.201712 4018 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 13:06:03.201844 kubelet[4018]: E1216 13:06:03.201825 4018 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-66f67cd584-8rhwp_calico-system(7c146d92-4a81-4948-9e2f-1093c61dcd5c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 16 13:06:03.201917 kubelet[4018]: E1216 13:06:03.201884 4018 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-66f67cd584-8rhwp" podUID="7c146d92-4a81-4948-9e2f-1093c61dcd5c" Dec 16 13:06:03.374000 audit: BPF prog-id=209 op=LOAD Dec 16 13:06:03.374000 audit[5358]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe139548c0 a2=98 a3=1fffffffffffffff items=0 ppid=5252 pid=5358 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:06:03.374000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 13:06:03.374000 audit: BPF prog-id=209 op=UNLOAD Dec 16 13:06:03.374000 audit[5358]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffe13954890 a3=0 items=0 ppid=5252 pid=5358 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:06:03.374000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 13:06:03.374000 audit: BPF prog-id=210 op=LOAD Dec 16 13:06:03.374000 audit[5358]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe139547a0 a2=94 a3=3 items=0 ppid=5252 pid=5358 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:06:03.374000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 13:06:03.374000 audit: BPF prog-id=210 op=UNLOAD Dec 16 13:06:03.374000 audit[5358]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffe139547a0 a2=94 a3=3 items=0 ppid=5252 pid=5358 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:06:03.374000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 13:06:03.374000 audit: BPF prog-id=211 op=LOAD Dec 16 13:06:03.374000 audit[5358]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe139547e0 a2=94 a3=7ffe139549c0 items=0 ppid=5252 pid=5358 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:06:03.374000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 13:06:03.375000 audit: BPF prog-id=211 op=UNLOAD Dec 16 13:06:03.375000 audit[5358]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffe139547e0 a2=94 a3=7ffe139549c0 items=0 ppid=5252 pid=5358 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:06:03.375000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 13:06:03.376000 audit: BPF prog-id=212 op=LOAD Dec 16 13:06:03.376000 audit[5359]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fffcab64840 a2=98 a3=3 items=0 ppid=5252 pid=5359 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:06:03.376000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 13:06:03.376000 audit: BPF prog-id=212 op=UNLOAD Dec 16 13:06:03.376000 audit[5359]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7fffcab64810 a3=0 items=0 ppid=5252 pid=5359 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:06:03.376000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 13:06:03.376000 audit: BPF prog-id=213 op=LOAD Dec 16 13:06:03.376000 audit[5359]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fffcab64630 a2=94 a3=54428f items=0 ppid=5252 pid=5359 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:06:03.376000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 13:06:03.376000 audit: BPF prog-id=213 op=UNLOAD Dec 16 13:06:03.376000 audit[5359]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7fffcab64630 a2=94 a3=54428f items=0 ppid=5252 pid=5359 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:06:03.376000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 13:06:03.376000 audit: BPF prog-id=214 op=LOAD Dec 16 13:06:03.376000 audit[5359]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fffcab64660 a2=94 a3=2 items=0 ppid=5252 pid=5359 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:06:03.376000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 13:06:03.376000 audit: BPF prog-id=214 op=UNLOAD Dec 16 13:06:03.376000 audit[5359]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7fffcab64660 a2=0 a3=2 items=0 ppid=5252 pid=5359 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:06:03.376000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 13:06:03.497000 audit: BPF prog-id=215 op=LOAD Dec 16 13:06:03.497000 audit[5359]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fffcab64520 a2=94 a3=1 items=0 ppid=5252 pid=5359 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 
13:06:03.497000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 13:06:03.497000 audit: BPF prog-id=215 op=UNLOAD Dec 16 13:06:03.497000 audit[5359]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7fffcab64520 a2=94 a3=1 items=0 ppid=5252 pid=5359 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:06:03.497000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 13:06:03.507000 audit: BPF prog-id=216 op=LOAD Dec 16 13:06:03.507000 audit[5359]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fffcab64510 a2=94 a3=4 items=0 ppid=5252 pid=5359 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:06:03.507000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 13:06:03.507000 audit: BPF prog-id=216 op=UNLOAD Dec 16 13:06:03.507000 audit[5359]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7fffcab64510 a2=0 a3=4 items=0 ppid=5252 pid=5359 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:06:03.507000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 13:06:03.507000 audit: BPF prog-id=217 op=LOAD Dec 16 13:06:03.507000 audit[5359]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fffcab64370 a2=94 a3=5 items=0 ppid=5252 pid=5359 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:06:03.507000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 13:06:03.507000 audit: BPF prog-id=217 op=UNLOAD Dec 16 13:06:03.507000 audit[5359]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7fffcab64370 a2=0 a3=5 items=0 ppid=5252 pid=5359 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:06:03.507000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 13:06:03.507000 audit: BPF prog-id=218 op=LOAD Dec 16 13:06:03.507000 audit[5359]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fffcab64590 a2=94 a3=6 items=0 ppid=5252 pid=5359 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:06:03.507000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 13:06:03.507000 audit: BPF prog-id=218 op=UNLOAD Dec 16 13:06:03.507000 audit[5359]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7fffcab64590 a2=0 a3=6 items=0 ppid=5252 pid=5359 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:06:03.507000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 13:06:03.508000 audit: BPF prog-id=219 op=LOAD Dec 16 13:06:03.508000 audit[5359]: SYSCALL 
arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fffcab63d40 a2=94 a3=88 items=0 ppid=5252 pid=5359 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:06:03.508000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 13:06:03.508000 audit: BPF prog-id=220 op=LOAD Dec 16 13:06:03.508000 audit[5359]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7fffcab63bc0 a2=94 a3=2 items=0 ppid=5252 pid=5359 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:06:03.508000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 13:06:03.508000 audit: BPF prog-id=220 op=UNLOAD Dec 16 13:06:03.508000 audit[5359]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7fffcab63bf0 a2=0 a3=7fffcab63cf0 items=0 ppid=5252 pid=5359 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:06:03.508000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 13:06:03.508000 audit: BPF prog-id=219 op=UNLOAD Dec 16 13:06:03.508000 audit[5359]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=35cbbd10 a2=0 a3=f90893141ff3a63e items=0 ppid=5252 pid=5359 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:06:03.508000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 13:06:03.515000 audit: BPF prog-id=221 op=LOAD Dec 16 13:06:03.515000 audit[5362]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe63d5aa30 a2=98 a3=1999999999999999 items=0 ppid=5252 pid=5362 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:06:03.515000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 13:06:03.516000 audit: BPF prog-id=221 op=UNLOAD Dec 16 13:06:03.516000 audit[5362]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffe63d5aa00 a3=0 items=0 ppid=5252 pid=5362 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:06:03.516000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 13:06:03.516000 audit: BPF prog-id=222 op=LOAD Dec 16 13:06:03.516000 audit[5362]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe63d5a910 a2=94 a3=ffff items=0 ppid=5252 pid=5362 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:06:03.516000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 13:06:03.516000 audit: BPF prog-id=222 op=UNLOAD Dec 16 13:06:03.516000 audit[5362]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffe63d5a910 a2=94 a3=ffff items=0 ppid=5252 pid=5362 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:06:03.516000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 13:06:03.516000 audit: BPF prog-id=223 op=LOAD Dec 16 13:06:03.516000 audit[5362]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe63d5a950 a2=94 a3=7ffe63d5ab30 items=0 ppid=5252 pid=5362 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:06:03.516000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 13:06:03.516000 audit: BPF prog-id=223 op=UNLOAD Dec 16 13:06:03.516000 audit[5362]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffe63d5a950 a2=94 a3=7ffe63d5ab30 items=0 ppid=5252 pid=5362 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:06:03.516000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 13:06:03.690592 systemd-networkd[2150]: vxlan.calico: Link UP Dec 16 13:06:03.690602 systemd-networkd[2150]: vxlan.calico: Gained carrier Dec 16 13:06:03.692488 systemd-networkd[2150]: caliacfd055ee67: Gained IPv6LL Dec 16 13:06:03.713161 kubelet[4018]: I1216 13:06:03.713107 4018 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec8a48b8-a266-4333-abae-c471e3ab42b1" path="/var/lib/kubelet/pods/ec8a48b8-a266-4333-abae-c471e3ab42b1/volumes" Dec 16 13:06:03.716000 audit: BPF prog-id=224 op=LOAD Dec 16 13:06:03.716000 audit[5391]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe73845cc0 a2=98 a3=0 items=0 ppid=5252 pid=5391 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:06:03.716000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 13:06:03.716000 audit: BPF prog-id=224 op=UNLOAD Dec 16 13:06:03.716000 audit[5391]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffe73845c90 a3=0 items=0 ppid=5252 pid=5391 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:06:03.716000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 13:06:03.716000 audit: BPF prog-id=225 op=LOAD Dec 16 13:06:03.716000 audit[5391]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe73845ad0 a2=94 a3=54428f items=0 ppid=5252 pid=5391 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:06:03.716000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 13:06:03.716000 audit: BPF prog-id=225 op=UNLOAD Dec 16 13:06:03.716000 audit[5391]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffe73845ad0 a2=94 a3=54428f items=0 ppid=5252 pid=5391 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:06:03.716000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 13:06:03.716000 audit: BPF prog-id=226 op=LOAD Dec 16 13:06:03.716000 audit[5391]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe73845b00 a2=94 a3=2 items=0 ppid=5252 pid=5391 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:06:03.716000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 13:06:03.716000 audit: BPF prog-id=226 op=UNLOAD Dec 16 13:06:03.716000 audit[5391]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffe73845b00 a2=0 a3=2 items=0 ppid=5252 pid=5391 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:06:03.716000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 13:06:03.716000 audit: BPF prog-id=227 op=LOAD Dec 16 13:06:03.716000 audit[5391]: SYSCALL arch=c000003e 
syscall=321 success=yes exit=6 a0=5 a1=7ffe738458b0 a2=94 a3=4 items=0 ppid=5252 pid=5391 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:06:03.716000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 13:06:03.716000 audit: BPF prog-id=227 op=UNLOAD Dec 16 13:06:03.716000 audit[5391]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffe738458b0 a2=94 a3=4 items=0 ppid=5252 pid=5391 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:06:03.716000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 13:06:03.716000 audit: BPF prog-id=228 op=LOAD Dec 16 13:06:03.716000 audit[5391]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffe738459b0 a2=94 a3=7ffe73845b30 items=0 ppid=5252 pid=5391 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:06:03.716000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 13:06:03.716000 audit: BPF prog-id=228 op=UNLOAD Dec 16 13:06:03.716000 audit[5391]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffe738459b0 a2=0 a3=7ffe73845b30 items=0 ppid=5252 pid=5391 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:06:03.716000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 13:06:03.721000 audit: BPF prog-id=229 op=LOAD Dec 16 13:06:03.721000 audit[5391]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffe738450e0 a2=94 a3=2 items=0 ppid=5252 pid=5391 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:06:03.721000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 13:06:03.721000 audit: BPF prog-id=229 op=UNLOAD Dec 16 13:06:03.721000 audit[5391]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffe738450e0 a2=0 a3=2 items=0 ppid=5252 pid=5391 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:06:03.721000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 13:06:03.721000 audit: BPF prog-id=230 op=LOAD Dec 16 13:06:03.721000 audit[5391]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffe738451e0 a2=94 a3=30 items=0 ppid=5252 pid=5391 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:06:03.721000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 13:06:03.729000 audit: BPF prog-id=231 op=LOAD Dec 16 13:06:03.729000 audit[5397]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe690ecdc0 a2=98 a3=0 items=0 ppid=5252 pid=5397 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:06:03.729000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 13:06:03.729000 audit: BPF prog-id=231 op=UNLOAD Dec 16 13:06:03.729000 audit[5397]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffe690ecd90 a3=0 items=0 ppid=5252 pid=5397 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:06:03.729000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 13:06:03.729000 audit: BPF prog-id=232 op=LOAD Dec 16 13:06:03.729000 audit[5397]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffe690ecbb0 a2=94 a3=54428f items=0 ppid=5252 pid=5397 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:06:03.729000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 13:06:03.729000 audit: BPF prog-id=232 op=UNLOAD Dec 16 13:06:03.729000 audit[5397]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffe690ecbb0 a2=94 a3=54428f items=0 ppid=5252 pid=5397 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:06:03.729000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 13:06:03.729000 audit: BPF prog-id=233 op=LOAD Dec 16 13:06:03.729000 audit[5397]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffe690ecbe0 a2=94 a3=2 items=0 ppid=5252 pid=5397 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:06:03.729000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 13:06:03.729000 audit: BPF prog-id=233 op=UNLOAD Dec 16 13:06:03.729000 audit[5397]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffe690ecbe0 a2=0 a3=2 items=0 ppid=5252 pid=5397 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:06:03.729000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 13:06:03.853000 audit: BPF prog-id=234 op=LOAD Dec 16 13:06:03.853000 audit[5397]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffe690ecaa0 a2=94 a3=1 items=0 ppid=5252 pid=5397 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:06:03.853000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 13:06:03.853000 audit: BPF prog-id=234 op=UNLOAD Dec 16 13:06:03.853000 audit[5397]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffe690ecaa0 a2=94 a3=1 items=0 ppid=5252 pid=5397 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:06:03.853000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 13:06:03.862000 audit: BPF prog-id=235 op=LOAD Dec 16 13:06:03.862000 audit[5397]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffe690eca90 a2=94 a3=4 items=0 ppid=5252 pid=5397 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:06:03.862000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 13:06:03.863000 audit: BPF prog-id=235 op=UNLOAD Dec 16 13:06:03.863000 audit[5397]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffe690eca90 a2=0 a3=4 items=0 ppid=5252 pid=5397 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:06:03.863000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 13:06:03.863000 audit: BPF prog-id=236 op=LOAD Dec 16 13:06:03.863000 audit[5397]: SYSCALL 
arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffe690ec8f0 a2=94 a3=5 items=0 ppid=5252 pid=5397 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:06:03.863000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 13:06:03.863000 audit: BPF prog-id=236 op=UNLOAD Dec 16 13:06:03.863000 audit[5397]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffe690ec8f0 a2=0 a3=5 items=0 ppid=5252 pid=5397 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:06:03.863000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 13:06:03.863000 audit: BPF prog-id=237 op=LOAD Dec 16 13:06:03.863000 audit[5397]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffe690ecb10 a2=94 a3=6 items=0 ppid=5252 pid=5397 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:06:03.863000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 13:06:03.863000 audit: BPF prog-id=237 op=UNLOAD Dec 16 13:06:03.863000 audit[5397]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffe690ecb10 a2=0 a3=6 items=0 ppid=5252 pid=5397 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:06:03.863000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 13:06:03.863000 audit: BPF prog-id=238 op=LOAD Dec 16 13:06:03.863000 audit[5397]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffe690ec2c0 a2=94 a3=88 items=0 ppid=5252 pid=5397 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:06:03.863000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 13:06:03.863000 audit: BPF prog-id=239 op=LOAD Dec 16 13:06:03.863000 audit[5397]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7ffe690ec140 a2=94 a3=2 items=0 ppid=5252 pid=5397 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:06:03.863000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 13:06:03.863000 audit: BPF prog-id=239 op=UNLOAD Dec 16 13:06:03.863000 audit[5397]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7ffe690ec170 a2=0 a3=7ffe690ec270 items=0 ppid=5252 pid=5397 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:06:03.863000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 13:06:03.864000 audit: BPF prog-id=238 op=UNLOAD Dec 16 13:06:03.864000 audit[5397]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=240ead10 a2=0 a3=76bad34be4a895f8 items=0 ppid=5252 pid=5397 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:06:03.864000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 13:06:03.868000 audit: BPF prog-id=230 op=UNLOAD Dec 16 13:06:03.868000 audit[5252]: SYSCALL arch=c000003e syscall=263 success=yes exit=0 a0=ffffffffffffff9c a1=c000a5e300 a2=0 a3=0 items=0 ppid=5245 pid=5252 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="calico-node" exe="/usr/bin/calico-node" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:06:03.868000 audit: PROCTITLE proctitle=63616C69636F2D6E6F6465002D66656C6978 Dec 16 13:06:03.875502 kubelet[4018]: E1216 13:06:03.875461 4018 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-66f67cd584-8rhwp" podUID="7c146d92-4a81-4948-9e2f-1093c61dcd5c" Dec 16 13:06:03.908000 audit[5411]: NETFILTER_CFG table=filter:120 family=2 entries=20 op=nft_register_rule pid=5411 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:06:03.908000 audit[5411]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffdb2b6e080 a2=0 a3=7ffdb2b6e06c items=0 ppid=4127 pid=5411 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:06:03.908000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 
13:06:03.914000 audit[5411]: NETFILTER_CFG table=nat:121 family=2 entries=14 op=nft_register_rule pid=5411 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:06:03.914000 audit[5411]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffdb2b6e080 a2=0 a3=0 items=0 ppid=4127 pid=5411 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:06:03.914000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:06:03.970000 audit[5420]: NETFILTER_CFG table=mangle:122 family=2 entries=16 op=nft_register_chain pid=5420 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 13:06:03.970000 audit[5420]: SYSCALL arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7ffeb9075e70 a2=0 a3=7ffeb9075e5c items=0 ppid=5252 pid=5420 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:06:03.970000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 13:06:03.971000 audit[5423]: NETFILTER_CFG table=nat:123 family=2 entries=15 op=nft_register_chain pid=5423 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 13:06:03.971000 audit[5423]: SYSCALL arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7fffda0721e0 a2=0 a3=7fffda0721cc items=0 ppid=5252 pid=5423 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:06:03.971000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 13:06:03.996000 audit[5422]: NETFILTER_CFG table=raw:124 family=2 entries=21 op=nft_register_chain pid=5422 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 13:06:03.996000 audit[5422]: SYSCALL arch=c000003e syscall=46 success=yes exit=8452 a0=3 a1=7ffd6808fa20 a2=0 a3=7ffd6808fa0c items=0 ppid=5252 pid=5422 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:06:03.996000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 13:06:03.998000 audit[5424]: NETFILTER_CFG table=filter:125 family=2 entries=94 op=nft_register_chain pid=5424 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 13:06:03.998000 audit[5424]: SYSCALL arch=c000003e syscall=46 success=yes exit=53116 a0=3 a1=7ffcfb1aadf0 a2=0 a3=7ffcfb1aaddc items=0 ppid=5252 pid=5424 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:06:03.998000 audit: PROCTITLE 
proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 13:06:04.844568 systemd-networkd[2150]: vxlan.calico: Gained IPv6LL Dec 16 13:06:07.721161 containerd[2540]: time="2025-12-16T13:06:07.721105269Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7798f6444b-p9dhf,Uid:d35c67aa-255b-42a2-83b2-79e30256e265,Namespace:calico-apiserver,Attempt:0,}" Dec 16 13:06:07.728858 containerd[2540]: time="2025-12-16T13:06:07.728820787Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-wcczd,Uid:e4c0041a-7552-4196-b88d-cb0c0c25e0f7,Namespace:kube-system,Attempt:0,}" Dec 16 13:06:07.737726 containerd[2540]: time="2025-12-16T13:06:07.737696308Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-22fpt,Uid:c46c1cd1-ee4e-4f44-a196-4c7989633db4,Namespace:kube-system,Attempt:0,}" Dec 16 13:06:07.890112 systemd-networkd[2150]: calie1bd2f6d408: Link UP Dec 16 13:06:07.890994 systemd-networkd[2150]: calie1bd2f6d408: Gained carrier Dec 16 13:06:07.918546 containerd[2540]: 2025-12-16 13:06:07.785 [INFO][5439] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4515.1.0--a--5ae2bb3665-k8s-calico--apiserver--7798f6444b--p9dhf-eth0 calico-apiserver-7798f6444b- calico-apiserver d35c67aa-255b-42a2-83b2-79e30256e265 862 0 2025-12-16 13:05:40 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7798f6444b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4515.1.0-a-5ae2bb3665 calico-apiserver-7798f6444b-p9dhf eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calie1bd2f6d408 [] [] }} ContainerID="6108fb9a4a8ce66a09b169cf62637977364195790b7db64d4801178a670371a1" Namespace="calico-apiserver" Pod="calico-apiserver-7798f6444b-p9dhf" WorkloadEndpoint="ci--4515.1.0--a--5ae2bb3665-k8s-calico--apiserver--7798f6444b--p9dhf-" Dec 16 13:06:07.918546 containerd[2540]: 2025-12-16 13:06:07.785 [INFO][5439] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6108fb9a4a8ce66a09b169cf62637977364195790b7db64d4801178a670371a1" Namespace="calico-apiserver" Pod="calico-apiserver-7798f6444b-p9dhf" WorkloadEndpoint="ci--4515.1.0--a--5ae2bb3665-k8s-calico--apiserver--7798f6444b--p9dhf-eth0" Dec 16 13:06:07.918546 containerd[2540]: 2025-12-16 13:06:07.826 [INFO][5464] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6108fb9a4a8ce66a09b169cf62637977364195790b7db64d4801178a670371a1" HandleID="k8s-pod-network.6108fb9a4a8ce66a09b169cf62637977364195790b7db64d4801178a670371a1" Workload="ci--4515.1.0--a--5ae2bb3665-k8s-calico--apiserver--7798f6444b--p9dhf-eth0" Dec 16 13:06:07.919033 containerd[2540]: 2025-12-16 13:06:07.827 [INFO][5464] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="6108fb9a4a8ce66a09b169cf62637977364195790b7db64d4801178a670371a1" HandleID="k8s-pod-network.6108fb9a4a8ce66a09b169cf62637977364195790b7db64d4801178a670371a1" Workload="ci--4515.1.0--a--5ae2bb3665-k8s-calico--apiserver--7798f6444b--p9dhf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ad630), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4515.1.0-a-5ae2bb3665", "pod":"calico-apiserver-7798f6444b-p9dhf", 
"timestamp":"2025-12-16 13:06:07.826923902 +0000 UTC"}, Hostname:"ci-4515.1.0-a-5ae2bb3665", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 13:06:07.919033 containerd[2540]: 2025-12-16 13:06:07.828 [INFO][5464] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 13:06:07.919033 containerd[2540]: 2025-12-16 13:06:07.828 [INFO][5464] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 13:06:07.919033 containerd[2540]: 2025-12-16 13:06:07.828 [INFO][5464] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4515.1.0-a-5ae2bb3665' Dec 16 13:06:07.919033 containerd[2540]: 2025-12-16 13:06:07.838 [INFO][5464] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6108fb9a4a8ce66a09b169cf62637977364195790b7db64d4801178a670371a1" host="ci-4515.1.0-a-5ae2bb3665" Dec 16 13:06:07.919033 containerd[2540]: 2025-12-16 13:06:07.843 [INFO][5464] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4515.1.0-a-5ae2bb3665" Dec 16 13:06:07.919033 containerd[2540]: 2025-12-16 13:06:07.851 [INFO][5464] ipam/ipam.go 511: Trying affinity for 192.168.5.64/26 host="ci-4515.1.0-a-5ae2bb3665" Dec 16 13:06:07.919033 containerd[2540]: 2025-12-16 13:06:07.853 [INFO][5464] ipam/ipam.go 158: Attempting to load block cidr=192.168.5.64/26 host="ci-4515.1.0-a-5ae2bb3665" Dec 16 13:06:07.919033 containerd[2540]: 2025-12-16 13:06:07.855 [INFO][5464] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.5.64/26 host="ci-4515.1.0-a-5ae2bb3665" Dec 16 13:06:07.919529 containerd[2540]: 2025-12-16 13:06:07.855 [INFO][5464] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.5.64/26 handle="k8s-pod-network.6108fb9a4a8ce66a09b169cf62637977364195790b7db64d4801178a670371a1" host="ci-4515.1.0-a-5ae2bb3665" Dec 16 13:06:07.919529 containerd[2540]: 2025-12-16 13:06:07.858 [INFO][5464] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.6108fb9a4a8ce66a09b169cf62637977364195790b7db64d4801178a670371a1 Dec 16 13:06:07.919529 containerd[2540]: 2025-12-16 13:06:07.865 [INFO][5464] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.5.64/26 handle="k8s-pod-network.6108fb9a4a8ce66a09b169cf62637977364195790b7db64d4801178a670371a1" host="ci-4515.1.0-a-5ae2bb3665" Dec 16 13:06:07.919529 containerd[2540]: 2025-12-16 13:06:07.875 [INFO][5464] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.5.66/26] block=192.168.5.64/26 handle="k8s-pod-network.6108fb9a4a8ce66a09b169cf62637977364195790b7db64d4801178a670371a1" host="ci-4515.1.0-a-5ae2bb3665" Dec 16 13:06:07.919529 containerd[2540]: 2025-12-16 13:06:07.875 [INFO][5464] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.5.66/26] handle="k8s-pod-network.6108fb9a4a8ce66a09b169cf62637977364195790b7db64d4801178a670371a1" host="ci-4515.1.0-a-5ae2bb3665" Dec 16 13:06:07.919529 containerd[2540]: 2025-12-16 13:06:07.875 [INFO][5464] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 16 13:06:07.919529 containerd[2540]: 2025-12-16 13:06:07.875 [INFO][5464] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.5.66/26] IPv6=[] ContainerID="6108fb9a4a8ce66a09b169cf62637977364195790b7db64d4801178a670371a1" HandleID="k8s-pod-network.6108fb9a4a8ce66a09b169cf62637977364195790b7db64d4801178a670371a1" Workload="ci--4515.1.0--a--5ae2bb3665-k8s-calico--apiserver--7798f6444b--p9dhf-eth0" Dec 16 13:06:07.919743 containerd[2540]: 2025-12-16 13:06:07.878 [INFO][5439] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6108fb9a4a8ce66a09b169cf62637977364195790b7db64d4801178a670371a1" Namespace="calico-apiserver" Pod="calico-apiserver-7798f6444b-p9dhf" WorkloadEndpoint="ci--4515.1.0--a--5ae2bb3665-k8s-calico--apiserver--7798f6444b--p9dhf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--a--5ae2bb3665-k8s-calico--apiserver--7798f6444b--p9dhf-eth0", GenerateName:"calico-apiserver-7798f6444b-", Namespace:"calico-apiserver", SelfLink:"", UID:"d35c67aa-255b-42a2-83b2-79e30256e265", ResourceVersion:"862", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 5, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7798f6444b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-a-5ae2bb3665", ContainerID:"", Pod:"calico-apiserver-7798f6444b-p9dhf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.5.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie1bd2f6d408", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:06:07.919839 containerd[2540]: 2025-12-16 13:06:07.878 [INFO][5439] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.5.66/32] ContainerID="6108fb9a4a8ce66a09b169cf62637977364195790b7db64d4801178a670371a1" Namespace="calico-apiserver" Pod="calico-apiserver-7798f6444b-p9dhf" WorkloadEndpoint="ci--4515.1.0--a--5ae2bb3665-k8s-calico--apiserver--7798f6444b--p9dhf-eth0" Dec 16 13:06:07.919839 containerd[2540]: 2025-12-16 13:06:07.878 [INFO][5439] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie1bd2f6d408 ContainerID="6108fb9a4a8ce66a09b169cf62637977364195790b7db64d4801178a670371a1" Namespace="calico-apiserver" Pod="calico-apiserver-7798f6444b-p9dhf" WorkloadEndpoint="ci--4515.1.0--a--5ae2bb3665-k8s-calico--apiserver--7798f6444b--p9dhf-eth0" Dec 16 13:06:07.919839 containerd[2540]: 2025-12-16 13:06:07.894 [INFO][5439] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6108fb9a4a8ce66a09b169cf62637977364195790b7db64d4801178a670371a1" Namespace="calico-apiserver" Pod="calico-apiserver-7798f6444b-p9dhf" WorkloadEndpoint="ci--4515.1.0--a--5ae2bb3665-k8s-calico--apiserver--7798f6444b--p9dhf-eth0" Dec 16 13:06:07.919931 containerd[2540]: 2025-12-16 13:06:07.894 [INFO][5439] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6108fb9a4a8ce66a09b169cf62637977364195790b7db64d4801178a670371a1" Namespace="calico-apiserver" Pod="calico-apiserver-7798f6444b-p9dhf" WorkloadEndpoint="ci--4515.1.0--a--5ae2bb3665-k8s-calico--apiserver--7798f6444b--p9dhf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--a--5ae2bb3665-k8s-calico--apiserver--7798f6444b--p9dhf-eth0", GenerateName:"calico-apiserver-7798f6444b-", Namespace:"calico-apiserver", SelfLink:"", UID:"d35c67aa-255b-42a2-83b2-79e30256e265", ResourceVersion:"862", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 5, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7798f6444b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-a-5ae2bb3665", ContainerID:"6108fb9a4a8ce66a09b169cf62637977364195790b7db64d4801178a670371a1", Pod:"calico-apiserver-7798f6444b-p9dhf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.5.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie1bd2f6d408", MAC:"5e:06:af:41:2f:10", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:06:07.920004 containerd[2540]: 2025-12-16 13:06:07.915 [INFO][5439] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6108fb9a4a8ce66a09b169cf62637977364195790b7db64d4801178a670371a1" Namespace="calico-apiserver" Pod="calico-apiserver-7798f6444b-p9dhf" WorkloadEndpoint="ci--4515.1.0--a--5ae2bb3665-k8s-calico--apiserver--7798f6444b--p9dhf-eth0" Dec 16 13:06:07.930000 audit[5508]: NETFILTER_CFG table=filter:126 family=2 entries=50 op=nft_register_chain pid=5508 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 13:06:07.933599 kernel: kauditd_printk_skb: 231 callbacks suppressed Dec 16 13:06:07.933762 kernel: audit: type=1325 audit(1765890367.930:693): table=filter:126 family=2 entries=50 op=nft_register_chain pid=5508 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 13:06:07.938308 kernel: audit: type=1300 audit(1765890367.930:693): arch=c000003e syscall=46 success=yes exit=28208 a0=3 a1=7ffe188b5d90 a2=0 a3=7ffe188b5d7c items=0 ppid=5252 pid=5508 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:06:07.930000 audit[5508]: SYSCALL arch=c000003e syscall=46 success=yes exit=28208 a0=3 a1=7ffe188b5d90 a2=0 a3=7ffe188b5d7c items=0 ppid=5252 pid=5508 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:06:07.945561 kernel: 
audit: type=1327 audit(1765890367.930:693): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 13:06:07.930000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 13:06:07.989591 systemd-networkd[2150]: calib6435d050b0: Link UP Dec 16 13:06:07.989763 systemd-networkd[2150]: calib6435d050b0: Gained carrier Dec 16 13:06:08.000651 containerd[2540]: time="2025-12-16T13:06:08.000034907Z" level=info msg="connecting to shim 6108fb9a4a8ce66a09b169cf62637977364195790b7db64d4801178a670371a1" address="unix:///run/containerd/s/6a88ca6219eb0eaea4eb0b5385c65285c8b48037d28b0c15df25ca81829e1128" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:06:08.019140 containerd[2540]: 2025-12-16 13:06:07.849 [INFO][5462] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4515.1.0--a--5ae2bb3665-k8s-coredns--66bc5c9577--22fpt-eth0 coredns-66bc5c9577- kube-system c46c1cd1-ee4e-4f44-a196-4c7989633db4 857 0 2025-12-16 13:05:27 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4515.1.0-a-5ae2bb3665 coredns-66bc5c9577-22fpt eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calib6435d050b0 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="c2da12ad4fb4e4a53c5817f4ae4b90dd9efbc455bf8948235f91e650957ace60" Namespace="kube-system" Pod="coredns-66bc5c9577-22fpt" WorkloadEndpoint="ci--4515.1.0--a--5ae2bb3665-k8s-coredns--66bc5c9577--22fpt-" Dec 16 13:06:08.019140 containerd[2540]: 2025-12-16 13:06:07.849 [INFO][5462] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c2da12ad4fb4e4a53c5817f4ae4b90dd9efbc455bf8948235f91e650957ace60" Namespace="kube-system" Pod="coredns-66bc5c9577-22fpt" WorkloadEndpoint="ci--4515.1.0--a--5ae2bb3665-k8s-coredns--66bc5c9577--22fpt-eth0" Dec 16 13:06:08.019140 containerd[2540]: 2025-12-16 13:06:07.897 [INFO][5490] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c2da12ad4fb4e4a53c5817f4ae4b90dd9efbc455bf8948235f91e650957ace60" HandleID="k8s-pod-network.c2da12ad4fb4e4a53c5817f4ae4b90dd9efbc455bf8948235f91e650957ace60" Workload="ci--4515.1.0--a--5ae2bb3665-k8s-coredns--66bc5c9577--22fpt-eth0" Dec 16 13:06:08.019374 containerd[2540]: 2025-12-16 13:06:07.900 [INFO][5490] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="c2da12ad4fb4e4a53c5817f4ae4b90dd9efbc455bf8948235f91e650957ace60" HandleID="k8s-pod-network.c2da12ad4fb4e4a53c5817f4ae4b90dd9efbc455bf8948235f91e650957ace60" Workload="ci--4515.1.0--a--5ae2bb3665-k8s-coredns--66bc5c9577--22fpt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f860), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4515.1.0-a-5ae2bb3665", "pod":"coredns-66bc5c9577-22fpt", "timestamp":"2025-12-16 13:06:07.8979683 +0000 UTC"}, Hostname:"ci-4515.1.0-a-5ae2bb3665", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 13:06:08.019374 containerd[2540]: 2025-12-16 13:06:07.901 [INFO][5490] 
ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 13:06:08.019374 containerd[2540]: 2025-12-16 13:06:07.902 [INFO][5490] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 13:06:08.019374 containerd[2540]: 2025-12-16 13:06:07.902 [INFO][5490] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4515.1.0-a-5ae2bb3665' Dec 16 13:06:08.019374 containerd[2540]: 2025-12-16 13:06:07.941 [INFO][5490] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c2da12ad4fb4e4a53c5817f4ae4b90dd9efbc455bf8948235f91e650957ace60" host="ci-4515.1.0-a-5ae2bb3665" Dec 16 13:06:08.019374 containerd[2540]: 2025-12-16 13:06:07.946 [INFO][5490] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4515.1.0-a-5ae2bb3665" Dec 16 13:06:08.019374 containerd[2540]: 2025-12-16 13:06:07.951 [INFO][5490] ipam/ipam.go 511: Trying affinity for 192.168.5.64/26 host="ci-4515.1.0-a-5ae2bb3665" Dec 16 13:06:08.019374 containerd[2540]: 2025-12-16 13:06:07.952 [INFO][5490] ipam/ipam.go 158: Attempting to load block cidr=192.168.5.64/26 host="ci-4515.1.0-a-5ae2bb3665" Dec 16 13:06:08.019374 containerd[2540]: 2025-12-16 13:06:07.954 [INFO][5490] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.5.64/26 host="ci-4515.1.0-a-5ae2bb3665" Dec 16 13:06:08.019745 containerd[2540]: 2025-12-16 13:06:07.955 [INFO][5490] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.5.64/26 handle="k8s-pod-network.c2da12ad4fb4e4a53c5817f4ae4b90dd9efbc455bf8948235f91e650957ace60" host="ci-4515.1.0-a-5ae2bb3665" Dec 16 13:06:08.019745 containerd[2540]: 2025-12-16 13:06:07.956 [INFO][5490] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.c2da12ad4fb4e4a53c5817f4ae4b90dd9efbc455bf8948235f91e650957ace60 Dec 16 13:06:08.019745 containerd[2540]: 2025-12-16 13:06:07.966 [INFO][5490] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.5.64/26 handle="k8s-pod-network.c2da12ad4fb4e4a53c5817f4ae4b90dd9efbc455bf8948235f91e650957ace60" host="ci-4515.1.0-a-5ae2bb3665" Dec 16 13:06:08.019745 containerd[2540]: 2025-12-16 13:06:07.978 [INFO][5490] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.5.67/26] block=192.168.5.64/26 handle="k8s-pod-network.c2da12ad4fb4e4a53c5817f4ae4b90dd9efbc455bf8948235f91e650957ace60" host="ci-4515.1.0-a-5ae2bb3665" Dec 16 13:06:08.019745 containerd[2540]: 2025-12-16 13:06:07.978 [INFO][5490] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.5.67/26] handle="k8s-pod-network.c2da12ad4fb4e4a53c5817f4ae4b90dd9efbc455bf8948235f91e650957ace60" host="ci-4515.1.0-a-5ae2bb3665" Dec 16 13:06:08.019745 containerd[2540]: 2025-12-16 13:06:07.979 [INFO][5490] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
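The IPAM trace above shows the full block-affinity sequence for this node: take the host-wide IPAM lock, look up the node's affine blocks, settle on 192.168.5.64/26, load that block, claim one free address (192.168.5.67 here), write the block back, and release the lock. The sketch below mirrors that sequence in a simplified form; it is not Calico's implementation, and the types and helper names are stand-ins.

```go
// Simplified illustration of the block-affinity assignment sequence logged
// above (look up affinities, load block, assign from block, write block,
// release lock). NOT Calico's code; everything here is a hypothetical stand-in.
package main

import (
	"fmt"
	"net"
	"sync"
)

// block models one affine IPAM block (e.g. 192.168.5.64/26) owned by a host.
type block struct {
	cidr      *net.IPNet
	allocated map[string]bool // IP -> already claimed
}

var (
	hostLock sync.Mutex // stands in for the "host-wide IPAM lock" in the log
	blocks   = map[string]*block{}
)

func assignFromAffineBlock(host, cidr string) (net.IP, error) {
	hostLock.Lock()         // "Acquired host-wide IPAM lock."
	defer hostLock.Unlock() // "Released host-wide IPAM lock."

	b, ok := blocks[cidr] // "Attempting to load block cidr=..."
	if !ok {
		_, ipnet, err := net.ParseCIDR(cidr)
		if err != nil {
			return nil, err
		}
		b = &block{cidr: ipnet, allocated: map[string]bool{}}
		blocks[cidr] = b
	}
	// "Attempting to assign 1 addresses from block": walk the block and take
	// the first free address. The real allocator also persists the block,
	// cf. "Writing block in order to claim IPs".
	for ip := b.cidr.IP.Mask(b.cidr.Mask); b.cidr.Contains(ip); ip = nextIP(ip) {
		if !b.allocated[ip.String()] {
			b.allocated[ip.String()] = true
			return ip, nil
		}
	}
	return nil, fmt.Errorf("block %s exhausted on host %s", cidr, host)
}

func nextIP(ip net.IP) net.IP {
	out := make(net.IP, len(ip))
	copy(out, ip)
	for i := len(out) - 1; i >= 0; i-- {
		out[i]++
		if out[i] != 0 {
			break
		}
	}
	return out
}

func main() {
	for i := 0; i < 3; i++ {
		ip, _ := assignFromAffineBlock("ci-4515.1.0-a-5ae2bb3665", "192.168.5.64/26")
		fmt.Println(ip) // this toy version hands out .64, .65, .66 in order
	}
}
```

A real allocator persists the block to the datastore and skips addresses that are already claimed or reserved, which is why the first assignment in this section lands on 192.168.5.66 rather than at the start of the block.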
Dec 16 13:06:08.019745 containerd[2540]: 2025-12-16 13:06:07.979 [INFO][5490] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.5.67/26] IPv6=[] ContainerID="c2da12ad4fb4e4a53c5817f4ae4b90dd9efbc455bf8948235f91e650957ace60" HandleID="k8s-pod-network.c2da12ad4fb4e4a53c5817f4ae4b90dd9efbc455bf8948235f91e650957ace60" Workload="ci--4515.1.0--a--5ae2bb3665-k8s-coredns--66bc5c9577--22fpt-eth0" Dec 16 13:06:08.019894 containerd[2540]: 2025-12-16 13:06:07.983 [INFO][5462] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c2da12ad4fb4e4a53c5817f4ae4b90dd9efbc455bf8948235f91e650957ace60" Namespace="kube-system" Pod="coredns-66bc5c9577-22fpt" WorkloadEndpoint="ci--4515.1.0--a--5ae2bb3665-k8s-coredns--66bc5c9577--22fpt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--a--5ae2bb3665-k8s-coredns--66bc5c9577--22fpt-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"c46c1cd1-ee4e-4f44-a196-4c7989633db4", ResourceVersion:"857", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 5, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-a-5ae2bb3665", ContainerID:"", Pod:"coredns-66bc5c9577-22fpt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.5.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib6435d050b0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:06:08.019894 containerd[2540]: 2025-12-16 13:06:07.983 [INFO][5462] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.5.67/32] ContainerID="c2da12ad4fb4e4a53c5817f4ae4b90dd9efbc455bf8948235f91e650957ace60" Namespace="kube-system" Pod="coredns-66bc5c9577-22fpt" WorkloadEndpoint="ci--4515.1.0--a--5ae2bb3665-k8s-coredns--66bc5c9577--22fpt-eth0" Dec 16 13:06:08.019894 containerd[2540]: 2025-12-16 13:06:07.983 [INFO][5462] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib6435d050b0 ContainerID="c2da12ad4fb4e4a53c5817f4ae4b90dd9efbc455bf8948235f91e650957ace60" Namespace="kube-system" Pod="coredns-66bc5c9577-22fpt" 
WorkloadEndpoint="ci--4515.1.0--a--5ae2bb3665-k8s-coredns--66bc5c9577--22fpt-eth0" Dec 16 13:06:08.019894 containerd[2540]: 2025-12-16 13:06:07.992 [INFO][5462] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c2da12ad4fb4e4a53c5817f4ae4b90dd9efbc455bf8948235f91e650957ace60" Namespace="kube-system" Pod="coredns-66bc5c9577-22fpt" WorkloadEndpoint="ci--4515.1.0--a--5ae2bb3665-k8s-coredns--66bc5c9577--22fpt-eth0" Dec 16 13:06:08.019894 containerd[2540]: 2025-12-16 13:06:07.993 [INFO][5462] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c2da12ad4fb4e4a53c5817f4ae4b90dd9efbc455bf8948235f91e650957ace60" Namespace="kube-system" Pod="coredns-66bc5c9577-22fpt" WorkloadEndpoint="ci--4515.1.0--a--5ae2bb3665-k8s-coredns--66bc5c9577--22fpt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--a--5ae2bb3665-k8s-coredns--66bc5c9577--22fpt-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"c46c1cd1-ee4e-4f44-a196-4c7989633db4", ResourceVersion:"857", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 5, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-a-5ae2bb3665", ContainerID:"c2da12ad4fb4e4a53c5817f4ae4b90dd9efbc455bf8948235f91e650957ace60", Pod:"coredns-66bc5c9577-22fpt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.5.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib6435d050b0", MAC:"da:9c:c5:79:7a:58", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:06:08.020104 containerd[2540]: 2025-12-16 13:06:08.016 [INFO][5462] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c2da12ad4fb4e4a53c5817f4ae4b90dd9efbc455bf8948235f91e650957ace60" Namespace="kube-system" Pod="coredns-66bc5c9577-22fpt" WorkloadEndpoint="ci--4515.1.0--a--5ae2bb3665-k8s-coredns--66bc5c9577--22fpt-eth0" Dec 16 13:06:08.033000 audit[5550]: NETFILTER_CFG table=filter:127 family=2 entries=46 op=nft_register_chain pid=5550 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 13:06:08.038944 
kernel: audit: type=1325 audit(1765890368.033:694): table=filter:127 family=2 entries=46 op=nft_register_chain pid=5550 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 13:06:08.040265 kernel: audit: type=1300 audit(1765890368.033:694): arch=c000003e syscall=46 success=yes exit=23740 a0=3 a1=7ffe30ae1220 a2=0 a3=7ffe30ae120c items=0 ppid=5252 pid=5550 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:06:08.033000 audit[5550]: SYSCALL arch=c000003e syscall=46 success=yes exit=23740 a0=3 a1=7ffe30ae1220 a2=0 a3=7ffe30ae120c items=0 ppid=5252 pid=5550 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:06:08.037559 systemd[1]: Started cri-containerd-6108fb9a4a8ce66a09b169cf62637977364195790b7db64d4801178a670371a1.scope - libcontainer container 6108fb9a4a8ce66a09b169cf62637977364195790b7db64d4801178a670371a1. Dec 16 13:06:08.033000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 13:06:08.051356 kernel: audit: type=1327 audit(1765890368.033:694): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 13:06:08.061000 audit: BPF prog-id=240 op=LOAD Dec 16 13:06:08.070029 kernel: audit: type=1334 audit(1765890368.061:695): prog-id=240 op=LOAD Dec 16 13:06:08.070070 kernel: audit: type=1334 audit(1765890368.062:696): prog-id=241 op=LOAD Dec 16 13:06:08.070097 kernel: audit: type=1300 audit(1765890368.062:696): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178238 a2=98 a3=0 items=0 ppid=5517 pid=5531 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:06:08.062000 audit: BPF prog-id=241 op=LOAD Dec 16 13:06:08.062000 audit[5531]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178238 a2=98 a3=0 items=0 ppid=5517 pid=5531 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:06:08.075102 kernel: audit: type=1327 audit(1765890368.062:696): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631303866623961346138636536366130396231363963663632363337 Dec 16 13:06:08.062000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631303866623961346138636536366130396231363963663632363337 Dec 16 13:06:08.062000 audit: BPF prog-id=241 op=UNLOAD Dec 16 13:06:08.062000 audit[5531]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5517 pid=5531 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:06:08.062000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631303866623961346138636536366130396231363963663632363337 Dec 16 13:06:08.063000 audit: BPF prog-id=242 op=LOAD Dec 16 13:06:08.063000 audit[5531]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178488 a2=98 a3=0 items=0 ppid=5517 pid=5531 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:06:08.063000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631303866623961346138636536366130396231363963663632363337 Dec 16 13:06:08.063000 audit: BPF prog-id=243 op=LOAD Dec 16 13:06:08.063000 audit[5531]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000178218 a2=98 a3=0 items=0 ppid=5517 pid=5531 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:06:08.063000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631303866623961346138636536366130396231363963663632363337 Dec 16 13:06:08.063000 audit: BPF prog-id=243 op=UNLOAD Dec 16 13:06:08.063000 audit[5531]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5517 pid=5531 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:06:08.063000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631303866623961346138636536366130396231363963663632363337 Dec 16 13:06:08.063000 audit: BPF prog-id=242 op=UNLOAD Dec 16 13:06:08.063000 audit[5531]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5517 pid=5531 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:06:08.063000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631303866623961346138636536366130396231363963663632363337 Dec 16 13:06:08.063000 audit: BPF prog-id=244 op=LOAD Dec 16 13:06:08.063000 audit[5531]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001786e8 a2=98 a3=0 items=0 ppid=5517 pid=5531 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:06:08.063000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631303866623961346138636536366130396231363963663632363337 Dec 16 13:06:08.100139 containerd[2540]: time="2025-12-16T13:06:08.100001560Z" level=info msg="connecting to shim c2da12ad4fb4e4a53c5817f4ae4b90dd9efbc455bf8948235f91e650957ace60" address="unix:///run/containerd/s/d481610fb78cbe7b68656da8a5c827189bcfe49ed7ac636d58fb56cbb77b11d2" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:06:08.108273 systemd-networkd[2150]: calia27ff377d8d: Link UP Dec 16 13:06:08.109594 systemd-networkd[2150]: calia27ff377d8d: Gained carrier Dec 16 13:06:08.143922 containerd[2540]: time="2025-12-16T13:06:08.143882853Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7798f6444b-p9dhf,Uid:d35c67aa-255b-42a2-83b2-79e30256e265,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"6108fb9a4a8ce66a09b169cf62637977364195790b7db64d4801178a670371a1\"" Dec 16 13:06:08.146923 containerd[2540]: 2025-12-16 13:06:07.849 [INFO][5453] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4515.1.0--a--5ae2bb3665-k8s-coredns--66bc5c9577--wcczd-eth0 coredns-66bc5c9577- kube-system e4c0041a-7552-4196-b88d-cb0c0c25e0f7 859 0 2025-12-16 13:05:27 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4515.1.0-a-5ae2bb3665 coredns-66bc5c9577-wcczd eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calia27ff377d8d [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="28819828080b0e116bd36592cf3863d1d623f0a7d771e8b53f44ce48d947704b" Namespace="kube-system" Pod="coredns-66bc5c9577-wcczd" WorkloadEndpoint="ci--4515.1.0--a--5ae2bb3665-k8s-coredns--66bc5c9577--wcczd-" Dec 16 13:06:08.146923 containerd[2540]: 2025-12-16 13:06:07.849 [INFO][5453] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="28819828080b0e116bd36592cf3863d1d623f0a7d771e8b53f44ce48d947704b" Namespace="kube-system" Pod="coredns-66bc5c9577-wcczd" WorkloadEndpoint="ci--4515.1.0--a--5ae2bb3665-k8s-coredns--66bc5c9577--wcczd-eth0" Dec 16 13:06:08.146923 containerd[2540]: 2025-12-16 13:06:07.917 [INFO][5485] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="28819828080b0e116bd36592cf3863d1d623f0a7d771e8b53f44ce48d947704b" HandleID="k8s-pod-network.28819828080b0e116bd36592cf3863d1d623f0a7d771e8b53f44ce48d947704b" Workload="ci--4515.1.0--a--5ae2bb3665-k8s-coredns--66bc5c9577--wcczd-eth0" Dec 16 13:06:08.146923 containerd[2540]: 2025-12-16 13:06:07.917 [INFO][5485] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="28819828080b0e116bd36592cf3863d1d623f0a7d771e8b53f44ce48d947704b" HandleID="k8s-pod-network.28819828080b0e116bd36592cf3863d1d623f0a7d771e8b53f44ce48d947704b" Workload="ci--4515.1.0--a--5ae2bb3665-k8s-coredns--66bc5c9577--wcczd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f7f0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4515.1.0-a-5ae2bb3665", "pod":"coredns-66bc5c9577-wcczd", "timestamp":"2025-12-16 13:06:07.91766874 +0000 UTC"}, Hostname:"ci-4515.1.0-a-5ae2bb3665", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, 
MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 13:06:08.146923 containerd[2540]: 2025-12-16 13:06:07.917 [INFO][5485] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 13:06:08.146923 containerd[2540]: 2025-12-16 13:06:07.978 [INFO][5485] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 13:06:08.146923 containerd[2540]: 2025-12-16 13:06:07.978 [INFO][5485] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4515.1.0-a-5ae2bb3665' Dec 16 13:06:08.146923 containerd[2540]: 2025-12-16 13:06:08.044 [INFO][5485] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.28819828080b0e116bd36592cf3863d1d623f0a7d771e8b53f44ce48d947704b" host="ci-4515.1.0-a-5ae2bb3665" Dec 16 13:06:08.146923 containerd[2540]: 2025-12-16 13:06:08.055 [INFO][5485] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4515.1.0-a-5ae2bb3665" Dec 16 13:06:08.146923 containerd[2540]: 2025-12-16 13:06:08.059 [INFO][5485] ipam/ipam.go 511: Trying affinity for 192.168.5.64/26 host="ci-4515.1.0-a-5ae2bb3665" Dec 16 13:06:08.146923 containerd[2540]: 2025-12-16 13:06:08.061 [INFO][5485] ipam/ipam.go 158: Attempting to load block cidr=192.168.5.64/26 host="ci-4515.1.0-a-5ae2bb3665" Dec 16 13:06:08.146923 containerd[2540]: 2025-12-16 13:06:08.069 [INFO][5485] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.5.64/26 host="ci-4515.1.0-a-5ae2bb3665" Dec 16 13:06:08.146923 containerd[2540]: 2025-12-16 13:06:08.069 [INFO][5485] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.5.64/26 handle="k8s-pod-network.28819828080b0e116bd36592cf3863d1d623f0a7d771e8b53f44ce48d947704b" host="ci-4515.1.0-a-5ae2bb3665" Dec 16 13:06:08.146923 containerd[2540]: 2025-12-16 13:06:08.073 [INFO][5485] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.28819828080b0e116bd36592cf3863d1d623f0a7d771e8b53f44ce48d947704b Dec 16 13:06:08.146923 containerd[2540]: 2025-12-16 13:06:08.080 [INFO][5485] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.5.64/26 handle="k8s-pod-network.28819828080b0e116bd36592cf3863d1d623f0a7d771e8b53f44ce48d947704b" host="ci-4515.1.0-a-5ae2bb3665" Dec 16 13:06:08.146923 containerd[2540]: 2025-12-16 13:06:08.091 [INFO][5485] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.5.68/26] block=192.168.5.64/26 handle="k8s-pod-network.28819828080b0e116bd36592cf3863d1d623f0a7d771e8b53f44ce48d947704b" host="ci-4515.1.0-a-5ae2bb3665" Dec 16 13:06:08.146923 containerd[2540]: 2025-12-16 13:06:08.091 [INFO][5485] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.5.68/26] handle="k8s-pod-network.28819828080b0e116bd36592cf3863d1d623f0a7d771e8b53f44ce48d947704b" host="ci-4515.1.0-a-5ae2bb3665" Dec 16 13:06:08.146923 containerd[2540]: 2025-12-16 13:06:08.091 [INFO][5485] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
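Both CoreDNS pods draw from the same affine /26: 192.168.5.67 for coredns-66bc5c9577-22fpt above and 192.168.5.68 for coredns-66bc5c9577-wcczd here, alongside 192.168.5.66 for the calico-apiserver pod. A quick check that all of these sit inside 192.168.5.64/26 (64 addresses, .64 through .127):

```go
package main

import (
	"fmt"
	"net"
)

func main() {
	// The affine block logged above; a /26 spans 192.168.5.64 - 192.168.5.127.
	_, block, _ := net.ParseCIDR("192.168.5.64/26")

	// Pod IPs handed out so far in this section of the log.
	for _, s := range []string{"192.168.5.66", "192.168.5.67", "192.168.5.68"} {
		fmt.Printf("%s in %s: %v\n", s, block, block.Contains(net.ParseIP(s)))
	}
	ones, bits := block.Mask.Size()
	fmt.Printf("block size: %d addresses\n", 1<<(bits-ones)) // 64
}
```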
Dec 16 13:06:08.146923 containerd[2540]: 2025-12-16 13:06:08.091 [INFO][5485] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.5.68/26] IPv6=[] ContainerID="28819828080b0e116bd36592cf3863d1d623f0a7d771e8b53f44ce48d947704b" HandleID="k8s-pod-network.28819828080b0e116bd36592cf3863d1d623f0a7d771e8b53f44ce48d947704b" Workload="ci--4515.1.0--a--5ae2bb3665-k8s-coredns--66bc5c9577--wcczd-eth0" Dec 16 13:06:08.148100 containerd[2540]: 2025-12-16 13:06:08.095 [INFO][5453] cni-plugin/k8s.go 418: Populated endpoint ContainerID="28819828080b0e116bd36592cf3863d1d623f0a7d771e8b53f44ce48d947704b" Namespace="kube-system" Pod="coredns-66bc5c9577-wcczd" WorkloadEndpoint="ci--4515.1.0--a--5ae2bb3665-k8s-coredns--66bc5c9577--wcczd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--a--5ae2bb3665-k8s-coredns--66bc5c9577--wcczd-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"e4c0041a-7552-4196-b88d-cb0c0c25e0f7", ResourceVersion:"859", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 5, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-a-5ae2bb3665", ContainerID:"", Pod:"coredns-66bc5c9577-wcczd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.5.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia27ff377d8d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:06:08.148100 containerd[2540]: 2025-12-16 13:06:08.098 [INFO][5453] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.5.68/32] ContainerID="28819828080b0e116bd36592cf3863d1d623f0a7d771e8b53f44ce48d947704b" Namespace="kube-system" Pod="coredns-66bc5c9577-wcczd" WorkloadEndpoint="ci--4515.1.0--a--5ae2bb3665-k8s-coredns--66bc5c9577--wcczd-eth0" Dec 16 13:06:08.148100 containerd[2540]: 2025-12-16 13:06:08.098 [INFO][5453] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia27ff377d8d ContainerID="28819828080b0e116bd36592cf3863d1d623f0a7d771e8b53f44ce48d947704b" Namespace="kube-system" Pod="coredns-66bc5c9577-wcczd" 
WorkloadEndpoint="ci--4515.1.0--a--5ae2bb3665-k8s-coredns--66bc5c9577--wcczd-eth0" Dec 16 13:06:08.148100 containerd[2540]: 2025-12-16 13:06:08.122 [INFO][5453] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="28819828080b0e116bd36592cf3863d1d623f0a7d771e8b53f44ce48d947704b" Namespace="kube-system" Pod="coredns-66bc5c9577-wcczd" WorkloadEndpoint="ci--4515.1.0--a--5ae2bb3665-k8s-coredns--66bc5c9577--wcczd-eth0" Dec 16 13:06:08.148100 containerd[2540]: 2025-12-16 13:06:08.124 [INFO][5453] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="28819828080b0e116bd36592cf3863d1d623f0a7d771e8b53f44ce48d947704b" Namespace="kube-system" Pod="coredns-66bc5c9577-wcczd" WorkloadEndpoint="ci--4515.1.0--a--5ae2bb3665-k8s-coredns--66bc5c9577--wcczd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--a--5ae2bb3665-k8s-coredns--66bc5c9577--wcczd-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"e4c0041a-7552-4196-b88d-cb0c0c25e0f7", ResourceVersion:"859", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 5, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-a-5ae2bb3665", ContainerID:"28819828080b0e116bd36592cf3863d1d623f0a7d771e8b53f44ce48d947704b", Pod:"coredns-66bc5c9577-wcczd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.5.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia27ff377d8d", MAC:"ea:a7:f8:9e:12:79", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:06:08.148292 containerd[2540]: 2025-12-16 13:06:08.143 [INFO][5453] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="28819828080b0e116bd36592cf3863d1d623f0a7d771e8b53f44ce48d947704b" Namespace="kube-system" Pod="coredns-66bc5c9577-wcczd" WorkloadEndpoint="ci--4515.1.0--a--5ae2bb3665-k8s-coredns--66bc5c9577--wcczd-eth0" Dec 16 13:06:08.153825 containerd[2540]: time="2025-12-16T13:06:08.153793062Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 13:06:08.161651 systemd[1]: Started 
cri-containerd-c2da12ad4fb4e4a53c5817f4ae4b90dd9efbc455bf8948235f91e650957ace60.scope - libcontainer container c2da12ad4fb4e4a53c5817f4ae4b90dd9efbc455bf8948235f91e650957ace60. Dec 16 13:06:08.171000 audit[5610]: NETFILTER_CFG table=filter:128 family=2 entries=46 op=nft_register_chain pid=5610 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 13:06:08.171000 audit[5610]: SYSCALL arch=c000003e syscall=46 success=yes exit=23196 a0=3 a1=7ffcab8246b0 a2=0 a3=7ffcab82469c items=0 ppid=5252 pid=5610 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:06:08.171000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 13:06:08.176000 audit: BPF prog-id=245 op=LOAD Dec 16 13:06:08.176000 audit: BPF prog-id=246 op=LOAD Dec 16 13:06:08.176000 audit[5587]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=5567 pid=5587 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:06:08.176000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6332646131326164346662346534613533633538313766346165346239 Dec 16 13:06:08.177000 audit: BPF prog-id=246 op=UNLOAD Dec 16 13:06:08.177000 audit[5587]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=5567 pid=5587 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:06:08.177000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6332646131326164346662346534613533633538313766346165346239 Dec 16 13:06:08.178000 audit: BPF prog-id=247 op=LOAD Dec 16 13:06:08.178000 audit[5587]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=5567 pid=5587 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:06:08.178000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6332646131326164346662346534613533633538313766346165346239 Dec 16 13:06:08.178000 audit: BPF prog-id=248 op=LOAD Dec 16 13:06:08.178000 audit[5587]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=5567 pid=5587 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:06:08.178000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6332646131326164346662346534613533633538313766346165346239 Dec 16 13:06:08.178000 audit: BPF prog-id=248 op=UNLOAD Dec 16 13:06:08.178000 audit[5587]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=5567 pid=5587 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:06:08.178000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6332646131326164346662346534613533633538313766346165346239 Dec 16 13:06:08.178000 audit: BPF prog-id=247 op=UNLOAD Dec 16 13:06:08.178000 audit[5587]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=5567 pid=5587 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:06:08.178000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6332646131326164346662346534613533633538313766346165346239 Dec 16 13:06:08.178000 audit: BPF prog-id=249 op=LOAD Dec 16 13:06:08.178000 audit[5587]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=5567 pid=5587 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:06:08.178000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6332646131326164346662346534613533633538313766346165346239 Dec 16 13:06:08.232753 containerd[2540]: time="2025-12-16T13:06:08.232710915Z" level=info msg="connecting to shim 28819828080b0e116bd36592cf3863d1d623f0a7d771e8b53f44ce48d947704b" address="unix:///run/containerd/s/7ac0df8587df7957f8fef71664b9f63d755ee80673ec42afdc78e77f0d736ccd" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:06:08.236281 containerd[2540]: time="2025-12-16T13:06:08.236253878Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-22fpt,Uid:c46c1cd1-ee4e-4f44-a196-4c7989633db4,Namespace:kube-system,Attempt:0,} returns sandbox id \"c2da12ad4fb4e4a53c5817f4ae4b90dd9efbc455bf8948235f91e650957ace60\"" Dec 16 13:06:08.249049 containerd[2540]: time="2025-12-16T13:06:08.248887659Z" level=info msg="CreateContainer within sandbox \"c2da12ad4fb4e4a53c5817f4ae4b90dd9efbc455bf8948235f91e650957ace60\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 16 13:06:08.255577 systemd[1]: Started cri-containerd-28819828080b0e116bd36592cf3863d1d623f0a7d771e8b53f44ce48d947704b.scope - libcontainer container 28819828080b0e116bd36592cf3863d1d623f0a7d771e8b53f44ce48d947704b. 
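The audit records in this stretch encode the triggering command line as a hex string of NUL-separated argv elements in the proctitle field. Decoding the value that appears repeatedly above (a minimal sketch; the constant is copied verbatim from the log) recovers the iptables-nft-restore invocation behind the NETFILTER_CFG entries:

```go
package main

import (
	"encoding/hex"
	"fmt"
	"strings"
)

func main() {
	// proctitle value copied verbatim from one of the audit records above.
	const proctitle = "69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030"

	raw, err := hex.DecodeString(proctitle)
	if err != nil {
		panic(err)
	}
	// argv elements are NUL-separated, as in /proc/<pid>/cmdline.
	argv := strings.Split(strings.TrimRight(string(raw), "\x00"), "\x00")
	fmt.Println(strings.Join(argv, " "))
	// Output: iptables-nft-restore --noflush --verbose --wait 10 --wait-interval 50000
}
```

The runc proctitle values decode the same way (runc --root /run/containerd/runc/k8s.io --log /run/containerd/io.containerd.runtime.v2.task/k8s.io/...), though the log shows them cut off partway through the container ID.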
Dec 16 13:06:08.265000 audit: BPF prog-id=250 op=LOAD Dec 16 13:06:08.266000 audit: BPF prog-id=251 op=LOAD Dec 16 13:06:08.266000 audit[5643]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=5632 pid=5643 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:06:08.266000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3238383139383238303830623065313136626433363539326366333836 Dec 16 13:06:08.266000 audit: BPF prog-id=251 op=UNLOAD Dec 16 13:06:08.266000 audit[5643]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5632 pid=5643 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:06:08.266000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3238383139383238303830623065313136626433363539326366333836 Dec 16 13:06:08.266000 audit: BPF prog-id=252 op=LOAD Dec 16 13:06:08.266000 audit[5643]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=5632 pid=5643 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:06:08.266000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3238383139383238303830623065313136626433363539326366333836 Dec 16 13:06:08.266000 audit: BPF prog-id=253 op=LOAD Dec 16 13:06:08.266000 audit[5643]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=5632 pid=5643 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:06:08.266000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3238383139383238303830623065313136626433363539326366333836 Dec 16 13:06:08.266000 audit: BPF prog-id=253 op=UNLOAD Dec 16 13:06:08.266000 audit[5643]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5632 pid=5643 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:06:08.266000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3238383139383238303830623065313136626433363539326366333836 Dec 16 13:06:08.266000 audit: BPF prog-id=252 op=UNLOAD Dec 16 13:06:08.266000 audit[5643]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5632 pid=5643 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:06:08.266000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3238383139383238303830623065313136626433363539326366333836 Dec 16 13:06:08.266000 audit: BPF prog-id=254 op=LOAD Dec 16 13:06:08.266000 audit[5643]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=5632 pid=5643 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:06:08.266000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3238383139383238303830623065313136626433363539326366333836 Dec 16 13:06:08.283238 containerd[2540]: time="2025-12-16T13:06:08.283212140Z" level=info msg="Container b19e6787e1672cbc31330ac3d7a98e5de682e90bf971776f85d5da9b86643a8f: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:06:08.286246 kubelet[4018]: I1216 13:06:08.285528 4018 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 16 13:06:08.317397 containerd[2540]: time="2025-12-16T13:06:08.317368975Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-wcczd,Uid:e4c0041a-7552-4196-b88d-cb0c0c25e0f7,Namespace:kube-system,Attempt:0,} returns sandbox id \"28819828080b0e116bd36592cf3863d1d623f0a7d771e8b53f44ce48d947704b\"" Dec 16 13:06:08.320477 containerd[2540]: time="2025-12-16T13:06:08.320449642Z" level=info msg="CreateContainer within sandbox \"c2da12ad4fb4e4a53c5817f4ae4b90dd9efbc455bf8948235f91e650957ace60\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b19e6787e1672cbc31330ac3d7a98e5de682e90bf971776f85d5da9b86643a8f\"" Dec 16 13:06:08.322903 containerd[2540]: time="2025-12-16T13:06:08.322874480Z" level=info msg="StartContainer for \"b19e6787e1672cbc31330ac3d7a98e5de682e90bf971776f85d5da9b86643a8f\"" Dec 16 13:06:08.327815 containerd[2540]: time="2025-12-16T13:06:08.327573702Z" level=info msg="connecting to shim b19e6787e1672cbc31330ac3d7a98e5de682e90bf971776f85d5da9b86643a8f" address="unix:///run/containerd/s/d481610fb78cbe7b68656da8a5c827189bcfe49ed7ac636d58fb56cbb77b11d2" protocol=ttrpc version=3 Dec 16 13:06:08.330048 containerd[2540]: time="2025-12-16T13:06:08.329873785Z" level=info msg="CreateContainer within sandbox \"28819828080b0e116bd36592cf3863d1d623f0a7d771e8b53f44ce48d947704b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 16 13:06:08.350672 systemd[1]: Started cri-containerd-b19e6787e1672cbc31330ac3d7a98e5de682e90bf971776f85d5da9b86643a8f.scope - libcontainer container b19e6787e1672cbc31330ac3d7a98e5de682e90bf971776f85d5da9b86643a8f. 
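Each "connecting to shim ... protocol=ttrpc version=3" line above records containerd dialing the per-sandbox shim over a Unix socket and then speaking ttrpc on that connection. The sketch below performs only the dial step, using a socket path copied from the log; it will connect only on the node that produced these entries, and it does not implement the ttrpc protocol itself.

```go
package main

import (
	"fmt"
	"net"
	"strings"
	"time"
)

func main() {
	// Shim address copied from a "connecting to shim" line above. This path
	// only exists on the node that wrote the log; elsewhere the dial fails.
	const address = "unix:///run/containerd/s/d481610fb78cbe7b68656da8a5c827189bcfe49ed7ac636d58fb56cbb77b11d2"

	path := strings.TrimPrefix(address, "unix://")
	conn, err := net.DialTimeout("unix", path, 2*time.Second)
	if err != nil {
		fmt.Println("dial failed (expected off-node):", err)
		return
	}
	defer conn.Close()
	// containerd wraps a connection like this in its ttrpc client
	// ("protocol=ttrpc version=3" in the log); raw bytes on the socket are
	// not a usable API on their own.
	fmt.Println("connected to shim socket:", path)
}
```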
Dec 16 13:06:08.361000 audit: BPF prog-id=255 op=LOAD Dec 16 13:06:08.362000 audit: BPF prog-id=256 op=LOAD Dec 16 13:06:08.362000 audit[5693]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178238 a2=98 a3=0 items=0 ppid=5567 pid=5693 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:06:08.362000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6231396536373837653136373263626333313333306163336437613938 Dec 16 13:06:08.362000 audit: BPF prog-id=256 op=UNLOAD Dec 16 13:06:08.362000 audit[5693]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5567 pid=5693 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:06:08.362000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6231396536373837653136373263626333313333306163336437613938 Dec 16 13:06:08.364287 containerd[2540]: time="2025-12-16T13:06:08.363545245Z" level=info msg="Container 841556f050679e2c6655dea3acd18d9d425faa6715e83c1ec77314e90ec7687f: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:06:08.362000 audit: BPF prog-id=257 op=LOAD Dec 16 13:06:08.362000 audit[5693]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178488 a2=98 a3=0 items=0 ppid=5567 pid=5693 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:06:08.362000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6231396536373837653136373263626333313333306163336437613938 Dec 16 13:06:08.362000 audit: BPF prog-id=258 op=LOAD Dec 16 13:06:08.362000 audit[5693]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000178218 a2=98 a3=0 items=0 ppid=5567 pid=5693 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:06:08.362000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6231396536373837653136373263626333313333306163336437613938 Dec 16 13:06:08.362000 audit: BPF prog-id=258 op=UNLOAD Dec 16 13:06:08.362000 audit[5693]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5567 pid=5693 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:06:08.362000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6231396536373837653136373263626333313333306163336437613938 Dec 16 13:06:08.362000 audit: BPF prog-id=257 op=UNLOAD Dec 16 13:06:08.362000 audit[5693]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5567 pid=5693 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:06:08.362000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6231396536373837653136373263626333313333306163336437613938 Dec 16 13:06:08.362000 audit: BPF prog-id=259 op=LOAD Dec 16 13:06:08.362000 audit[5693]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001786e8 a2=98 a3=0 items=0 ppid=5567 pid=5693 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:06:08.362000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6231396536373837653136373263626333313333306163336437613938 Dec 16 13:06:08.402791 containerd[2540]: time="2025-12-16T13:06:08.402710134Z" level=info msg="StartContainer for \"b19e6787e1672cbc31330ac3d7a98e5de682e90bf971776f85d5da9b86643a8f\" returns successfully" Dec 16 13:06:08.408301 containerd[2540]: time="2025-12-16T13:06:08.407660746Z" level=info msg="CreateContainer within sandbox \"28819828080b0e116bd36592cf3863d1d623f0a7d771e8b53f44ce48d947704b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"841556f050679e2c6655dea3acd18d9d425faa6715e83c1ec77314e90ec7687f\"" Dec 16 13:06:08.411059 containerd[2540]: time="2025-12-16T13:06:08.411035397Z" level=info msg="StartContainer for \"841556f050679e2c6655dea3acd18d9d425faa6715e83c1ec77314e90ec7687f\"" Dec 16 13:06:08.417850 containerd[2540]: time="2025-12-16T13:06:08.417679257Z" level=info msg="connecting to shim 841556f050679e2c6655dea3acd18d9d425faa6715e83c1ec77314e90ec7687f" address="unix:///run/containerd/s/7ac0df8587df7957f8fef71664b9f63d755ee80673ec42afdc78e77f0d736ccd" protocol=ttrpc version=3 Dec 16 13:06:08.449615 containerd[2540]: time="2025-12-16T13:06:08.449503161Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:06:08.455722 containerd[2540]: time="2025-12-16T13:06:08.455684684Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 13:06:08.458464 containerd[2540]: time="2025-12-16T13:06:08.455687679Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 13:06:08.459223 kubelet[4018]: E1216 13:06:08.459189 4018 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:06:08.459381 kubelet[4018]: E1216 13:06:08.459365 4018 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:06:08.459509 kubelet[4018]: E1216 13:06:08.459493 4018 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-7798f6444b-p9dhf_calico-apiserver(d35c67aa-255b-42a2-83b2-79e30256e265): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 13:06:08.459565 systemd[1]: Started cri-containerd-841556f050679e2c6655dea3acd18d9d425faa6715e83c1ec77314e90ec7687f.scope - libcontainer container 841556f050679e2c6655dea3acd18d9d425faa6715e83c1ec77314e90ec7687f. Dec 16 13:06:08.459740 kubelet[4018]: E1216 13:06:08.459719 4018 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7798f6444b-p9dhf" podUID="d35c67aa-255b-42a2-83b2-79e30256e265" Dec 16 13:06:08.482000 audit: BPF prog-id=260 op=LOAD Dec 16 13:06:08.483000 audit: BPF prog-id=261 op=LOAD Dec 16 13:06:08.483000 audit[5735]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=5632 pid=5735 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:06:08.483000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3834313535366630353036373965326336363535646561336163643138 Dec 16 13:06:08.483000 audit: BPF prog-id=261 op=UNLOAD Dec 16 13:06:08.483000 audit[5735]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5632 pid=5735 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:06:08.483000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3834313535366630353036373965326336363535646561336163643138 Dec 16 13:06:08.483000 audit: BPF prog-id=262 op=LOAD Dec 16 13:06:08.483000 audit[5735]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=5632 pid=5735 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:06:08.483000 audit: 
PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3834313535366630353036373965326336363535646561336163643138 Dec 16 13:06:08.483000 audit: BPF prog-id=263 op=LOAD Dec 16 13:06:08.483000 audit[5735]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=5632 pid=5735 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:06:08.483000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3834313535366630353036373965326336363535646561336163643138 Dec 16 13:06:08.483000 audit: BPF prog-id=263 op=UNLOAD Dec 16 13:06:08.483000 audit[5735]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5632 pid=5735 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:06:08.483000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3834313535366630353036373965326336363535646561336163643138 Dec 16 13:06:08.483000 audit: BPF prog-id=262 op=UNLOAD Dec 16 13:06:08.483000 audit[5735]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5632 pid=5735 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:06:08.483000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3834313535366630353036373965326336363535646561336163643138 Dec 16 13:06:08.483000 audit: BPF prog-id=264 op=LOAD Dec 16 13:06:08.483000 audit[5735]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=5632 pid=5735 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:06:08.483000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3834313535366630353036373965326336363535646561336163643138 Dec 16 13:06:08.508953 containerd[2540]: time="2025-12-16T13:06:08.508871654Z" level=info msg="StartContainer for \"841556f050679e2c6655dea3acd18d9d425faa6715e83c1ec77314e90ec7687f\" returns successfully" Dec 16 13:06:08.716027 containerd[2540]: time="2025-12-16T13:06:08.715974435Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-kffxh,Uid:02113441-a531-45ff-9a40-51f9ff37eeb2,Namespace:calico-system,Attempt:0,}" Dec 16 13:06:08.860090 systemd-networkd[2150]: califbb348ced66: Link UP Dec 16 13:06:08.861036 
systemd-networkd[2150]: califbb348ced66: Gained carrier Dec 16 13:06:08.879098 containerd[2540]: 2025-12-16 13:06:08.776 [INFO][5783] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4515.1.0--a--5ae2bb3665-k8s-goldmane--7c778bb748--kffxh-eth0 goldmane-7c778bb748- calico-system 02113441-a531-45ff-9a40-51f9ff37eeb2 864 0 2025-12-16 13:05:42 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7c778bb748 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4515.1.0-a-5ae2bb3665 goldmane-7c778bb748-kffxh eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] califbb348ced66 [] [] }} ContainerID="2af068141d31c4cc7c45a20934233024905193ff526d72db7286009dca917d60" Namespace="calico-system" Pod="goldmane-7c778bb748-kffxh" WorkloadEndpoint="ci--4515.1.0--a--5ae2bb3665-k8s-goldmane--7c778bb748--kffxh-" Dec 16 13:06:08.879098 containerd[2540]: 2025-12-16 13:06:08.776 [INFO][5783] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2af068141d31c4cc7c45a20934233024905193ff526d72db7286009dca917d60" Namespace="calico-system" Pod="goldmane-7c778bb748-kffxh" WorkloadEndpoint="ci--4515.1.0--a--5ae2bb3665-k8s-goldmane--7c778bb748--kffxh-eth0" Dec 16 13:06:08.879098 containerd[2540]: 2025-12-16 13:06:08.814 [INFO][5795] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2af068141d31c4cc7c45a20934233024905193ff526d72db7286009dca917d60" HandleID="k8s-pod-network.2af068141d31c4cc7c45a20934233024905193ff526d72db7286009dca917d60" Workload="ci--4515.1.0--a--5ae2bb3665-k8s-goldmane--7c778bb748--kffxh-eth0" Dec 16 13:06:08.879098 containerd[2540]: 2025-12-16 13:06:08.814 [INFO][5795] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="2af068141d31c4cc7c45a20934233024905193ff526d72db7286009dca917d60" HandleID="k8s-pod-network.2af068141d31c4cc7c45a20934233024905193ff526d72db7286009dca917d60" Workload="ci--4515.1.0--a--5ae2bb3665-k8s-goldmane--7c778bb748--kffxh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5800), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4515.1.0-a-5ae2bb3665", "pod":"goldmane-7c778bb748-kffxh", "timestamp":"2025-12-16 13:06:08.814168477 +0000 UTC"}, Hostname:"ci-4515.1.0-a-5ae2bb3665", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 13:06:08.879098 containerd[2540]: 2025-12-16 13:06:08.814 [INFO][5795] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 13:06:08.879098 containerd[2540]: 2025-12-16 13:06:08.814 [INFO][5795] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 13:06:08.879098 containerd[2540]: 2025-12-16 13:06:08.814 [INFO][5795] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4515.1.0-a-5ae2bb3665' Dec 16 13:06:08.879098 containerd[2540]: 2025-12-16 13:06:08.820 [INFO][5795] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2af068141d31c4cc7c45a20934233024905193ff526d72db7286009dca917d60" host="ci-4515.1.0-a-5ae2bb3665" Dec 16 13:06:08.879098 containerd[2540]: 2025-12-16 13:06:08.825 [INFO][5795] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4515.1.0-a-5ae2bb3665" Dec 16 13:06:08.879098 containerd[2540]: 2025-12-16 13:06:08.833 [INFO][5795] ipam/ipam.go 511: Trying affinity for 192.168.5.64/26 host="ci-4515.1.0-a-5ae2bb3665" Dec 16 13:06:08.879098 containerd[2540]: 2025-12-16 13:06:08.835 [INFO][5795] ipam/ipam.go 158: Attempting to load block cidr=192.168.5.64/26 host="ci-4515.1.0-a-5ae2bb3665" Dec 16 13:06:08.879098 containerd[2540]: 2025-12-16 13:06:08.837 [INFO][5795] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.5.64/26 host="ci-4515.1.0-a-5ae2bb3665" Dec 16 13:06:08.879098 containerd[2540]: 2025-12-16 13:06:08.837 [INFO][5795] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.5.64/26 handle="k8s-pod-network.2af068141d31c4cc7c45a20934233024905193ff526d72db7286009dca917d60" host="ci-4515.1.0-a-5ae2bb3665" Dec 16 13:06:08.879098 containerd[2540]: 2025-12-16 13:06:08.839 [INFO][5795] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.2af068141d31c4cc7c45a20934233024905193ff526d72db7286009dca917d60 Dec 16 13:06:08.879098 containerd[2540]: 2025-12-16 13:06:08.843 [INFO][5795] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.5.64/26 handle="k8s-pod-network.2af068141d31c4cc7c45a20934233024905193ff526d72db7286009dca917d60" host="ci-4515.1.0-a-5ae2bb3665" Dec 16 13:06:08.879098 containerd[2540]: 2025-12-16 13:06:08.855 [INFO][5795] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.5.69/26] block=192.168.5.64/26 handle="k8s-pod-network.2af068141d31c4cc7c45a20934233024905193ff526d72db7286009dca917d60" host="ci-4515.1.0-a-5ae2bb3665" Dec 16 13:06:08.879098 containerd[2540]: 2025-12-16 13:06:08.855 [INFO][5795] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.5.69/26] handle="k8s-pod-network.2af068141d31c4cc7c45a20934233024905193ff526d72db7286009dca917d60" host="ci-4515.1.0-a-5ae2bb3665" Dec 16 13:06:08.879098 containerd[2540]: 2025-12-16 13:06:08.855 [INFO][5795] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
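[editor's note] The ipam_plugin.go entries above bracket the address claim for sandbox 2af06814... with "About to acquire host-wide IPAM lock" / "Acquired" / "Released" messages; the timestamps embedded in those lines show how long the node-wide lock was held. A minimal sketch with the timestamps copied from the log above:

```python
from datetime import datetime

# Timestamps copied from the ipam_plugin.go lines above: the host-wide IPAM
# lock was acquired at 13:06:08.814 and released at 13:06:08.855.
acquired = datetime.strptime("13:06:08.814", "%H:%M:%S.%f")
released = datetime.strptime("13:06:08.855", "%H:%M:%S.%f")
print((released - acquired).total_seconds() * 1000)  # ~41 ms for the whole claim
```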
Dec 16 13:06:08.879098 containerd[2540]: 2025-12-16 13:06:08.855 [INFO][5795] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.5.69/26] IPv6=[] ContainerID="2af068141d31c4cc7c45a20934233024905193ff526d72db7286009dca917d60" HandleID="k8s-pod-network.2af068141d31c4cc7c45a20934233024905193ff526d72db7286009dca917d60" Workload="ci--4515.1.0--a--5ae2bb3665-k8s-goldmane--7c778bb748--kffxh-eth0" Dec 16 13:06:08.880372 containerd[2540]: 2025-12-16 13:06:08.857 [INFO][5783] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2af068141d31c4cc7c45a20934233024905193ff526d72db7286009dca917d60" Namespace="calico-system" Pod="goldmane-7c778bb748-kffxh" WorkloadEndpoint="ci--4515.1.0--a--5ae2bb3665-k8s-goldmane--7c778bb748--kffxh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--a--5ae2bb3665-k8s-goldmane--7c778bb748--kffxh-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"02113441-a531-45ff-9a40-51f9ff37eeb2", ResourceVersion:"864", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 5, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-a-5ae2bb3665", ContainerID:"", Pod:"goldmane-7c778bb748-kffxh", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.5.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"califbb348ced66", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:06:08.880372 containerd[2540]: 2025-12-16 13:06:08.857 [INFO][5783] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.5.69/32] ContainerID="2af068141d31c4cc7c45a20934233024905193ff526d72db7286009dca917d60" Namespace="calico-system" Pod="goldmane-7c778bb748-kffxh" WorkloadEndpoint="ci--4515.1.0--a--5ae2bb3665-k8s-goldmane--7c778bb748--kffxh-eth0" Dec 16 13:06:08.880372 containerd[2540]: 2025-12-16 13:06:08.857 [INFO][5783] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califbb348ced66 ContainerID="2af068141d31c4cc7c45a20934233024905193ff526d72db7286009dca917d60" Namespace="calico-system" Pod="goldmane-7c778bb748-kffxh" WorkloadEndpoint="ci--4515.1.0--a--5ae2bb3665-k8s-goldmane--7c778bb748--kffxh-eth0" Dec 16 13:06:08.880372 containerd[2540]: 2025-12-16 13:06:08.861 [INFO][5783] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2af068141d31c4cc7c45a20934233024905193ff526d72db7286009dca917d60" Namespace="calico-system" Pod="goldmane-7c778bb748-kffxh" WorkloadEndpoint="ci--4515.1.0--a--5ae2bb3665-k8s-goldmane--7c778bb748--kffxh-eth0" Dec 16 13:06:08.880372 containerd[2540]: 2025-12-16 13:06:08.861 [INFO][5783] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2af068141d31c4cc7c45a20934233024905193ff526d72db7286009dca917d60" 
Namespace="calico-system" Pod="goldmane-7c778bb748-kffxh" WorkloadEndpoint="ci--4515.1.0--a--5ae2bb3665-k8s-goldmane--7c778bb748--kffxh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--a--5ae2bb3665-k8s-goldmane--7c778bb748--kffxh-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"02113441-a531-45ff-9a40-51f9ff37eeb2", ResourceVersion:"864", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 5, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-a-5ae2bb3665", ContainerID:"2af068141d31c4cc7c45a20934233024905193ff526d72db7286009dca917d60", Pod:"goldmane-7c778bb748-kffxh", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.5.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"califbb348ced66", MAC:"be:6d:ee:22:8c:24", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:06:08.880372 containerd[2540]: 2025-12-16 13:06:08.874 [INFO][5783] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2af068141d31c4cc7c45a20934233024905193ff526d72db7286009dca917d60" Namespace="calico-system" Pod="goldmane-7c778bb748-kffxh" WorkloadEndpoint="ci--4515.1.0--a--5ae2bb3665-k8s-goldmane--7c778bb748--kffxh-eth0" Dec 16 13:06:08.901207 kubelet[4018]: E1216 13:06:08.900933 4018 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7798f6444b-p9dhf" podUID="d35c67aa-255b-42a2-83b2-79e30256e265" Dec 16 13:06:08.905000 audit[5811]: NETFILTER_CFG table=filter:129 family=2 entries=52 op=nft_register_chain pid=5811 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 13:06:08.905000 audit[5811]: SYSCALL arch=c000003e syscall=46 success=yes exit=27540 a0=3 a1=7ffd0037c820 a2=0 a3=7ffd0037c80c items=0 ppid=5252 pid=5811 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:06:08.905000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 13:06:08.937219 kubelet[4018]: I1216 13:06:08.936179 4018 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-wcczd" podStartSLOduration=41.936157457 
podStartE2EDuration="41.936157457s" podCreationTimestamp="2025-12-16 13:05:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 13:06:08.915270336 +0000 UTC m=+45.314236048" watchObservedRunningTime="2025-12-16 13:06:08.936157457 +0000 UTC m=+45.335123158" Dec 16 13:06:08.949541 containerd[2540]: time="2025-12-16T13:06:08.949494752Z" level=info msg="connecting to shim 2af068141d31c4cc7c45a20934233024905193ff526d72db7286009dca917d60" address="unix:///run/containerd/s/f1060dc9c06e1c257cec791c705f385df12969f922822ec43f0a6c8a9caebfc8" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:06:08.960211 kubelet[4018]: I1216 13:06:08.960152 4018 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-22fpt" podStartSLOduration=41.960132902 podStartE2EDuration="41.960132902s" podCreationTimestamp="2025-12-16 13:05:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 13:06:08.93610975 +0000 UTC m=+45.335075464" watchObservedRunningTime="2025-12-16 13:06:08.960132902 +0000 UTC m=+45.359098614" Dec 16 13:06:08.974000 audit[5829]: NETFILTER_CFG table=filter:130 family=2 entries=20 op=nft_register_rule pid=5829 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:06:08.974000 audit[5829]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffe3c0352e0 a2=0 a3=7ffe3c0352cc items=0 ppid=4127 pid=5829 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:06:08.974000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:06:08.981000 audit[5829]: NETFILTER_CFG table=nat:131 family=2 entries=14 op=nft_register_rule pid=5829 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:06:08.981000 audit[5829]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffe3c0352e0 a2=0 a3=0 items=0 ppid=4127 pid=5829 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:06:08.981000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:06:08.999552 systemd[1]: Started cri-containerd-2af068141d31c4cc7c45a20934233024905193ff526d72db7286009dca917d60.scope - libcontainer container 2af068141d31c4cc7c45a20934233024905193ff526d72db7286009dca917d60. 
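[editor's note] The audit PROCTITLE records interleaved throughout this section (including the block that follows) carry the audited process's command line as hex, with NUL bytes separating the arguments. A short decoding sketch, using the leading bytes of one of the runc proctitle values above (the trailing task-directory path is truncated in the log itself and is left out here rather than guessed):

```python
# Decode an audit PROCTITLE value: the process argv, hex-encoded, with NUL
# bytes between arguments. The hex below is the leading portion of the runc
# proctitle records above.
hex_proctitle = (
    "72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67"
)
argv = [part.decode() for part in bytes.fromhex(hex_proctitle).split(b"\x00")]
print(argv)  # ['runc', '--root', '/run/containerd/runc/k8s.io', '--log']
```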
Dec 16 13:06:09.034000 audit: BPF prog-id=265 op=LOAD Dec 16 13:06:09.035000 audit: BPF prog-id=266 op=LOAD Dec 16 13:06:09.035000 audit[5834]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000220238 a2=98 a3=0 items=0 ppid=5823 pid=5834 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:06:09.035000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3261663036383134316433316334636337633435613230393334323333 Dec 16 13:06:09.035000 audit: BPF prog-id=266 op=UNLOAD Dec 16 13:06:09.035000 audit[5834]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5823 pid=5834 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:06:09.035000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3261663036383134316433316334636337633435613230393334323333 Dec 16 13:06:09.035000 audit: BPF prog-id=267 op=LOAD Dec 16 13:06:09.035000 audit[5834]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000220488 a2=98 a3=0 items=0 ppid=5823 pid=5834 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:06:09.035000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3261663036383134316433316334636337633435613230393334323333 Dec 16 13:06:09.035000 audit: BPF prog-id=268 op=LOAD Dec 16 13:06:09.035000 audit[5834]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000220218 a2=98 a3=0 items=0 ppid=5823 pid=5834 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:06:09.035000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3261663036383134316433316334636337633435613230393334323333 Dec 16 13:06:09.035000 audit: BPF prog-id=268 op=UNLOAD Dec 16 13:06:09.035000 audit[5834]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5823 pid=5834 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:06:09.035000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3261663036383134316433316334636337633435613230393334323333 Dec 16 13:06:09.035000 audit: BPF prog-id=267 op=UNLOAD Dec 16 13:06:09.035000 audit[5834]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5823 pid=5834 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:06:09.035000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3261663036383134316433316334636337633435613230393334323333 Dec 16 13:06:09.035000 audit: BPF prog-id=269 op=LOAD Dec 16 13:06:09.035000 audit[5834]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0002206e8 a2=98 a3=0 items=0 ppid=5823 pid=5834 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:06:09.035000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3261663036383134316433316334636337633435613230393334323333 Dec 16 13:06:09.076256 containerd[2540]: time="2025-12-16T13:06:09.076220554Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-kffxh,Uid:02113441-a531-45ff-9a40-51f9ff37eeb2,Namespace:calico-system,Attempt:0,} returns sandbox id \"2af068141d31c4cc7c45a20934233024905193ff526d72db7286009dca917d60\"" Dec 16 13:06:09.077786 containerd[2540]: time="2025-12-16T13:06:09.077759068Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 16 13:06:09.383589 containerd[2540]: time="2025-12-16T13:06:09.383529435Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:06:09.390506 containerd[2540]: time="2025-12-16T13:06:09.390474787Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 16 13:06:09.391120 containerd[2540]: time="2025-12-16T13:06:09.390588969Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 16 13:06:09.391210 kubelet[4018]: E1216 13:06:09.390766 4018 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 13:06:09.391210 kubelet[4018]: E1216 13:06:09.390835 4018 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 13:06:09.391210 kubelet[4018]: E1216 13:06:09.390953 4018 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-kffxh_calico-system(02113441-a531-45ff-9a40-51f9ff37eeb2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 16 13:06:09.391210 kubelet[4018]: E1216 13:06:09.391007 4018 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-kffxh" podUID="02113441-a531-45ff-9a40-51f9ff37eeb2" Dec 16 13:06:09.580647 systemd-networkd[2150]: calia27ff377d8d: Gained IPv6LL Dec 16 13:06:09.644512 systemd-networkd[2150]: calie1bd2f6d408: Gained IPv6LL Dec 16 13:06:09.726790 containerd[2540]: time="2025-12-16T13:06:09.726712261Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7798f6444b-zjrsf,Uid:fa544c8c-af21-41f2-8ffb-1fe7c36b0bfb,Namespace:calico-apiserver,Attempt:0,}" Dec 16 13:06:09.733173 containerd[2540]: time="2025-12-16T13:06:09.733133613Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8c4454f6d-fzx24,Uid:021bd40b-8387-4f81-8ec5-64b895deb3c2,Namespace:calico-system,Attempt:0,}" Dec 16 13:06:09.907975 kubelet[4018]: E1216 13:06:09.907931 4018 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7798f6444b-p9dhf" podUID="d35c67aa-255b-42a2-83b2-79e30256e265" Dec 16 13:06:09.910176 kubelet[4018]: E1216 13:06:09.910140 4018 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-kffxh" podUID="02113441-a531-45ff-9a40-51f9ff37eeb2" Dec 16 13:06:09.933326 systemd-networkd[2150]: cali11436c62a92: Link UP Dec 16 13:06:09.933527 systemd-networkd[2150]: cali11436c62a92: Gained carrier Dec 16 13:06:09.964361 containerd[2540]: 2025-12-16 13:06:09.807 [INFO][5866] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4515.1.0--a--5ae2bb3665-k8s-calico--apiserver--7798f6444b--zjrsf-eth0 calico-apiserver-7798f6444b- calico-apiserver fa544c8c-af21-41f2-8ffb-1fe7c36b0bfb 863 0 2025-12-16 13:05:40 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7798f6444b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4515.1.0-a-5ae2bb3665 calico-apiserver-7798f6444b-zjrsf eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali11436c62a92 [] [] }} ContainerID="b06045bbf86ee6215bd8140effa06b1de25464d023f9ead31ba7563eaabd43a9" Namespace="calico-apiserver" Pod="calico-apiserver-7798f6444b-zjrsf" 
WorkloadEndpoint="ci--4515.1.0--a--5ae2bb3665-k8s-calico--apiserver--7798f6444b--zjrsf-" Dec 16 13:06:09.964361 containerd[2540]: 2025-12-16 13:06:09.808 [INFO][5866] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b06045bbf86ee6215bd8140effa06b1de25464d023f9ead31ba7563eaabd43a9" Namespace="calico-apiserver" Pod="calico-apiserver-7798f6444b-zjrsf" WorkloadEndpoint="ci--4515.1.0--a--5ae2bb3665-k8s-calico--apiserver--7798f6444b--zjrsf-eth0" Dec 16 13:06:09.964361 containerd[2540]: 2025-12-16 13:06:09.868 [INFO][5891] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b06045bbf86ee6215bd8140effa06b1de25464d023f9ead31ba7563eaabd43a9" HandleID="k8s-pod-network.b06045bbf86ee6215bd8140effa06b1de25464d023f9ead31ba7563eaabd43a9" Workload="ci--4515.1.0--a--5ae2bb3665-k8s-calico--apiserver--7798f6444b--zjrsf-eth0" Dec 16 13:06:09.964361 containerd[2540]: 2025-12-16 13:06:09.870 [INFO][5891] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="b06045bbf86ee6215bd8140effa06b1de25464d023f9ead31ba7563eaabd43a9" HandleID="k8s-pod-network.b06045bbf86ee6215bd8140effa06b1de25464d023f9ead31ba7563eaabd43a9" Workload="ci--4515.1.0--a--5ae2bb3665-k8s-calico--apiserver--7798f6444b--zjrsf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5660), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4515.1.0-a-5ae2bb3665", "pod":"calico-apiserver-7798f6444b-zjrsf", "timestamp":"2025-12-16 13:06:09.868363906 +0000 UTC"}, Hostname:"ci-4515.1.0-a-5ae2bb3665", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 13:06:09.964361 containerd[2540]: 2025-12-16 13:06:09.870 [INFO][5891] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 13:06:09.964361 containerd[2540]: 2025-12-16 13:06:09.870 [INFO][5891] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 13:06:09.964361 containerd[2540]: 2025-12-16 13:06:09.870 [INFO][5891] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4515.1.0-a-5ae2bb3665' Dec 16 13:06:09.964361 containerd[2540]: 2025-12-16 13:06:09.882 [INFO][5891] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b06045bbf86ee6215bd8140effa06b1de25464d023f9ead31ba7563eaabd43a9" host="ci-4515.1.0-a-5ae2bb3665" Dec 16 13:06:09.964361 containerd[2540]: 2025-12-16 13:06:09.886 [INFO][5891] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4515.1.0-a-5ae2bb3665" Dec 16 13:06:09.964361 containerd[2540]: 2025-12-16 13:06:09.890 [INFO][5891] ipam/ipam.go 511: Trying affinity for 192.168.5.64/26 host="ci-4515.1.0-a-5ae2bb3665" Dec 16 13:06:09.964361 containerd[2540]: 2025-12-16 13:06:09.892 [INFO][5891] ipam/ipam.go 158: Attempting to load block cidr=192.168.5.64/26 host="ci-4515.1.0-a-5ae2bb3665" Dec 16 13:06:09.964361 containerd[2540]: 2025-12-16 13:06:09.894 [INFO][5891] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.5.64/26 host="ci-4515.1.0-a-5ae2bb3665" Dec 16 13:06:09.964361 containerd[2540]: 2025-12-16 13:06:09.894 [INFO][5891] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.5.64/26 handle="k8s-pod-network.b06045bbf86ee6215bd8140effa06b1de25464d023f9ead31ba7563eaabd43a9" host="ci-4515.1.0-a-5ae2bb3665" Dec 16 13:06:09.964361 containerd[2540]: 2025-12-16 13:06:09.896 [INFO][5891] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.b06045bbf86ee6215bd8140effa06b1de25464d023f9ead31ba7563eaabd43a9 Dec 16 13:06:09.964361 containerd[2540]: 2025-12-16 13:06:09.903 [INFO][5891] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.5.64/26 handle="k8s-pod-network.b06045bbf86ee6215bd8140effa06b1de25464d023f9ead31ba7563eaabd43a9" host="ci-4515.1.0-a-5ae2bb3665" Dec 16 13:06:09.964361 containerd[2540]: 2025-12-16 13:06:09.920 [INFO][5891] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.5.70/26] block=192.168.5.64/26 handle="k8s-pod-network.b06045bbf86ee6215bd8140effa06b1de25464d023f9ead31ba7563eaabd43a9" host="ci-4515.1.0-a-5ae2bb3665" Dec 16 13:06:09.964361 containerd[2540]: 2025-12-16 13:06:09.920 [INFO][5891] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.5.70/26] handle="k8s-pod-network.b06045bbf86ee6215bd8140effa06b1de25464d023f9ead31ba7563eaabd43a9" host="ci-4515.1.0-a-5ae2bb3665" Dec 16 13:06:09.964361 containerd[2540]: 2025-12-16 13:06:09.920 [INFO][5891] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
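[editor's note] Both image pulls that fail above (ghcr.io/flatcar/calico/apiserver:v3.30.4 and ghcr.io/flatcar/calico/goldmane:v3.30.4) end with containerd reporting "fetch failed after status: 404 Not Found" from ghcr.io, which kubelet then surfaces as ErrImagePull and ImagePullBackOff. A minimal sketch for reproducing that lookup from a workstation, assuming ghcr.io serves the standard OCI distribution endpoints (anonymous pull token plus a manifest GET) and that the third-party requests package is available:

```python
import requests  # assumption: the 'requests' package is installed

# Reproduce the registry lookup that containerd reports failing above.
repo, tag = "flatcar/calico/apiserver", "v3.30.4"
token = requests.get(
    "https://ghcr.io/token", params={"scope": f"repository:{repo}:pull"}
).json()["token"]
resp = requests.get(
    f"https://ghcr.io/v2/{repo}/manifests/{tag}",
    headers={
        "Authorization": f"Bearer {token}",
        # Accept headers for common manifest/index media types
        "Accept": "application/vnd.oci.image.index.v1+json, "
        "application/vnd.docker.distribution.manifest.list.v2+json",
    },
)
print(resp.status_code)  # 404 here would match the "not found" errors in the log
```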
Dec 16 13:06:09.964361 containerd[2540]: 2025-12-16 13:06:09.920 [INFO][5891] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.5.70/26] IPv6=[] ContainerID="b06045bbf86ee6215bd8140effa06b1de25464d023f9ead31ba7563eaabd43a9" HandleID="k8s-pod-network.b06045bbf86ee6215bd8140effa06b1de25464d023f9ead31ba7563eaabd43a9" Workload="ci--4515.1.0--a--5ae2bb3665-k8s-calico--apiserver--7798f6444b--zjrsf-eth0" Dec 16 13:06:09.967748 containerd[2540]: 2025-12-16 13:06:09.925 [INFO][5866] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b06045bbf86ee6215bd8140effa06b1de25464d023f9ead31ba7563eaabd43a9" Namespace="calico-apiserver" Pod="calico-apiserver-7798f6444b-zjrsf" WorkloadEndpoint="ci--4515.1.0--a--5ae2bb3665-k8s-calico--apiserver--7798f6444b--zjrsf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--a--5ae2bb3665-k8s-calico--apiserver--7798f6444b--zjrsf-eth0", GenerateName:"calico-apiserver-7798f6444b-", Namespace:"calico-apiserver", SelfLink:"", UID:"fa544c8c-af21-41f2-8ffb-1fe7c36b0bfb", ResourceVersion:"863", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 5, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7798f6444b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-a-5ae2bb3665", ContainerID:"", Pod:"calico-apiserver-7798f6444b-zjrsf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.5.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali11436c62a92", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:06:09.967748 containerd[2540]: 2025-12-16 13:06:09.925 [INFO][5866] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.5.70/32] ContainerID="b06045bbf86ee6215bd8140effa06b1de25464d023f9ead31ba7563eaabd43a9" Namespace="calico-apiserver" Pod="calico-apiserver-7798f6444b-zjrsf" WorkloadEndpoint="ci--4515.1.0--a--5ae2bb3665-k8s-calico--apiserver--7798f6444b--zjrsf-eth0" Dec 16 13:06:09.967748 containerd[2540]: 2025-12-16 13:06:09.925 [INFO][5866] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali11436c62a92 ContainerID="b06045bbf86ee6215bd8140effa06b1de25464d023f9ead31ba7563eaabd43a9" Namespace="calico-apiserver" Pod="calico-apiserver-7798f6444b-zjrsf" WorkloadEndpoint="ci--4515.1.0--a--5ae2bb3665-k8s-calico--apiserver--7798f6444b--zjrsf-eth0" Dec 16 13:06:09.967748 containerd[2540]: 2025-12-16 13:06:09.934 [INFO][5866] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b06045bbf86ee6215bd8140effa06b1de25464d023f9ead31ba7563eaabd43a9" Namespace="calico-apiserver" Pod="calico-apiserver-7798f6444b-zjrsf" WorkloadEndpoint="ci--4515.1.0--a--5ae2bb3665-k8s-calico--apiserver--7798f6444b--zjrsf-eth0" Dec 16 13:06:09.967748 containerd[2540]: 2025-12-16 13:06:09.937 [INFO][5866] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b06045bbf86ee6215bd8140effa06b1de25464d023f9ead31ba7563eaabd43a9" Namespace="calico-apiserver" Pod="calico-apiserver-7798f6444b-zjrsf" WorkloadEndpoint="ci--4515.1.0--a--5ae2bb3665-k8s-calico--apiserver--7798f6444b--zjrsf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--a--5ae2bb3665-k8s-calico--apiserver--7798f6444b--zjrsf-eth0", GenerateName:"calico-apiserver-7798f6444b-", Namespace:"calico-apiserver", SelfLink:"", UID:"fa544c8c-af21-41f2-8ffb-1fe7c36b0bfb", ResourceVersion:"863", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 5, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7798f6444b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-a-5ae2bb3665", ContainerID:"b06045bbf86ee6215bd8140effa06b1de25464d023f9ead31ba7563eaabd43a9", Pod:"calico-apiserver-7798f6444b-zjrsf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.5.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali11436c62a92", MAC:"ee:b4:c2:ec:d0:5b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:06:09.967748 containerd[2540]: 2025-12-16 13:06:09.958 [INFO][5866] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b06045bbf86ee6215bd8140effa06b1de25464d023f9ead31ba7563eaabd43a9" Namespace="calico-apiserver" Pod="calico-apiserver-7798f6444b-zjrsf" WorkloadEndpoint="ci--4515.1.0--a--5ae2bb3665-k8s-calico--apiserver--7798f6444b--zjrsf-eth0" Dec 16 13:06:09.964471 systemd-networkd[2150]: calib6435d050b0: Gained IPv6LL Dec 16 13:06:10.010000 audit[5914]: NETFILTER_CFG table=filter:132 family=2 entries=49 op=nft_register_chain pid=5914 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 13:06:10.010000 audit[5914]: SYSCALL arch=c000003e syscall=46 success=yes exit=25436 a0=3 a1=7fff62537010 a2=0 a3=7fff62536ffc items=0 ppid=5252 pid=5914 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:06:10.010000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 13:06:10.028000 audit[5916]: NETFILTER_CFG table=filter:133 family=2 entries=17 op=nft_register_rule pid=5916 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:06:10.028000 audit[5916]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffed2b86610 a2=0 a3=7ffed2b865fc items=0 ppid=4127 pid=5916 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:06:10.028000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:06:10.057375 containerd[2540]: time="2025-12-16T13:06:10.057167116Z" level=info msg="connecting to shim b06045bbf86ee6215bd8140effa06b1de25464d023f9ead31ba7563eaabd43a9" address="unix:///run/containerd/s/195295b9980fa4a0735fe1dc0a80ee2015a57cb69e4de0ace4ed73e30e60caf4" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:06:10.092688 systemd-networkd[2150]: calia7ca3d5484a: Link UP Dec 16 13:06:10.094389 systemd-networkd[2150]: calia7ca3d5484a: Gained carrier Dec 16 13:06:10.117563 systemd[1]: Started cri-containerd-b06045bbf86ee6215bd8140effa06b1de25464d023f9ead31ba7563eaabd43a9.scope - libcontainer container b06045bbf86ee6215bd8140effa06b1de25464d023f9ead31ba7563eaabd43a9. Dec 16 13:06:10.129487 containerd[2540]: 2025-12-16 13:06:09.820 [INFO][5875] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4515.1.0--a--5ae2bb3665-k8s-calico--kube--controllers--8c4454f6d--fzx24-eth0 calico-kube-controllers-8c4454f6d- calico-system 021bd40b-8387-4f81-8ec5-64b895deb3c2 860 0 2025-12-16 13:05:44 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:8c4454f6d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4515.1.0-a-5ae2bb3665 calico-kube-controllers-8c4454f6d-fzx24 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calia7ca3d5484a [] [] }} ContainerID="6ba91e37381ad1eb7b9e510be349517811b76c0da70343267c243a048e277f94" Namespace="calico-system" Pod="calico-kube-controllers-8c4454f6d-fzx24" WorkloadEndpoint="ci--4515.1.0--a--5ae2bb3665-k8s-calico--kube--controllers--8c4454f6d--fzx24-" Dec 16 13:06:10.129487 containerd[2540]: 2025-12-16 13:06:09.820 [INFO][5875] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6ba91e37381ad1eb7b9e510be349517811b76c0da70343267c243a048e277f94" Namespace="calico-system" Pod="calico-kube-controllers-8c4454f6d-fzx24" WorkloadEndpoint="ci--4515.1.0--a--5ae2bb3665-k8s-calico--kube--controllers--8c4454f6d--fzx24-eth0" Dec 16 13:06:10.129487 containerd[2540]: 2025-12-16 13:06:09.884 [INFO][5896] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6ba91e37381ad1eb7b9e510be349517811b76c0da70343267c243a048e277f94" HandleID="k8s-pod-network.6ba91e37381ad1eb7b9e510be349517811b76c0da70343267c243a048e277f94" Workload="ci--4515.1.0--a--5ae2bb3665-k8s-calico--kube--controllers--8c4454f6d--fzx24-eth0" Dec 16 13:06:10.129487 containerd[2540]: 2025-12-16 13:06:09.885 [INFO][5896] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="6ba91e37381ad1eb7b9e510be349517811b76c0da70343267c243a048e277f94" HandleID="k8s-pod-network.6ba91e37381ad1eb7b9e510be349517811b76c0da70343267c243a048e277f94" Workload="ci--4515.1.0--a--5ae2bb3665-k8s-calico--kube--controllers--8c4454f6d--fzx24-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d55a0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4515.1.0-a-5ae2bb3665", "pod":"calico-kube-controllers-8c4454f6d-fzx24", "timestamp":"2025-12-16 13:06:09.884838789 +0000 UTC"}, Hostname:"ci-4515.1.0-a-5ae2bb3665", IPv4Pools:[]net.IPNet{}, 
IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 13:06:10.129487 containerd[2540]: 2025-12-16 13:06:09.885 [INFO][5896] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 13:06:10.129487 containerd[2540]: 2025-12-16 13:06:09.920 [INFO][5896] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 13:06:10.129487 containerd[2540]: 2025-12-16 13:06:09.920 [INFO][5896] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4515.1.0-a-5ae2bb3665' Dec 16 13:06:10.129487 containerd[2540]: 2025-12-16 13:06:09.982 [INFO][5896] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6ba91e37381ad1eb7b9e510be349517811b76c0da70343267c243a048e277f94" host="ci-4515.1.0-a-5ae2bb3665" Dec 16 13:06:10.129487 containerd[2540]: 2025-12-16 13:06:10.002 [INFO][5896] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4515.1.0-a-5ae2bb3665" Dec 16 13:06:10.129487 containerd[2540]: 2025-12-16 13:06:10.006 [INFO][5896] ipam/ipam.go 511: Trying affinity for 192.168.5.64/26 host="ci-4515.1.0-a-5ae2bb3665" Dec 16 13:06:10.129487 containerd[2540]: 2025-12-16 13:06:10.008 [INFO][5896] ipam/ipam.go 158: Attempting to load block cidr=192.168.5.64/26 host="ci-4515.1.0-a-5ae2bb3665" Dec 16 13:06:10.129487 containerd[2540]: 2025-12-16 13:06:10.014 [INFO][5896] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.5.64/26 host="ci-4515.1.0-a-5ae2bb3665" Dec 16 13:06:10.129487 containerd[2540]: 2025-12-16 13:06:10.014 [INFO][5896] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.5.64/26 handle="k8s-pod-network.6ba91e37381ad1eb7b9e510be349517811b76c0da70343267c243a048e277f94" host="ci-4515.1.0-a-5ae2bb3665" Dec 16 13:06:10.129487 containerd[2540]: 2025-12-16 13:06:10.016 [INFO][5896] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.6ba91e37381ad1eb7b9e510be349517811b76c0da70343267c243a048e277f94 Dec 16 13:06:10.129487 containerd[2540]: 2025-12-16 13:06:10.022 [INFO][5896] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.5.64/26 handle="k8s-pod-network.6ba91e37381ad1eb7b9e510be349517811b76c0da70343267c243a048e277f94" host="ci-4515.1.0-a-5ae2bb3665" Dec 16 13:06:10.129487 containerd[2540]: 2025-12-16 13:06:10.064 [INFO][5896] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.5.71/26] block=192.168.5.64/26 handle="k8s-pod-network.6ba91e37381ad1eb7b9e510be349517811b76c0da70343267c243a048e277f94" host="ci-4515.1.0-a-5ae2bb3665" Dec 16 13:06:10.129487 containerd[2540]: 2025-12-16 13:06:10.064 [INFO][5896] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.5.71/26] handle="k8s-pod-network.6ba91e37381ad1eb7b9e510be349517811b76c0da70343267c243a048e277f94" host="ci-4515.1.0-a-5ae2bb3665" Dec 16 13:06:10.129487 containerd[2540]: 2025-12-16 13:06:10.064 [INFO][5896] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
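[editor's note] Across the three sandboxes set up in this window, the Calico IPAM plugin claims 192.168.5.69 (goldmane), 192.168.5.70 (calico-apiserver-zjrsf) and 192.168.5.71 (calico-kube-controllers), all from the node's affine block 192.168.5.64/26 that each transaction tries first. A quick check of that block arithmetic with the standard library:

```python
import ipaddress

# Addresses claimed in the IPAM transactions above, all from the node's
# affine block 192.168.5.64/26 (64 addresses: 192.168.5.64-192.168.5.127).
block = ipaddress.ip_network("192.168.5.64/26")
for addr in ("192.168.5.69", "192.168.5.70", "192.168.5.71"):
    assert ipaddress.ip_address(addr) in block
print(block.num_addresses, block[0], block[-1])  # 64 192.168.5.64 192.168.5.127
```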
Dec 16 13:06:10.129487 containerd[2540]: 2025-12-16 13:06:10.064 [INFO][5896] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.5.71/26] IPv6=[] ContainerID="6ba91e37381ad1eb7b9e510be349517811b76c0da70343267c243a048e277f94" HandleID="k8s-pod-network.6ba91e37381ad1eb7b9e510be349517811b76c0da70343267c243a048e277f94" Workload="ci--4515.1.0--a--5ae2bb3665-k8s-calico--kube--controllers--8c4454f6d--fzx24-eth0" Dec 16 13:06:10.130057 containerd[2540]: 2025-12-16 13:06:10.068 [INFO][5875] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6ba91e37381ad1eb7b9e510be349517811b76c0da70343267c243a048e277f94" Namespace="calico-system" Pod="calico-kube-controllers-8c4454f6d-fzx24" WorkloadEndpoint="ci--4515.1.0--a--5ae2bb3665-k8s-calico--kube--controllers--8c4454f6d--fzx24-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--a--5ae2bb3665-k8s-calico--kube--controllers--8c4454f6d--fzx24-eth0", GenerateName:"calico-kube-controllers-8c4454f6d-", Namespace:"calico-system", SelfLink:"", UID:"021bd40b-8387-4f81-8ec5-64b895deb3c2", ResourceVersion:"860", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 5, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"8c4454f6d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-a-5ae2bb3665", ContainerID:"", Pod:"calico-kube-controllers-8c4454f6d-fzx24", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.5.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia7ca3d5484a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:06:10.130057 containerd[2540]: 2025-12-16 13:06:10.068 [INFO][5875] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.5.71/32] ContainerID="6ba91e37381ad1eb7b9e510be349517811b76c0da70343267c243a048e277f94" Namespace="calico-system" Pod="calico-kube-controllers-8c4454f6d-fzx24" WorkloadEndpoint="ci--4515.1.0--a--5ae2bb3665-k8s-calico--kube--controllers--8c4454f6d--fzx24-eth0" Dec 16 13:06:10.130057 containerd[2540]: 2025-12-16 13:06:10.068 [INFO][5875] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia7ca3d5484a ContainerID="6ba91e37381ad1eb7b9e510be349517811b76c0da70343267c243a048e277f94" Namespace="calico-system" Pod="calico-kube-controllers-8c4454f6d-fzx24" WorkloadEndpoint="ci--4515.1.0--a--5ae2bb3665-k8s-calico--kube--controllers--8c4454f6d--fzx24-eth0" Dec 16 13:06:10.130057 containerd[2540]: 2025-12-16 13:06:10.096 [INFO][5875] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6ba91e37381ad1eb7b9e510be349517811b76c0da70343267c243a048e277f94" Namespace="calico-system" Pod="calico-kube-controllers-8c4454f6d-fzx24" WorkloadEndpoint="ci--4515.1.0--a--5ae2bb3665-k8s-calico--kube--controllers--8c4454f6d--fzx24-eth0" Dec 16 
13:06:10.130057 containerd[2540]: 2025-12-16 13:06:10.097 [INFO][5875] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6ba91e37381ad1eb7b9e510be349517811b76c0da70343267c243a048e277f94" Namespace="calico-system" Pod="calico-kube-controllers-8c4454f6d-fzx24" WorkloadEndpoint="ci--4515.1.0--a--5ae2bb3665-k8s-calico--kube--controllers--8c4454f6d--fzx24-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--a--5ae2bb3665-k8s-calico--kube--controllers--8c4454f6d--fzx24-eth0", GenerateName:"calico-kube-controllers-8c4454f6d-", Namespace:"calico-system", SelfLink:"", UID:"021bd40b-8387-4f81-8ec5-64b895deb3c2", ResourceVersion:"860", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 5, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"8c4454f6d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-a-5ae2bb3665", ContainerID:"6ba91e37381ad1eb7b9e510be349517811b76c0da70343267c243a048e277f94", Pod:"calico-kube-controllers-8c4454f6d-fzx24", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.5.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia7ca3d5484a", MAC:"fa:56:8e:58:6e:c8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:06:10.130057 containerd[2540]: 2025-12-16 13:06:10.126 [INFO][5875] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6ba91e37381ad1eb7b9e510be349517811b76c0da70343267c243a048e277f94" Namespace="calico-system" Pod="calico-kube-controllers-8c4454f6d-fzx24" WorkloadEndpoint="ci--4515.1.0--a--5ae2bb3665-k8s-calico--kube--controllers--8c4454f6d--fzx24-eth0" Dec 16 13:06:10.205358 containerd[2540]: time="2025-12-16T13:06:10.205239368Z" level=info msg="connecting to shim 6ba91e37381ad1eb7b9e510be349517811b76c0da70343267c243a048e277f94" address="unix:///run/containerd/s/48a1c179f0da50563d741be36a695ab274c6aef955ef4880b1510cf3ab6bee93" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:06:10.150000 audit[5916]: NETFILTER_CFG table=nat:134 family=2 entries=47 op=nft_register_chain pid=5916 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:06:10.150000 audit[5916]: SYSCALL arch=c000003e syscall=46 success=yes exit=19860 a0=3 a1=7ffed2b86610 a2=0 a3=7ffed2b865fc items=0 ppid=4127 pid=5916 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:06:10.150000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:06:10.244687 systemd[1]: Started cri-containerd-6ba91e37381ad1eb7b9e510be349517811b76c0da70343267c243a048e277f94.scope - 
libcontainer container 6ba91e37381ad1eb7b9e510be349517811b76c0da70343267c243a048e277f94. Dec 16 13:06:10.282000 audit[6005]: NETFILTER_CFG table=filter:135 family=2 entries=40 op=nft_register_chain pid=6005 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 13:06:10.282000 audit[6005]: SYSCALL arch=c000003e syscall=46 success=yes exit=20784 a0=3 a1=7ffd9a9a6890 a2=0 a3=7ffd9a9a687c items=0 ppid=5252 pid=6005 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:06:10.282000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 13:06:10.296000 audit: BPF prog-id=270 op=LOAD Dec 16 13:06:10.297000 audit: BPF prog-id=271 op=LOAD Dec 16 13:06:10.297000 audit[5978]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=5966 pid=5978 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:06:10.297000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3662613931653337333831616431656237623965353130626533343935 Dec 16 13:06:10.297000 audit: BPF prog-id=271 op=UNLOAD Dec 16 13:06:10.297000 audit[5978]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5966 pid=5978 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:06:10.297000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3662613931653337333831616431656237623965353130626533343935 Dec 16 13:06:10.297000 audit: BPF prog-id=272 op=LOAD Dec 16 13:06:10.297000 audit[5978]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=5966 pid=5978 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:06:10.297000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3662613931653337333831616431656237623965353130626533343935 Dec 16 13:06:10.297000 audit: BPF prog-id=273 op=LOAD Dec 16 13:06:10.297000 audit[5978]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=5966 pid=5978 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:06:10.297000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3662613931653337333831616431656237623965353130626533343935 Dec 16 13:06:10.297000 audit: BPF prog-id=273 op=UNLOAD Dec 16 13:06:10.297000 audit[5978]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5966 pid=5978 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:06:10.297000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3662613931653337333831616431656237623965353130626533343935 Dec 16 13:06:10.297000 audit: BPF prog-id=272 op=UNLOAD Dec 16 13:06:10.297000 audit[5978]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5966 pid=5978 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:06:10.297000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3662613931653337333831616431656237623965353130626533343935 Dec 16 13:06:10.298000 audit: BPF prog-id=274 op=LOAD Dec 16 13:06:10.298000 audit[5978]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=5966 pid=5978 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:06:10.298000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3662613931653337333831616431656237623965353130626533343935 Dec 16 13:06:10.306000 audit: BPF prog-id=275 op=LOAD Dec 16 13:06:10.306000 audit: BPF prog-id=276 op=LOAD Dec 16 13:06:10.306000 audit[5939]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000228238 a2=98 a3=0 items=0 ppid=5927 pid=5939 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:06:10.306000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6230363034356262663836656536323135626438313430656666613036 Dec 16 13:06:10.306000 audit: BPF prog-id=276 op=UNLOAD Dec 16 13:06:10.306000 audit[5939]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5927 pid=5939 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:06:10.306000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6230363034356262663836656536323135626438313430656666613036 Dec 16 13:06:10.306000 audit: BPF prog-id=277 op=LOAD Dec 16 13:06:10.306000 audit[5939]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000228488 a2=98 a3=0 items=0 ppid=5927 pid=5939 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:06:10.306000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6230363034356262663836656536323135626438313430656666613036 Dec 16 13:06:10.306000 audit: BPF prog-id=278 op=LOAD Dec 16 13:06:10.306000 audit[5939]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000228218 a2=98 a3=0 items=0 ppid=5927 pid=5939 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:06:10.306000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6230363034356262663836656536323135626438313430656666613036 Dec 16 13:06:10.306000 audit: BPF prog-id=278 op=UNLOAD Dec 16 13:06:10.306000 audit[5939]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5927 pid=5939 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:06:10.306000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6230363034356262663836656536323135626438313430656666613036 Dec 16 13:06:10.306000 audit: BPF prog-id=277 op=UNLOAD Dec 16 13:06:10.306000 audit[5939]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5927 pid=5939 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:06:10.306000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6230363034356262663836656536323135626438313430656666613036 Dec 16 13:06:10.306000 audit: BPF prog-id=279 op=LOAD Dec 16 13:06:10.306000 audit[5939]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0002286e8 a2=98 a3=0 items=0 ppid=5927 pid=5939 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:06:10.306000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6230363034356262663836656536323135626438313430656666613036 Dec 16 13:06:10.386359 containerd[2540]: time="2025-12-16T13:06:10.385710529Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7798f6444b-zjrsf,Uid:fa544c8c-af21-41f2-8ffb-1fe7c36b0bfb,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"b06045bbf86ee6215bd8140effa06b1de25464d023f9ead31ba7563eaabd43a9\"" Dec 16 13:06:10.389180 containerd[2540]: time="2025-12-16T13:06:10.389151056Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 13:06:10.410217 containerd[2540]: time="2025-12-16T13:06:10.410181574Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8c4454f6d-fzx24,Uid:021bd40b-8387-4f81-8ec5-64b895deb3c2,Namespace:calico-system,Attempt:0,} returns sandbox id \"6ba91e37381ad1eb7b9e510be349517811b76c0da70343267c243a048e277f94\"" Dec 16 13:06:10.605531 systemd-networkd[2150]: califbb348ced66: Gained IPv6LL Dec 16 13:06:10.670783 containerd[2540]: time="2025-12-16T13:06:10.669940712Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:06:10.674327 containerd[2540]: time="2025-12-16T13:06:10.674164002Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 13:06:10.674327 containerd[2540]: time="2025-12-16T13:06:10.674296955Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 13:06:10.676390 kubelet[4018]: E1216 13:06:10.674630 4018 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:06:10.676390 kubelet[4018]: E1216 13:06:10.674697 4018 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:06:10.676390 kubelet[4018]: E1216 13:06:10.674947 4018 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-7798f6444b-zjrsf_calico-apiserver(fa544c8c-af21-41f2-8ffb-1fe7c36b0bfb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 13:06:10.676390 kubelet[4018]: E1216 13:06:10.674994 4018 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7798f6444b-zjrsf" 
podUID="fa544c8c-af21-41f2-8ffb-1fe7c36b0bfb" Dec 16 13:06:10.677359 containerd[2540]: time="2025-12-16T13:06:10.677117402Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 16 13:06:10.721069 containerd[2540]: time="2025-12-16T13:06:10.721037337Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ctchn,Uid:940a093b-83dc-454c-8522-5e1b1f40521f,Namespace:calico-system,Attempt:0,}" Dec 16 13:06:10.916377 kubelet[4018]: E1216 13:06:10.916321 4018 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-kffxh" podUID="02113441-a531-45ff-9a40-51f9ff37eeb2" Dec 16 13:06:10.917595 kubelet[4018]: E1216 13:06:10.917551 4018 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7798f6444b-zjrsf" podUID="fa544c8c-af21-41f2-8ffb-1fe7c36b0bfb" Dec 16 13:06:10.960264 containerd[2540]: time="2025-12-16T13:06:10.960200287Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:06:10.965474 containerd[2540]: time="2025-12-16T13:06:10.965397922Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 16 13:06:10.966249 containerd[2540]: time="2025-12-16T13:06:10.965434077Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 16 13:06:10.967447 kubelet[4018]: E1216 13:06:10.967086 4018 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 13:06:10.967447 kubelet[4018]: E1216 13:06:10.967133 4018 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 13:06:10.967447 kubelet[4018]: E1216 13:06:10.967209 4018 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-8c4454f6d-fzx24_calico-system(021bd40b-8387-4f81-8ec5-64b895deb3c2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" 
logger="UnhandledError" Dec 16 13:06:10.967447 kubelet[4018]: E1216 13:06:10.967257 4018 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-8c4454f6d-fzx24" podUID="021bd40b-8387-4f81-8ec5-64b895deb3c2" Dec 16 13:06:10.985625 systemd-networkd[2150]: cali9a68556fa29: Link UP Dec 16 13:06:10.985849 systemd-networkd[2150]: cali9a68556fa29: Gained carrier Dec 16 13:06:11.011980 containerd[2540]: 2025-12-16 13:06:10.799 [INFO][6021] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4515.1.0--a--5ae2bb3665-k8s-csi--node--driver--ctchn-eth0 csi-node-driver- calico-system 940a093b-83dc-454c-8522-5e1b1f40521f 744 0 2025-12-16 13:05:44 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:9d99788f7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4515.1.0-a-5ae2bb3665 csi-node-driver-ctchn eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali9a68556fa29 [] [] }} ContainerID="735cceeaead3ee9bccb8bbcfbdda78a2bdfe3283b03e22da8e550e400f501897" Namespace="calico-system" Pod="csi-node-driver-ctchn" WorkloadEndpoint="ci--4515.1.0--a--5ae2bb3665-k8s-csi--node--driver--ctchn-" Dec 16 13:06:11.011980 containerd[2540]: 2025-12-16 13:06:10.801 [INFO][6021] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="735cceeaead3ee9bccb8bbcfbdda78a2bdfe3283b03e22da8e550e400f501897" Namespace="calico-system" Pod="csi-node-driver-ctchn" WorkloadEndpoint="ci--4515.1.0--a--5ae2bb3665-k8s-csi--node--driver--ctchn-eth0" Dec 16 13:06:11.011980 containerd[2540]: 2025-12-16 13:06:10.855 [INFO][6032] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="735cceeaead3ee9bccb8bbcfbdda78a2bdfe3283b03e22da8e550e400f501897" HandleID="k8s-pod-network.735cceeaead3ee9bccb8bbcfbdda78a2bdfe3283b03e22da8e550e400f501897" Workload="ci--4515.1.0--a--5ae2bb3665-k8s-csi--node--driver--ctchn-eth0" Dec 16 13:06:11.011980 containerd[2540]: 2025-12-16 13:06:10.856 [INFO][6032] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="735cceeaead3ee9bccb8bbcfbdda78a2bdfe3283b03e22da8e550e400f501897" HandleID="k8s-pod-network.735cceeaead3ee9bccb8bbcfbdda78a2bdfe3283b03e22da8e550e400f501897" Workload="ci--4515.1.0--a--5ae2bb3665-k8s-csi--node--driver--ctchn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d55a0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4515.1.0-a-5ae2bb3665", "pod":"csi-node-driver-ctchn", "timestamp":"2025-12-16 13:06:10.855252605 +0000 UTC"}, Hostname:"ci-4515.1.0-a-5ae2bb3665", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 13:06:11.011980 containerd[2540]: 2025-12-16 13:06:10.856 [INFO][6032] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Dec 16 13:06:11.011980 containerd[2540]: 2025-12-16 13:06:10.856 [INFO][6032] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 13:06:11.011980 containerd[2540]: 2025-12-16 13:06:10.856 [INFO][6032] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4515.1.0-a-5ae2bb3665' Dec 16 13:06:11.011980 containerd[2540]: 2025-12-16 13:06:10.897 [INFO][6032] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.735cceeaead3ee9bccb8bbcfbdda78a2bdfe3283b03e22da8e550e400f501897" host="ci-4515.1.0-a-5ae2bb3665" Dec 16 13:06:11.011980 containerd[2540]: 2025-12-16 13:06:10.902 [INFO][6032] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4515.1.0-a-5ae2bb3665" Dec 16 13:06:11.011980 containerd[2540]: 2025-12-16 13:06:10.909 [INFO][6032] ipam/ipam.go 511: Trying affinity for 192.168.5.64/26 host="ci-4515.1.0-a-5ae2bb3665" Dec 16 13:06:11.011980 containerd[2540]: 2025-12-16 13:06:10.915 [INFO][6032] ipam/ipam.go 158: Attempting to load block cidr=192.168.5.64/26 host="ci-4515.1.0-a-5ae2bb3665" Dec 16 13:06:11.011980 containerd[2540]: 2025-12-16 13:06:10.922 [INFO][6032] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.5.64/26 host="ci-4515.1.0-a-5ae2bb3665" Dec 16 13:06:11.011980 containerd[2540]: 2025-12-16 13:06:10.923 [INFO][6032] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.5.64/26 handle="k8s-pod-network.735cceeaead3ee9bccb8bbcfbdda78a2bdfe3283b03e22da8e550e400f501897" host="ci-4515.1.0-a-5ae2bb3665" Dec 16 13:06:11.011980 containerd[2540]: 2025-12-16 13:06:10.942 [INFO][6032] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.735cceeaead3ee9bccb8bbcfbdda78a2bdfe3283b03e22da8e550e400f501897 Dec 16 13:06:11.011980 containerd[2540]: 2025-12-16 13:06:10.959 [INFO][6032] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.5.64/26 handle="k8s-pod-network.735cceeaead3ee9bccb8bbcfbdda78a2bdfe3283b03e22da8e550e400f501897" host="ci-4515.1.0-a-5ae2bb3665" Dec 16 13:06:11.011980 containerd[2540]: 2025-12-16 13:06:10.975 [INFO][6032] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.5.72/26] block=192.168.5.64/26 handle="k8s-pod-network.735cceeaead3ee9bccb8bbcfbdda78a2bdfe3283b03e22da8e550e400f501897" host="ci-4515.1.0-a-5ae2bb3665" Dec 16 13:06:11.011980 containerd[2540]: 2025-12-16 13:06:10.975 [INFO][6032] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.5.72/26] handle="k8s-pod-network.735cceeaead3ee9bccb8bbcfbdda78a2bdfe3283b03e22da8e550e400f501897" host="ci-4515.1.0-a-5ae2bb3665" Dec 16 13:06:11.011980 containerd[2540]: 2025-12-16 13:06:10.975 [INFO][6032] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
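The containerd entries above walk through Calico's IPAM path for the csi-node-driver pod step by step: take the host-wide IPAM lock, look up this node's block affinities, confirm the affinity for 192.168.5.64/26, claim one address under a per-container handle, and release the lock; the entries that follow then report the assigned address and the populated WorkloadEndpoint. As a rough illustration only (plain Python, not Calico code), the sketch below pulls those decisions back out of journal lines in this format; the regexes and the summarize_ipam helper are assumptions made for this example.

    import re

    # Patterns match the Calico ipam/ipam.go messages visible in the journal above.
    BLOCK_RE = re.compile(r"Attempting to load block cidr=(\S+)")
    CLAIM_RE = re.compile(r"Successfully claimed IPs: \[([^\]]+)\]")
    HANDLE_RE = re.compile(r'handle="([^"]+)"')

    def summarize_ipam(journal_lines):
        """Collect the block CIDR, claimed addresses, and IPAM handle from raw journal lines."""
        summary = {"block": None, "claimed": [], "handle": None}
        for line in journal_lines:
            if m := BLOCK_RE.search(line):
                summary["block"] = m.group(1)
            if m := CLAIM_RE.search(line):
                summary["claimed"] += [ip.strip() for ip in m.group(1).split(",")]
            if (m := HANDLE_RE.search(line)) and summary["handle"] is None:
                summary["handle"] = m.group(1)
        return summary

Fed the lines from this boot, it would report block 192.168.5.64/26 and the claimed address 192.168.5.72/26, matching the IPNetworks value written into the WorkloadEndpoint a few entries later.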
Dec 16 13:06:11.011980 containerd[2540]: 2025-12-16 13:06:10.975 [INFO][6032] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.5.72/26] IPv6=[] ContainerID="735cceeaead3ee9bccb8bbcfbdda78a2bdfe3283b03e22da8e550e400f501897" HandleID="k8s-pod-network.735cceeaead3ee9bccb8bbcfbdda78a2bdfe3283b03e22da8e550e400f501897" Workload="ci--4515.1.0--a--5ae2bb3665-k8s-csi--node--driver--ctchn-eth0" Dec 16 13:06:11.014942 containerd[2540]: 2025-12-16 13:06:10.978 [INFO][6021] cni-plugin/k8s.go 418: Populated endpoint ContainerID="735cceeaead3ee9bccb8bbcfbdda78a2bdfe3283b03e22da8e550e400f501897" Namespace="calico-system" Pod="csi-node-driver-ctchn" WorkloadEndpoint="ci--4515.1.0--a--5ae2bb3665-k8s-csi--node--driver--ctchn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--a--5ae2bb3665-k8s-csi--node--driver--ctchn-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"940a093b-83dc-454c-8522-5e1b1f40521f", ResourceVersion:"744", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 5, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-a-5ae2bb3665", ContainerID:"", Pod:"csi-node-driver-ctchn", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.5.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali9a68556fa29", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:06:11.014942 containerd[2540]: 2025-12-16 13:06:10.979 [INFO][6021] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.5.72/32] ContainerID="735cceeaead3ee9bccb8bbcfbdda78a2bdfe3283b03e22da8e550e400f501897" Namespace="calico-system" Pod="csi-node-driver-ctchn" WorkloadEndpoint="ci--4515.1.0--a--5ae2bb3665-k8s-csi--node--driver--ctchn-eth0" Dec 16 13:06:11.014942 containerd[2540]: 2025-12-16 13:06:10.980 [INFO][6021] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9a68556fa29 ContainerID="735cceeaead3ee9bccb8bbcfbdda78a2bdfe3283b03e22da8e550e400f501897" Namespace="calico-system" Pod="csi-node-driver-ctchn" WorkloadEndpoint="ci--4515.1.0--a--5ae2bb3665-k8s-csi--node--driver--ctchn-eth0" Dec 16 13:06:11.014942 containerd[2540]: 2025-12-16 13:06:10.983 [INFO][6021] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="735cceeaead3ee9bccb8bbcfbdda78a2bdfe3283b03e22da8e550e400f501897" Namespace="calico-system" Pod="csi-node-driver-ctchn" WorkloadEndpoint="ci--4515.1.0--a--5ae2bb3665-k8s-csi--node--driver--ctchn-eth0" Dec 16 13:06:11.014942 containerd[2540]: 2025-12-16 13:06:10.988 [INFO][6021] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="735cceeaead3ee9bccb8bbcfbdda78a2bdfe3283b03e22da8e550e400f501897" Namespace="calico-system" Pod="csi-node-driver-ctchn" WorkloadEndpoint="ci--4515.1.0--a--5ae2bb3665-k8s-csi--node--driver--ctchn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--a--5ae2bb3665-k8s-csi--node--driver--ctchn-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"940a093b-83dc-454c-8522-5e1b1f40521f", ResourceVersion:"744", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 5, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-a-5ae2bb3665", ContainerID:"735cceeaead3ee9bccb8bbcfbdda78a2bdfe3283b03e22da8e550e400f501897", Pod:"csi-node-driver-ctchn", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.5.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali9a68556fa29", MAC:"1a:63:32:90:d2:e7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:06:11.014942 containerd[2540]: 2025-12-16 13:06:11.008 [INFO][6021] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="735cceeaead3ee9bccb8bbcfbdda78a2bdfe3283b03e22da8e550e400f501897" Namespace="calico-system" Pod="csi-node-driver-ctchn" WorkloadEndpoint="ci--4515.1.0--a--5ae2bb3665-k8s-csi--node--driver--ctchn-eth0" Dec 16 13:06:11.043000 audit[6046]: NETFILTER_CFG table=filter:136 family=2 entries=52 op=nft_register_chain pid=6046 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 13:06:11.043000 audit[6046]: SYSCALL arch=c000003e syscall=46 success=yes exit=24312 a0=3 a1=7ffd6de23220 a2=0 a3=7ffd6de2320c items=0 ppid=5252 pid=6046 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:06:11.043000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 13:06:11.074749 containerd[2540]: time="2025-12-16T13:06:11.074707693Z" level=info msg="connecting to shim 735cceeaead3ee9bccb8bbcfbdda78a2bdfe3283b03e22da8e550e400f501897" address="unix:///run/containerd/s/2264f4a7e18118a135ee31b7be1f246054f8a0908ff95c933e327902c5a4fbae" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:06:11.120405 systemd[1]: Started cri-containerd-735cceeaead3ee9bccb8bbcfbdda78a2bdfe3283b03e22da8e550e400f501897.scope - libcontainer container 735cceeaead3ee9bccb8bbcfbdda78a2bdfe3283b03e22da8e550e400f501897. 
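The audit records interleaved with these containerd entries identify each process twice: human-readable comm=/exe= fields on the SYSCALL line, and a PROCTITLE record whose proctitle= value is the full command line encoded as hex with NUL-separated arguments, which is why those values look opaque. A minimal decoding sketch (plain Python, independent of auditd and of the tools logged here), using the iptables-nft-restore proctitle from the NETFILTER_CFG event above as input:

    def decode_proctitle(hex_blob: str) -> list[str]:
        # auditd records argv as one hex string; arguments are separated by NUL bytes.
        return bytes.fromhex(hex_blob).decode("utf-8", errors="replace").split("\x00")

    # proctitle= value copied from the audit record above.
    sample = (
        "69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368"
        "002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030"
    )
    print(decode_proctitle(sample))
    # -> ['iptables-nft-restore', '--noflush', '--verbose', '--wait', '10', '--wait-interval', '50000']

The runc PROCTITLE values decode the same way, to runc --root /run/containerd/runc/k8s.io --log /run/containerd/io.containerd.runtime.v2.task/k8s.io/<container ID>; auditd caps the recorded proctitle length, so the decoded container ID comes out cut short.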
Dec 16 13:06:11.168000 audit: BPF prog-id=280 op=LOAD Dec 16 13:06:11.170000 audit: BPF prog-id=281 op=LOAD Dec 16 13:06:11.170000 audit[6068]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=6055 pid=6068 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:06:11.170000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3733356363656561656164336565396263636238626263666264646137 Dec 16 13:06:11.170000 audit: BPF prog-id=281 op=UNLOAD Dec 16 13:06:11.170000 audit[6068]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=6055 pid=6068 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:06:11.170000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3733356363656561656164336565396263636238626263666264646137 Dec 16 13:06:11.170000 audit: BPF prog-id=282 op=LOAD Dec 16 13:06:11.170000 audit[6068]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=6055 pid=6068 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:06:11.170000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3733356363656561656164336565396263636238626263666264646137 Dec 16 13:06:11.170000 audit: BPF prog-id=283 op=LOAD Dec 16 13:06:11.170000 audit[6068]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=6055 pid=6068 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:06:11.170000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3733356363656561656164336565396263636238626263666264646137 Dec 16 13:06:11.170000 audit: BPF prog-id=283 op=UNLOAD Dec 16 13:06:11.170000 audit[6068]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=6055 pid=6068 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:06:11.170000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3733356363656561656164336565396263636238626263666264646137 Dec 16 13:06:11.170000 audit: BPF prog-id=282 op=UNLOAD Dec 16 13:06:11.170000 audit[6068]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=6055 pid=6068 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:06:11.170000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3733356363656561656164336565396263636238626263666264646137 Dec 16 13:06:11.170000 audit: BPF prog-id=284 op=LOAD Dec 16 13:06:11.170000 audit[6068]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=6055 pid=6068 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:06:11.170000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3733356363656561656164336565396263636238626263666264646137 Dec 16 13:06:11.225128 containerd[2540]: time="2025-12-16T13:06:11.225059863Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ctchn,Uid:940a093b-83dc-454c-8522-5e1b1f40521f,Namespace:calico-system,Attempt:0,} returns sandbox id \"735cceeaead3ee9bccb8bbcfbdda78a2bdfe3283b03e22da8e550e400f501897\"" Dec 16 13:06:11.228286 containerd[2540]: time="2025-12-16T13:06:11.228212640Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 16 13:06:11.284000 audit[6094]: NETFILTER_CFG table=filter:137 family=2 entries=14 op=nft_register_rule pid=6094 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:06:11.284000 audit[6094]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffca3edadc0 a2=0 a3=7ffca3edadac items=0 ppid=4127 pid=6094 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:06:11.284000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:06:11.288000 audit[6094]: NETFILTER_CFG table=nat:138 family=2 entries=20 op=nft_register_rule pid=6094 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:06:11.288000 audit[6094]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffca3edadc0 a2=0 a3=7ffca3edadac items=0 ppid=4127 pid=6094 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:06:11.288000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:06:11.494420 containerd[2540]: time="2025-12-16T13:06:11.494241857Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:06:11.500325 containerd[2540]: time="2025-12-16T13:06:11.500285774Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not 
found" Dec 16 13:06:11.500473 containerd[2540]: time="2025-12-16T13:06:11.500382615Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 16 13:06:11.500564 kubelet[4018]: E1216 13:06:11.500528 4018 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 13:06:11.500634 kubelet[4018]: E1216 13:06:11.500578 4018 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 13:06:11.500697 kubelet[4018]: E1216 13:06:11.500681 4018 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-ctchn_calico-system(940a093b-83dc-454c-8522-5e1b1f40521f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 16 13:06:11.501870 containerd[2540]: time="2025-12-16T13:06:11.501836673Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 16 13:06:11.756547 systemd-networkd[2150]: cali11436c62a92: Gained IPv6LL Dec 16 13:06:11.788999 containerd[2540]: time="2025-12-16T13:06:11.788950218Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:06:11.808395 containerd[2540]: time="2025-12-16T13:06:11.807473087Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 16 13:06:11.808502 containerd[2540]: time="2025-12-16T13:06:11.807499764Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 16 13:06:11.808729 kubelet[4018]: E1216 13:06:11.808687 4018 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 13:06:11.809034 kubelet[4018]: E1216 13:06:11.808744 4018 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 13:06:11.809034 kubelet[4018]: E1216 13:06:11.808831 4018 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-ctchn_calico-system(940a093b-83dc-454c-8522-5e1b1f40521f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: 
not found" logger="UnhandledError" Dec 16 13:06:11.809034 kubelet[4018]: E1216 13:06:11.808876 4018 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-ctchn" podUID="940a093b-83dc-454c-8522-5e1b1f40521f" Dec 16 13:06:11.820540 systemd-networkd[2150]: calia7ca3d5484a: Gained IPv6LL Dec 16 13:06:11.921941 kubelet[4018]: E1216 13:06:11.921894 4018 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-8c4454f6d-fzx24" podUID="021bd40b-8387-4f81-8ec5-64b895deb3c2" Dec 16 13:06:11.922127 kubelet[4018]: E1216 13:06:11.922087 4018 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7798f6444b-zjrsf" podUID="fa544c8c-af21-41f2-8ffb-1fe7c36b0bfb" Dec 16 13:06:11.922579 kubelet[4018]: E1216 13:06:11.922316 4018 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-ctchn" podUID="940a093b-83dc-454c-8522-5e1b1f40521f" Dec 16 13:06:12.716680 systemd-networkd[2150]: cali9a68556fa29: Gained IPv6LL Dec 16 13:06:12.925247 kubelet[4018]: E1216 13:06:12.925189 4018 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-ctchn" podUID="940a093b-83dc-454c-8522-5e1b1f40521f" Dec 16 13:06:17.713673 containerd[2540]: time="2025-12-16T13:06:17.713610616Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 16 13:06:17.984718 containerd[2540]: time="2025-12-16T13:06:17.983436734Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:06:17.989241 containerd[2540]: time="2025-12-16T13:06:17.988915169Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 16 13:06:17.989241 containerd[2540]: time="2025-12-16T13:06:17.989024210Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 16 13:06:17.989520 kubelet[4018]: E1216 13:06:17.989450 4018 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 13:06:17.990013 kubelet[4018]: E1216 13:06:17.989627 4018 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 13:06:17.990980 kubelet[4018]: E1216 13:06:17.990553 4018 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-66f67cd584-8rhwp_calico-system(7c146d92-4a81-4948-9e2f-1093c61dcd5c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 16 13:06:17.992387 containerd[2540]: time="2025-12-16T13:06:17.992132434Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 16 13:06:18.266630 containerd[2540]: time="2025-12-16T13:06:18.266079785Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:06:18.270923 containerd[2540]: time="2025-12-16T13:06:18.270873303Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 16 13:06:18.271045 containerd[2540]: time="2025-12-16T13:06:18.270997758Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 16 13:06:18.273346 kubelet[4018]: E1216 13:06:18.272527 4018 log.go:32] "PullImage from image service failed" err="rpc error: code = 
NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 13:06:18.273541 kubelet[4018]: E1216 13:06:18.273441 4018 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 13:06:18.273847 kubelet[4018]: E1216 13:06:18.273749 4018 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-66f67cd584-8rhwp_calico-system(7c146d92-4a81-4948-9e2f-1093c61dcd5c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 16 13:06:18.273847 kubelet[4018]: E1216 13:06:18.273808 4018 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-66f67cd584-8rhwp" podUID="7c146d92-4a81-4948-9e2f-1093c61dcd5c" Dec 16 13:06:21.715246 containerd[2540]: time="2025-12-16T13:06:21.715179637Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 16 13:06:22.001593 containerd[2540]: time="2025-12-16T13:06:22.001406450Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:06:22.004669 containerd[2540]: time="2025-12-16T13:06:22.004584236Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 16 13:06:22.004801 containerd[2540]: time="2025-12-16T13:06:22.004614391Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 16 13:06:22.005008 kubelet[4018]: E1216 13:06:22.004949 4018 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 13:06:22.005412 kubelet[4018]: E1216 13:06:22.005025 4018 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 13:06:22.005412 kubelet[4018]: E1216 13:06:22.005148 4018 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod 
goldmane-7c778bb748-kffxh_calico-system(02113441-a531-45ff-9a40-51f9ff37eeb2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 16 13:06:22.005412 kubelet[4018]: E1216 13:06:22.005194 4018 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-kffxh" podUID="02113441-a531-45ff-9a40-51f9ff37eeb2" Dec 16 13:06:22.006037 containerd[2540]: time="2025-12-16T13:06:22.006005977Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 13:06:22.286121 containerd[2540]: time="2025-12-16T13:06:22.285934993Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:06:22.289691 containerd[2540]: time="2025-12-16T13:06:22.289628477Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 13:06:22.289939 containerd[2540]: time="2025-12-16T13:06:22.289787608Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 13:06:22.290042 kubelet[4018]: E1216 13:06:22.289988 4018 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:06:22.290101 kubelet[4018]: E1216 13:06:22.290063 4018 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:06:22.290269 kubelet[4018]: E1216 13:06:22.290197 4018 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-7798f6444b-p9dhf_calico-apiserver(d35c67aa-255b-42a2-83b2-79e30256e265): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 13:06:22.290315 kubelet[4018]: E1216 13:06:22.290258 4018 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7798f6444b-p9dhf" podUID="d35c67aa-255b-42a2-83b2-79e30256e265" Dec 16 13:06:22.710730 containerd[2540]: time="2025-12-16T13:06:22.710649501Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 13:06:22.981704 containerd[2540]: time="2025-12-16T13:06:22.981522945Z" 
level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:06:22.988786 containerd[2540]: time="2025-12-16T13:06:22.988733007Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 13:06:22.988927 containerd[2540]: time="2025-12-16T13:06:22.988741004Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 13:06:22.990571 kubelet[4018]: E1216 13:06:22.990518 4018 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:06:22.990720 kubelet[4018]: E1216 13:06:22.990584 4018 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:06:22.990720 kubelet[4018]: E1216 13:06:22.990695 4018 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-7798f6444b-zjrsf_calico-apiserver(fa544c8c-af21-41f2-8ffb-1fe7c36b0bfb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 13:06:22.990771 kubelet[4018]: E1216 13:06:22.990736 4018 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7798f6444b-zjrsf" podUID="fa544c8c-af21-41f2-8ffb-1fe7c36b0bfb" Dec 16 13:06:24.711722 containerd[2540]: time="2025-12-16T13:06:24.711529193Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 16 13:06:24.986696 containerd[2540]: time="2025-12-16T13:06:24.986407325Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:06:24.993812 containerd[2540]: time="2025-12-16T13:06:24.993748682Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 16 13:06:24.994053 containerd[2540]: time="2025-12-16T13:06:24.993952541Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 16 13:06:24.994248 kubelet[4018]: E1216 13:06:24.994206 4018 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 13:06:24.995288 kubelet[4018]: E1216 13:06:24.994684 4018 kuberuntime_image.go:43] 
"Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 13:06:24.995460 kubelet[4018]: E1216 13:06:24.995439 4018 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-ctchn_calico-system(940a093b-83dc-454c-8522-5e1b1f40521f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 16 13:06:24.996523 containerd[2540]: time="2025-12-16T13:06:24.996498259Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 16 13:06:25.273662 containerd[2540]: time="2025-12-16T13:06:25.273228724Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:06:25.278270 containerd[2540]: time="2025-12-16T13:06:25.278203188Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 16 13:06:25.278673 containerd[2540]: time="2025-12-16T13:06:25.278317263Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 16 13:06:25.279145 kubelet[4018]: E1216 13:06:25.279065 4018 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 13:06:25.279935 kubelet[4018]: E1216 13:06:25.279243 4018 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 13:06:25.280203 kubelet[4018]: E1216 13:06:25.280101 4018 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-ctchn_calico-system(940a093b-83dc-454c-8522-5e1b1f40521f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 16 13:06:25.280358 kubelet[4018]: E1216 13:06:25.280295 4018 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-ctchn" podUID="940a093b-83dc-454c-8522-5e1b1f40521f" Dec 16 13:06:26.714043 containerd[2540]: time="2025-12-16T13:06:26.713707297Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 16 13:06:27.016576 containerd[2540]: time="2025-12-16T13:06:27.016406937Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:06:27.021634 containerd[2540]: time="2025-12-16T13:06:27.021597451Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 16 13:06:27.021726 containerd[2540]: time="2025-12-16T13:06:27.021693965Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 16 13:06:27.022405 kubelet[4018]: E1216 13:06:27.021883 4018 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 13:06:27.022405 kubelet[4018]: E1216 13:06:27.021944 4018 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 13:06:27.022405 kubelet[4018]: E1216 13:06:27.022047 4018 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-8c4454f6d-fzx24_calico-system(021bd40b-8387-4f81-8ec5-64b895deb3c2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 16 13:06:27.022405 kubelet[4018]: E1216 13:06:27.022091 4018 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-8c4454f6d-fzx24" podUID="021bd40b-8387-4f81-8ec5-64b895deb3c2" Dec 16 13:06:30.711066 kubelet[4018]: E1216 13:06:30.710982 4018 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-66f67cd584-8rhwp" podUID="7c146d92-4a81-4948-9e2f-1093c61dcd5c" Dec 16 13:06:35.715965 kubelet[4018]: E1216 13:06:35.714751 4018 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-kffxh" podUID="02113441-a531-45ff-9a40-51f9ff37eeb2" Dec 16 13:06:35.715965 kubelet[4018]: E1216 13:06:35.715822 4018 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7798f6444b-p9dhf" podUID="d35c67aa-255b-42a2-83b2-79e30256e265" Dec 16 13:06:36.710186 kubelet[4018]: E1216 13:06:36.710084 4018 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7798f6444b-zjrsf" podUID="fa544c8c-af21-41f2-8ffb-1fe7c36b0bfb" Dec 16 13:06:37.714735 kubelet[4018]: E1216 13:06:37.714677 4018 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-ctchn" podUID="940a093b-83dc-454c-8522-5e1b1f40521f" Dec 16 13:06:42.710473 kubelet[4018]: E1216 13:06:42.710379 4018 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-8c4454f6d-fzx24" podUID="021bd40b-8387-4f81-8ec5-64b895deb3c2" Dec 16 
13:06:43.715819 containerd[2540]: time="2025-12-16T13:06:43.714275355Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 16 13:06:43.989800 containerd[2540]: time="2025-12-16T13:06:43.989639636Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:06:43.993246 containerd[2540]: time="2025-12-16T13:06:43.993193036Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 16 13:06:43.995525 containerd[2540]: time="2025-12-16T13:06:43.993216941Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 16 13:06:43.995723 kubelet[4018]: E1216 13:06:43.995690 4018 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 13:06:43.996155 kubelet[4018]: E1216 13:06:43.995782 4018 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 13:06:43.996694 kubelet[4018]: E1216 13:06:43.996224 4018 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-66f67cd584-8rhwp_calico-system(7c146d92-4a81-4948-9e2f-1093c61dcd5c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 16 13:06:43.997576 containerd[2540]: time="2025-12-16T13:06:43.997554208Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 16 13:06:44.276732 containerd[2540]: time="2025-12-16T13:06:44.276182125Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:06:44.283202 containerd[2540]: time="2025-12-16T13:06:44.283063272Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 16 13:06:44.283202 containerd[2540]: time="2025-12-16T13:06:44.283097699Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 16 13:06:44.284199 kubelet[4018]: E1216 13:06:44.283521 4018 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 13:06:44.284199 kubelet[4018]: E1216 13:06:44.283585 4018 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not 
found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 13:06:44.284199 kubelet[4018]: E1216 13:06:44.283679 4018 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-66f67cd584-8rhwp_calico-system(7c146d92-4a81-4948-9e2f-1093c61dcd5c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 16 13:06:44.284199 kubelet[4018]: E1216 13:06:44.283725 4018 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-66f67cd584-8rhwp" podUID="7c146d92-4a81-4948-9e2f-1093c61dcd5c" Dec 16 13:06:46.711084 containerd[2540]: time="2025-12-16T13:06:46.710957695Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 13:06:47.012894 containerd[2540]: time="2025-12-16T13:06:47.012416526Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:06:47.016356 containerd[2540]: time="2025-12-16T13:06:47.016274999Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 13:06:47.016598 containerd[2540]: time="2025-12-16T13:06:47.016584945Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 13:06:47.016919 kubelet[4018]: E1216 13:06:47.016865 4018 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:06:47.017629 kubelet[4018]: E1216 13:06:47.016991 4018 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:06:47.017629 kubelet[4018]: E1216 13:06:47.017463 4018 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-7798f6444b-p9dhf_calico-apiserver(d35c67aa-255b-42a2-83b2-79e30256e265): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 13:06:47.017629 kubelet[4018]: E1216 13:06:47.017520 4018 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = 
NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7798f6444b-p9dhf" podUID="d35c67aa-255b-42a2-83b2-79e30256e265" Dec 16 13:06:49.711664 containerd[2540]: time="2025-12-16T13:06:49.711602360Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 13:06:49.988240 containerd[2540]: time="2025-12-16T13:06:49.987962104Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:06:49.991811 containerd[2540]: time="2025-12-16T13:06:49.991661310Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 13:06:49.991811 containerd[2540]: time="2025-12-16T13:06:49.991780836Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 13:06:49.992235 kubelet[4018]: E1216 13:06:49.992178 4018 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:06:49.992866 kubelet[4018]: E1216 13:06:49.992641 4018 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:06:49.992866 kubelet[4018]: E1216 13:06:49.992781 4018 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-7798f6444b-zjrsf_calico-apiserver(fa544c8c-af21-41f2-8ffb-1fe7c36b0bfb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 13:06:49.992866 kubelet[4018]: E1216 13:06:49.992823 4018 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7798f6444b-zjrsf" podUID="fa544c8c-af21-41f2-8ffb-1fe7c36b0bfb" Dec 16 13:06:50.714548 containerd[2540]: time="2025-12-16T13:06:50.714490976Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 16 13:06:50.984613 containerd[2540]: time="2025-12-16T13:06:50.984417625Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:06:50.988378 containerd[2540]: time="2025-12-16T13:06:50.988160343Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 16 13:06:50.988378 containerd[2540]: time="2025-12-16T13:06:50.988308608Z" 
level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 16 13:06:50.989197 kubelet[4018]: E1216 13:06:50.988569 4018 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 13:06:50.989197 kubelet[4018]: E1216 13:06:50.988637 4018 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 13:06:50.989197 kubelet[4018]: E1216 13:06:50.988744 4018 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-kffxh_calico-system(02113441-a531-45ff-9a40-51f9ff37eeb2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 16 13:06:50.989197 kubelet[4018]: E1216 13:06:50.988789 4018 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-kffxh" podUID="02113441-a531-45ff-9a40-51f9ff37eeb2" Dec 16 13:06:52.712372 containerd[2540]: time="2025-12-16T13:06:52.711566206Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 16 13:06:52.984609 containerd[2540]: time="2025-12-16T13:06:52.984424138Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:06:52.990937 containerd[2540]: time="2025-12-16T13:06:52.990821848Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 16 13:06:52.991095 containerd[2540]: time="2025-12-16T13:06:52.990881148Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 16 13:06:52.991382 kubelet[4018]: E1216 13:06:52.991314 4018 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 13:06:52.991759 kubelet[4018]: E1216 13:06:52.991405 4018 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 13:06:52.991759 kubelet[4018]: E1216 13:06:52.991685 4018 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-ctchn_calico-system(940a093b-83dc-454c-8522-5e1b1f40521f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack 
image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 16 13:06:52.994374 containerd[2540]: time="2025-12-16T13:06:52.993637320Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 16 13:06:53.266426 containerd[2540]: time="2025-12-16T13:06:53.265893813Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:06:53.269673 containerd[2540]: time="2025-12-16T13:06:53.269614821Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 16 13:06:53.271373 containerd[2540]: time="2025-12-16T13:06:53.269817401Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 16 13:06:53.271499 kubelet[4018]: E1216 13:06:53.270066 4018 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 13:06:53.271499 kubelet[4018]: E1216 13:06:53.270118 4018 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 13:06:53.271499 kubelet[4018]: E1216 13:06:53.270209 4018 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-ctchn_calico-system(940a093b-83dc-454c-8522-5e1b1f40521f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 16 13:06:53.271499 kubelet[4018]: E1216 13:06:53.270259 4018 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-ctchn" podUID="940a093b-83dc-454c-8522-5e1b1f40521f" Dec 16 13:06:56.713116 containerd[2540]: time="2025-12-16T13:06:56.713050440Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 16 13:06:56.714545 kubelet[4018]: E1216 13:06:56.713983 4018 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code 
= NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-66f67cd584-8rhwp" podUID="7c146d92-4a81-4948-9e2f-1093c61dcd5c" Dec 16 13:06:56.993807 containerd[2540]: time="2025-12-16T13:06:56.993247326Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:06:56.997463 containerd[2540]: time="2025-12-16T13:06:56.997412221Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 16 13:06:56.997605 containerd[2540]: time="2025-12-16T13:06:56.997538345Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 16 13:06:56.998592 kubelet[4018]: E1216 13:06:56.998540 4018 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 13:06:56.998710 kubelet[4018]: E1216 13:06:56.998611 4018 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 13:06:56.998735 kubelet[4018]: E1216 13:06:56.998708 4018 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-8c4454f6d-fzx24_calico-system(021bd40b-8387-4f81-8ec5-64b895deb3c2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 16 13:06:56.998773 kubelet[4018]: E1216 13:06:56.998749 4018 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-8c4454f6d-fzx24" podUID="021bd40b-8387-4f81-8ec5-64b895deb3c2" Dec 16 13:06:58.256938 update_engine[2508]: I20251216 13:06:58.256470 2508 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Dec 16 13:06:58.256938 update_engine[2508]: I20251216 13:06:58.256544 2508 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Dec 16 13:06:58.256938 update_engine[2508]: I20251216 
13:06:58.256779 2508 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Dec 16 13:06:58.260626 update_engine[2508]: I20251216 13:06:58.259954 2508 omaha_request_params.cc:62] Current group set to beta Dec 16 13:06:58.260626 update_engine[2508]: I20251216 13:06:58.260119 2508 update_attempter.cc:499] Already updated boot flags. Skipping. Dec 16 13:06:58.260626 update_engine[2508]: I20251216 13:06:58.260124 2508 update_attempter.cc:643] Scheduling an action processor start. Dec 16 13:06:58.260626 update_engine[2508]: I20251216 13:06:58.260150 2508 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Dec 16 13:06:58.260626 update_engine[2508]: I20251216 13:06:58.260192 2508 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Dec 16 13:06:58.260626 update_engine[2508]: I20251216 13:06:58.260244 2508 omaha_request_action.cc:271] Posting an Omaha request to disabled Dec 16 13:06:58.260626 update_engine[2508]: I20251216 13:06:58.260248 2508 omaha_request_action.cc:272] Request: Dec 16 13:06:58.260626 update_engine[2508]: Dec 16 13:06:58.260626 update_engine[2508]: Dec 16 13:06:58.260626 update_engine[2508]: Dec 16 13:06:58.260626 update_engine[2508]: Dec 16 13:06:58.260626 update_engine[2508]: Dec 16 13:06:58.260626 update_engine[2508]: Dec 16 13:06:58.260626 update_engine[2508]: Dec 16 13:06:58.260626 update_engine[2508]: Dec 16 13:06:58.260626 update_engine[2508]: I20251216 13:06:58.260258 2508 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Dec 16 13:06:58.262780 locksmithd[2596]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Dec 16 13:06:58.264151 update_engine[2508]: I20251216 13:06:58.263931 2508 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Dec 16 13:06:58.265384 update_engine[2508]: I20251216 13:06:58.265326 2508 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Dec 16 13:06:58.292502 update_engine[2508]: E20251216 13:06:58.292457 2508 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Dec 16 13:06:58.292700 update_engine[2508]: I20251216 13:06:58.292685 2508 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Dec 16 13:06:58.711310 kubelet[4018]: E1216 13:06:58.711110 4018 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7798f6444b-p9dhf" podUID="d35c67aa-255b-42a2-83b2-79e30256e265" Dec 16 13:07:04.710991 kubelet[4018]: E1216 13:07:04.710867 4018 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-kffxh" podUID="02113441-a531-45ff-9a40-51f9ff37eeb2" Dec 16 13:07:05.712387 kubelet[4018]: E1216 13:07:05.712142 4018 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-ctchn" podUID="940a093b-83dc-454c-8522-5e1b1f40521f" Dec 16 13:07:05.712387 kubelet[4018]: E1216 13:07:05.712269 4018 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7798f6444b-zjrsf" podUID="fa544c8c-af21-41f2-8ffb-1fe7c36b0bfb" Dec 16 13:07:07.073000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.200.4.43:22-10.200.16.10:34386 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:07:07.073329 systemd[1]: Started sshd@7-10.200.4.43:22-10.200.16.10:34386.service - OpenSSH per-connection server daemon (10.200.16.10:34386). 
Dec 16 13:07:07.081359 kernel: kauditd_printk_skb: 227 callbacks suppressed Dec 16 13:07:07.081472 kernel: audit: type=1130 audit(1765890427.073:778): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.200.4.43:22-10.200.16.10:34386 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:07:07.583000 audit[6165]: USER_ACCT pid=6165 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:07.589388 sshd-session[6165]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:07:07.590335 sshd[6165]: Accepted publickey for core from 10.200.16.10 port 34386 ssh2: RSA SHA256:XB2dojyG3S3qIBG43pJfS4qRaZI6Gzjw37XrOBzISwI Dec 16 13:07:07.590498 kernel: audit: type=1101 audit(1765890427.583:779): pid=6165 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:07.588000 audit[6165]: CRED_ACQ pid=6165 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:07.598684 kernel: audit: type=1103 audit(1765890427.588:780): pid=6165 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:07.599296 kernel: audit: type=1006 audit(1765890427.588:781): pid=6165 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1 Dec 16 13:07:07.588000 audit[6165]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffedcc16ba0 a2=3 a3=0 items=0 ppid=1 pid=6165 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:07.604592 kernel: audit: type=1300 audit(1765890427.588:781): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffedcc16ba0 a2=3 a3=0 items=0 ppid=1 pid=6165 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:07.588000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 13:07:07.606652 kernel: audit: type=1327 audit(1765890427.588:781): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 13:07:07.609597 systemd-logind[2506]: New session 10 of user core. Dec 16 13:07:07.616569 systemd[1]: Started session-10.scope - Session 10 of User core. 
Dec 16 13:07:07.619000 audit[6165]: USER_START pid=6165 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:07.626373 kernel: audit: type=1105 audit(1765890427.619:782): pid=6165 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:07.626000 audit[6168]: CRED_ACQ pid=6168 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:07.633369 kernel: audit: type=1103 audit(1765890427.626:783): pid=6168 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:07.713280 kubelet[4018]: E1216 13:07:07.713234 4018 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-8c4454f6d-fzx24" podUID="021bd40b-8387-4f81-8ec5-64b895deb3c2" Dec 16 13:07:07.966335 sshd[6168]: Connection closed by 10.200.16.10 port 34386 Dec 16 13:07:07.966973 sshd-session[6165]: pam_unix(sshd:session): session closed for user core Dec 16 13:07:07.969000 audit[6165]: USER_END pid=6165 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:07.976359 kernel: audit: type=1106 audit(1765890427.969:784): pid=6165 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:07.980776 systemd[1]: sshd@7-10.200.4.43:22-10.200.16.10:34386.service: Deactivated successfully. Dec 16 13:07:07.982907 systemd[1]: session-10.scope: Deactivated successfully. 
Dec 16 13:07:07.975000 audit[6165]: CRED_DISP pid=6165 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:07.990669 kernel: audit: type=1104 audit(1765890427.975:785): pid=6165 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:07.991820 systemd-logind[2506]: Session 10 logged out. Waiting for processes to exit. Dec 16 13:07:07.993322 systemd-logind[2506]: Removed session 10. Dec 16 13:07:07.980000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.200.4.43:22-10.200.16.10:34386 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:07:08.210242 update_engine[2508]: I20251216 13:07:08.209503 2508 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Dec 16 13:07:08.210242 update_engine[2508]: I20251216 13:07:08.209659 2508 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Dec 16 13:07:08.210242 update_engine[2508]: I20251216 13:07:08.210189 2508 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Dec 16 13:07:08.220821 update_engine[2508]: E20251216 13:07:08.220679 2508 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Dec 16 13:07:08.221042 update_engine[2508]: I20251216 13:07:08.221014 2508 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Dec 16 13:07:08.710943 kubelet[4018]: E1216 13:07:08.710843 4018 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-66f67cd584-8rhwp" podUID="7c146d92-4a81-4948-9e2f-1093c61dcd5c" Dec 16 13:07:11.712227 kubelet[4018]: E1216 13:07:11.712168 4018 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7798f6444b-p9dhf" podUID="d35c67aa-255b-42a2-83b2-79e30256e265" Dec 16 13:07:13.092944 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 13:07:13.093114 kernel: audit: type=1130 audit(1765890433.080:787): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 
msg='unit=sshd@8-10.200.4.43:22-10.200.16.10:33032 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:07:13.080000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.200.4.43:22-10.200.16.10:33032 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:07:13.081669 systemd[1]: Started sshd@8-10.200.4.43:22-10.200.16.10:33032.service - OpenSSH per-connection server daemon (10.200.16.10:33032). Dec 16 13:07:13.607000 audit[6204]: USER_ACCT pid=6204 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:13.615947 kernel: audit: type=1101 audit(1765890433.607:788): pid=6204 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:13.616011 kernel: audit: type=1103 audit(1765890433.610:789): pid=6204 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:13.610000 audit[6204]: CRED_ACQ pid=6204 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:13.615608 sshd-session[6204]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:07:13.616500 sshd[6204]: Accepted publickey for core from 10.200.16.10 port 33032 ssh2: RSA SHA256:XB2dojyG3S3qIBG43pJfS4qRaZI6Gzjw37XrOBzISwI Dec 16 13:07:13.618211 kernel: audit: type=1006 audit(1765890433.610:790): pid=6204 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1 Dec 16 13:07:13.620366 kernel: audit: type=1300 audit(1765890433.610:790): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffef27fda10 a2=3 a3=0 items=0 ppid=1 pid=6204 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:13.610000 audit[6204]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffef27fda10 a2=3 a3=0 items=0 ppid=1 pid=6204 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:13.610000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 13:07:13.626422 kernel: audit: type=1327 audit(1765890433.610:790): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 13:07:13.626590 systemd-logind[2506]: New session 11 of user core. Dec 16 13:07:13.633560 systemd[1]: Started session-11.scope - Session 11 of User core. 
Dec 16 13:07:13.641450 kernel: audit: type=1105 audit(1765890433.634:791): pid=6204 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:13.634000 audit[6204]: USER_START pid=6204 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:13.640000 audit[6207]: CRED_ACQ pid=6207 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:13.646366 kernel: audit: type=1103 audit(1765890433.640:792): pid=6207 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:13.968252 sshd[6207]: Connection closed by 10.200.16.10 port 33032 Dec 16 13:07:13.969001 sshd-session[6204]: pam_unix(sshd:session): session closed for user core Dec 16 13:07:13.969000 audit[6204]: USER_END pid=6204 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:13.981124 kernel: audit: type=1106 audit(1765890433.969:793): pid=6204 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:13.981207 kernel: audit: type=1104 audit(1765890433.969:794): pid=6204 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:13.969000 audit[6204]: CRED_DISP pid=6204 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:13.977073 systemd-logind[2506]: Session 11 logged out. Waiting for processes to exit. Dec 16 13:07:13.979828 systemd[1]: sshd@8-10.200.4.43:22-10.200.16.10:33032.service: Deactivated successfully. Dec 16 13:07:13.978000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.200.4.43:22-10.200.16.10:33032 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:07:13.983483 systemd[1]: session-11.scope: Deactivated successfully. Dec 16 13:07:13.988774 systemd-logind[2506]: Removed session 11. 
Dec 16 13:07:15.712526 kubelet[4018]: E1216 13:07:15.712466 4018 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-kffxh" podUID="02113441-a531-45ff-9a40-51f9ff37eeb2" Dec 16 13:07:18.211318 update_engine[2508]: I20251216 13:07:18.211138 2508 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Dec 16 13:07:18.211318 update_engine[2508]: I20251216 13:07:18.211279 2508 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Dec 16 13:07:18.212027 update_engine[2508]: I20251216 13:07:18.211971 2508 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Dec 16 13:07:18.229225 update_engine[2508]: E20251216 13:07:18.229178 2508 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Dec 16 13:07:18.229380 update_engine[2508]: I20251216 13:07:18.229291 2508 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Dec 16 13:07:18.709814 kubelet[4018]: E1216 13:07:18.709765 4018 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7798f6444b-zjrsf" podUID="fa544c8c-af21-41f2-8ffb-1fe7c36b0bfb" Dec 16 13:07:19.076000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.200.4.43:22-10.200.16.10:33038 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:07:19.081710 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 13:07:19.082019 kernel: audit: type=1130 audit(1765890439.076:796): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.200.4.43:22-10.200.16.10:33038 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:07:19.077961 systemd[1]: Started sshd@9-10.200.4.43:22-10.200.16.10:33038.service - OpenSSH per-connection server daemon (10.200.16.10:33038). 
Dec 16 13:07:19.598000 audit[6221]: USER_ACCT pid=6221 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:19.603105 sshd[6221]: Accepted publickey for core from 10.200.16.10 port 33038 ssh2: RSA SHA256:XB2dojyG3S3qIBG43pJfS4qRaZI6Gzjw37XrOBzISwI Dec 16 13:07:19.605440 kernel: audit: type=1101 audit(1765890439.598:797): pid=6221 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:19.606423 sshd-session[6221]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:07:19.604000 audit[6221]: CRED_ACQ pid=6221 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:19.612389 kernel: audit: type=1103 audit(1765890439.604:798): pid=6221 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:19.618199 kernel: audit: type=1006 audit(1765890439.604:799): pid=6221 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=12 res=1 Dec 16 13:07:19.618463 kernel: audit: type=1300 audit(1765890439.604:799): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc568efe40 a2=3 a3=0 items=0 ppid=1 pid=6221 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:19.604000 audit[6221]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc568efe40 a2=3 a3=0 items=0 ppid=1 pid=6221 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:19.604000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 13:07:19.626432 kernel: audit: type=1327 audit(1765890439.604:799): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 13:07:19.630443 systemd-logind[2506]: New session 12 of user core. Dec 16 13:07:19.637329 systemd[1]: Started session-12.scope - Session 12 of User core. 
Dec 16 13:07:19.641000 audit[6221]: USER_START pid=6221 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 13:07:19.647416 kernel: audit: type=1105 audit(1765890439.641:800): pid=6221 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 13:07:19.648000 audit[6224]: CRED_ACQ pid=6224 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 13:07:19.659374 kernel: audit: type=1103 audit(1765890439.648:801): pid=6224 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 13:07:19.717230 kubelet[4018]: E1216 13:07:19.716790 4018 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-ctchn" podUID="940a093b-83dc-454c-8522-5e1b1f40521f"
Dec 16 13:07:19.943361 sshd[6224]: Connection closed by 10.200.16.10 port 33038
Dec 16 13:07:19.944578 sshd-session[6221]: pam_unix(sshd:session): session closed for user core
Dec 16 13:07:19.945000 audit[6221]: USER_END pid=6221 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 13:07:19.960240 kernel: audit: type=1106 audit(1765890439.945:802): pid=6221 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 13:07:19.960321 kernel: audit: type=1104 audit(1765890439.945:803): pid=6221 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 13:07:19.945000 audit[6221]: CRED_DISP pid=6221 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 13:07:19.956320 systemd[1]: sshd@9-10.200.4.43:22-10.200.16.10:33038.service: Deactivated successfully.
Dec 16 13:07:19.957000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.200.4.43:22-10.200.16.10:33038 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:07:19.961428 systemd[1]: session-12.scope: Deactivated successfully.
Dec 16 13:07:19.963077 systemd-logind[2506]: Session 12 logged out. Waiting for processes to exit.
Dec 16 13:07:19.964040 systemd-logind[2506]: Removed session 12.
Dec 16 13:07:20.711366 kubelet[4018]: E1216 13:07:20.710923 4018 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-8c4454f6d-fzx24" podUID="021bd40b-8387-4f81-8ec5-64b895deb3c2"
Dec 16 13:07:22.712372 kubelet[4018]: E1216 13:07:22.712297 4018 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-66f67cd584-8rhwp" podUID="7c146d92-4a81-4948-9e2f-1093c61dcd5c"
Dec 16 13:07:24.709984 kubelet[4018]: E1216 13:07:24.709918 4018 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7798f6444b-p9dhf" podUID="d35c67aa-255b-42a2-83b2-79e30256e265"
Dec 16 13:07:25.052000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.200.4.43:22-10.200.16.10:42476 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:07:25.053660 systemd[1]: Started sshd@10-10.200.4.43:22-10.200.16.10:42476.service - OpenSSH per-connection server daemon (10.200.16.10:42476).
Dec 16 13:07:25.054952 kernel: kauditd_printk_skb: 1 callbacks suppressed
Dec 16 13:07:25.054993 kernel: audit: type=1130 audit(1765890445.052:805): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.200.4.43:22-10.200.16.10:42476 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:07:25.566000 audit[6245]: USER_ACCT pid=6245 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 13:07:25.577358 kernel: audit: type=1101 audit(1765890445.566:806): pid=6245 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 13:07:25.577430 sshd[6245]: Accepted publickey for core from 10.200.16.10 port 42476 ssh2: RSA SHA256:XB2dojyG3S3qIBG43pJfS4qRaZI6Gzjw37XrOBzISwI
Dec 16 13:07:25.579582 sshd-session[6245]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:07:25.577000 audit[6245]: CRED_ACQ pid=6245 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 13:07:25.588825 kernel: audit: type=1103 audit(1765890445.577:807): pid=6245 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 13:07:25.588923 kernel: audit: type=1006 audit(1765890445.577:808): pid=6245 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=13 res=1
Dec 16 13:07:25.577000 audit[6245]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd161f9260 a2=3 a3=0 items=0 ppid=1 pid=6245 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 13:07:25.593328 kernel: audit: type=1300 audit(1765890445.577:808): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd161f9260 a2=3 a3=0 items=0 ppid=1 pid=6245 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 13:07:25.577000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Dec 16 13:07:25.597357 kernel: audit: type=1327 audit(1765890445.577:808): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Dec 16 13:07:25.600817 systemd-logind[2506]: New session 13 of user core.
Dec 16 13:07:25.606762 systemd[1]: Started session-13.scope - Session 13 of User core.
Dec 16 13:07:25.608000 audit[6245]: USER_START pid=6245 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 13:07:25.614358 kernel: audit: type=1105 audit(1765890445.608:809): pid=6245 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 13:07:25.613000 audit[6248]: CRED_ACQ pid=6248 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 13:07:25.623404 kernel: audit: type=1103 audit(1765890445.613:810): pid=6248 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 13:07:25.981615 sshd[6248]: Connection closed by 10.200.16.10 port 42476
Dec 16 13:07:25.984652 sshd-session[6245]: pam_unix(sshd:session): session closed for user core
Dec 16 13:07:25.985000 audit[6245]: USER_END pid=6245 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 13:07:25.991202 systemd[1]: sshd@10-10.200.4.43:22-10.200.16.10:42476.service: Deactivated successfully.
Dec 16 13:07:25.994446 kernel: audit: type=1106 audit(1765890445.985:811): pid=6245 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 13:07:25.994927 systemd[1]: session-13.scope: Deactivated successfully.
Dec 16 13:07:25.997954 systemd-logind[2506]: Session 13 logged out. Waiting for processes to exit.
Dec 16 13:07:25.985000 audit[6245]: CRED_DISP pid=6245 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 13:07:26.006363 kernel: audit: type=1104 audit(1765890445.985:812): pid=6245 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 13:07:26.006718 systemd-logind[2506]: Removed session 13.
Dec 16 13:07:25.991000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.200.4.43:22-10.200.16.10:42476 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:07:28.211688 update_engine[2508]: I20251216 13:07:28.211601 2508 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Dec 16 13:07:28.212214 update_engine[2508]: I20251216 13:07:28.211730 2508 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Dec 16 13:07:28.212214 update_engine[2508]: I20251216 13:07:28.212175 2508 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Dec 16 13:07:28.243988 update_engine[2508]: E20251216 13:07:28.243929 2508 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found)
Dec 16 13:07:28.244161 update_engine[2508]: I20251216 13:07:28.244050 2508 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Dec 16 13:07:28.244161 update_engine[2508]: I20251216 13:07:28.244060 2508 omaha_request_action.cc:617] Omaha request response:
Dec 16 13:07:28.244207 update_engine[2508]: E20251216 13:07:28.244165 2508 omaha_request_action.cc:636] Omaha request network transfer failed.
Dec 16 13:07:28.244207 update_engine[2508]: I20251216 13:07:28.244187 2508 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing.
Dec 16 13:07:28.244207 update_engine[2508]: I20251216 13:07:28.244192 2508 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Dec 16 13:07:28.244207 update_engine[2508]: I20251216 13:07:28.244196 2508 update_attempter.cc:306] Processing Done.
Dec 16 13:07:28.244295 update_engine[2508]: E20251216 13:07:28.244220 2508 update_attempter.cc:619] Update failed.
Dec 16 13:07:28.244295 update_engine[2508]: I20251216 13:07:28.244225 2508 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse
Dec 16 13:07:28.244295 update_engine[2508]: I20251216 13:07:28.244230 2508 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse)
Dec 16 13:07:28.244295 update_engine[2508]: I20251216 13:07:28.244236 2508 payload_state.cc:103] Ignoring failures until we get a valid Omaha response.
Dec 16 13:07:28.245014 update_engine[2508]: I20251216 13:07:28.244594 2508 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Dec 16 13:07:28.245014 update_engine[2508]: I20251216 13:07:28.244649 2508 omaha_request_action.cc:271] Posting an Omaha request to disabled
Dec 16 13:07:28.245014 update_engine[2508]: I20251216 13:07:28.244654 2508 omaha_request_action.cc:272] Request:
Dec 16 13:07:28.245014 update_engine[2508]:
Dec 16 13:07:28.245014 update_engine[2508]:
Dec 16 13:07:28.245014 update_engine[2508]:
Dec 16 13:07:28.245014 update_engine[2508]:
Dec 16 13:07:28.245014 update_engine[2508]:
Dec 16 13:07:28.245014 update_engine[2508]:
Dec 16 13:07:28.245014 update_engine[2508]: I20251216 13:07:28.244660 2508 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Dec 16 13:07:28.245014 update_engine[2508]: I20251216 13:07:28.244681 2508 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Dec 16 13:07:28.245014 update_engine[2508]: I20251216 13:07:28.244984 2508 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Dec 16 13:07:28.245247 locksmithd[2596]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
Dec 16 13:07:28.276179 update_engine[2508]: E20251216 13:07:28.276134 2508 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found)
Dec 16 13:07:28.276303 update_engine[2508]: I20251216 13:07:28.276201 2508 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Dec 16 13:07:28.276303 update_engine[2508]: I20251216 13:07:28.276207 2508 omaha_request_action.cc:617] Omaha request response:
Dec 16 13:07:28.276303 update_engine[2508]: I20251216 13:07:28.276214 2508 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Dec 16 13:07:28.276303 update_engine[2508]: I20251216 13:07:28.276219 2508 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Dec 16 13:07:28.276303 update_engine[2508]: I20251216 13:07:28.276223 2508 update_attempter.cc:306] Processing Done.
Dec 16 13:07:28.276303 update_engine[2508]: I20251216 13:07:28.276229 2508 update_attempter.cc:310] Error event sent.
Dec 16 13:07:28.276303 update_engine[2508]: I20251216 13:07:28.276239 2508 update_check_scheduler.cc:74] Next update check in 48m54s
Dec 16 13:07:28.276685 locksmithd[2596]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
Dec 16 13:07:28.711159 kubelet[4018]: E1216 13:07:28.711097 4018 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-kffxh" podUID="02113441-a531-45ff-9a40-51f9ff37eeb2"
Dec 16 13:07:29.710275 kubelet[4018]: E1216 13:07:29.710212 4018 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7798f6444b-zjrsf" podUID="fa544c8c-af21-41f2-8ffb-1fe7c36b0bfb"
Dec 16 13:07:31.088611 systemd[1]: Started sshd@11-10.200.4.43:22-10.200.16.10:60934.service - OpenSSH per-connection server daemon (10.200.16.10:60934).
Dec 16 13:07:31.095646 kernel: kauditd_printk_skb: 1 callbacks suppressed
Dec 16 13:07:31.095772 kernel: audit: type=1130 audit(1765890451.088:814): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.200.4.43:22-10.200.16.10:60934 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:07:31.088000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.200.4.43:22-10.200.16.10:60934 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:07:31.596945 sshd[6265]: Accepted publickey for core from 10.200.16.10 port 60934 ssh2: RSA SHA256:XB2dojyG3S3qIBG43pJfS4qRaZI6Gzjw37XrOBzISwI
Dec 16 13:07:31.596000 audit[6265]: USER_ACCT pid=6265 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 13:07:31.599576 sshd-session[6265]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:07:31.604596 kernel: audit: type=1101 audit(1765890451.596:815): pid=6265 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 13:07:31.597000 audit[6265]: CRED_ACQ pid=6265 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 13:07:31.617290 systemd-logind[2506]: New session 14 of user core.
Dec 16 13:07:31.623083 kernel: audit: type=1103 audit(1765890451.597:816): pid=6265 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 13:07:31.623238 kernel: audit: type=1006 audit(1765890451.597:817): pid=6265 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=14 res=1
Dec 16 13:07:31.597000 audit[6265]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc2fe21fd0 a2=3 a3=0 items=0 ppid=1 pid=6265 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 13:07:31.629308 kernel: audit: type=1300 audit(1765890451.597:817): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc2fe21fd0 a2=3 a3=0 items=0 ppid=1 pid=6265 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 13:07:31.629589 systemd[1]: Started session-14.scope - Session 14 of User core.
Dec 16 13:07:31.597000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Dec 16 13:07:31.633410 kernel: audit: type=1327 audit(1765890451.597:817): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Dec 16 13:07:31.634000 audit[6265]: USER_START pid=6265 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 13:07:31.642367 kernel: audit: type=1105 audit(1765890451.634:818): pid=6265 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 13:07:31.643000 audit[6268]: CRED_ACQ pid=6268 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 13:07:31.650361 kernel: audit: type=1103 audit(1765890451.643:819): pid=6268 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 13:07:32.004011 sshd[6268]: Connection closed by 10.200.16.10 port 60934
Dec 16 13:07:32.004565 sshd-session[6265]: pam_unix(sshd:session): session closed for user core
Dec 16 13:07:32.007000 audit[6265]: USER_END pid=6265 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 13:07:32.017370 kernel: audit: type=1106 audit(1765890452.007:820): pid=6265 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 13:07:32.016000 audit[6265]: CRED_DISP pid=6265 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 13:07:32.020147 systemd[1]: sshd@11-10.200.4.43:22-10.200.16.10:60934.service: Deactivated successfully.
Dec 16 13:07:32.024905 systemd[1]: session-14.scope: Deactivated successfully.
Dec 16 13:07:32.027360 kernel: audit: type=1104 audit(1765890452.016:821): pid=6265 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 13:07:32.020000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.200.4.43:22-10.200.16.10:60934 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:07:32.030794 systemd-logind[2506]: Session 14 logged out. Waiting for processes to exit.
Dec 16 13:07:32.031572 systemd-logind[2506]: Removed session 14.
Dec 16 13:07:32.116474 systemd[1]: Started sshd@12-10.200.4.43:22-10.200.16.10:60936.service - OpenSSH per-connection server daemon (10.200.16.10:60936).
Dec 16 13:07:32.116000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.200.4.43:22-10.200.16.10:60936 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:07:32.632000 audit[6281]: USER_ACCT pid=6281 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 13:07:32.633210 sshd[6281]: Accepted publickey for core from 10.200.16.10 port 60936 ssh2: RSA SHA256:XB2dojyG3S3qIBG43pJfS4qRaZI6Gzjw37XrOBzISwI
Dec 16 13:07:32.634000 audit[6281]: CRED_ACQ pid=6281 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 13:07:32.634000 audit[6281]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcd79865c0 a2=3 a3=0 items=0 ppid=1 pid=6281 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 13:07:32.634000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Dec 16 13:07:32.634774 sshd-session[6281]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:07:32.642200 systemd-logind[2506]: New session 15 of user core.
Dec 16 13:07:32.648796 systemd[1]: Started session-15.scope - Session 15 of User core.
Dec 16 13:07:32.651000 audit[6281]: USER_START pid=6281 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 13:07:32.653000 audit[6284]: CRED_ACQ pid=6284 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 13:07:33.016464 sshd[6284]: Connection closed by 10.200.16.10 port 60936
Dec 16 13:07:33.017210 sshd-session[6281]: pam_unix(sshd:session): session closed for user core
Dec 16 13:07:33.018000 audit[6281]: USER_END pid=6281 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 13:07:33.018000 audit[6281]: CRED_DISP pid=6281 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 13:07:33.021973 systemd[1]: sshd@12-10.200.4.43:22-10.200.16.10:60936.service: Deactivated successfully.
Dec 16 13:07:33.022000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.200.4.43:22-10.200.16.10:60936 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:07:33.024255 systemd[1]: session-15.scope: Deactivated successfully.
Dec 16 13:07:33.025391 systemd-logind[2506]: Session 15 logged out. Waiting for processes to exit.
Dec 16 13:07:33.027196 systemd-logind[2506]: Removed session 15.
Dec 16 13:07:33.123866 systemd[1]: Started sshd@13-10.200.4.43:22-10.200.16.10:60940.service - OpenSSH per-connection server daemon (10.200.16.10:60940).
Dec 16 13:07:33.123000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.200.4.43:22-10.200.16.10:60940 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:07:33.641949 sshd[6294]: Accepted publickey for core from 10.200.16.10 port 60940 ssh2: RSA SHA256:XB2dojyG3S3qIBG43pJfS4qRaZI6Gzjw37XrOBzISwI
Dec 16 13:07:33.641000 audit[6294]: USER_ACCT pid=6294 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 13:07:33.643000 audit[6294]: CRED_ACQ pid=6294 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 13:07:33.643000 audit[6294]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc1ba83fb0 a2=3 a3=0 items=0 ppid=1 pid=6294 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 13:07:33.643000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Dec 16 13:07:33.644378 sshd-session[6294]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:07:33.653666 systemd-logind[2506]: New session 16 of user core.
Dec 16 13:07:33.661140 systemd[1]: Started session-16.scope - Session 16 of User core.
Dec 16 13:07:33.664000 audit[6294]: USER_START pid=6294 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 13:07:33.667000 audit[6297]: CRED_ACQ pid=6297 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 13:07:33.714361 containerd[2540]: time="2025-12-16T13:07:33.713757761Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\""
Dec 16 13:07:34.019952 containerd[2540]: time="2025-12-16T13:07:34.019560755Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 16 13:07:34.022489 sshd[6297]: Connection closed by 10.200.16.10 port 60940
Dec 16 13:07:34.024672 sshd-session[6294]: pam_unix(sshd:session): session closed for user core
Dec 16 13:07:34.025912 containerd[2540]: time="2025-12-16T13:07:34.025754435Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found"
Dec 16 13:07:34.026949 containerd[2540]: time="2025-12-16T13:07:34.025859766Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0"
Dec 16 13:07:34.027272 kubelet[4018]: E1216 13:07:34.027092 4018 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Dec 16 13:07:34.027272 kubelet[4018]: E1216 13:07:34.027150 4018 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Dec 16 13:07:34.027000 audit[6294]: USER_END pid=6294 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 13:07:34.028000 audit[6294]: CRED_DISP pid=6294 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 13:07:34.030486 kubelet[4018]: E1216 13:07:34.030445 4018 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-66f67cd584-8rhwp_calico-system(7c146d92-4a81-4948-9e2f-1093c61dcd5c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError"
Dec 16 13:07:34.032615 containerd[2540]: time="2025-12-16T13:07:34.032589395Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\""
Dec 16 13:07:34.033772 systemd[1]: sshd@13-10.200.4.43:22-10.200.16.10:60940.service: Deactivated successfully.
Dec 16 13:07:34.034000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.200.4.43:22-10.200.16.10:60940 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:07:34.037017 systemd[1]: session-16.scope: Deactivated successfully.
Dec 16 13:07:34.039996 systemd-logind[2506]: Session 16 logged out. Waiting for processes to exit.
Dec 16 13:07:34.040948 systemd-logind[2506]: Removed session 16.
Dec 16 13:07:34.300998 containerd[2540]: time="2025-12-16T13:07:34.300802438Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 16 13:07:34.304847 containerd[2540]: time="2025-12-16T13:07:34.304786000Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found"
Dec 16 13:07:34.305315 containerd[2540]: time="2025-12-16T13:07:34.304797401Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0"
Dec 16 13:07:34.305381 kubelet[4018]: E1216 13:07:34.305026 4018 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Dec 16 13:07:34.305381 kubelet[4018]: E1216 13:07:34.305074 4018 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Dec 16 13:07:34.305381 kubelet[4018]: E1216 13:07:34.305176 4018 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-66f67cd584-8rhwp_calico-system(7c146d92-4a81-4948-9e2f-1093c61dcd5c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError"
Dec 16 13:07:34.305381 kubelet[4018]: E1216 13:07:34.305224 4018 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-66f67cd584-8rhwp" podUID="7c146d92-4a81-4948-9e2f-1093c61dcd5c"
Dec 16 13:07:34.711074 kubelet[4018]: E1216 13:07:34.710279 4018 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-8c4454f6d-fzx24" podUID="021bd40b-8387-4f81-8ec5-64b895deb3c2"
Dec 16 13:07:34.711299 containerd[2540]: time="2025-12-16T13:07:34.710950109Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\""
Dec 16 13:07:34.975207 containerd[2540]: time="2025-12-16T13:07:34.975024480Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 16 13:07:34.979446 containerd[2540]: time="2025-12-16T13:07:34.979390039Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found"
Dec 16 13:07:34.979625 containerd[2540]: time="2025-12-16T13:07:34.979423418Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0"
Dec 16 13:07:34.979854 kubelet[4018]: E1216 13:07:34.979809 4018 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Dec 16 13:07:34.979968 kubelet[4018]: E1216 13:07:34.979951 4018 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Dec 16 13:07:34.980120 kubelet[4018]: E1216 13:07:34.980099 4018 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-ctchn_calico-system(940a093b-83dc-454c-8522-5e1b1f40521f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError"
Dec 16 13:07:34.982598 containerd[2540]: time="2025-12-16T13:07:34.982563398Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\""
Dec 16 13:07:35.257567 containerd[2540]: time="2025-12-16T13:07:35.255196041Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 16 13:07:35.263291 containerd[2540]: time="2025-12-16T13:07:35.263227428Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found"
Dec 16 13:07:35.263555 containerd[2540]: time="2025-12-16T13:07:35.263506513Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0"
Dec 16 13:07:35.264414 kubelet[4018]: E1216 13:07:35.264373 4018 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Dec 16 13:07:35.264776 kubelet[4018]: E1216 13:07:35.264441 4018 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Dec 16 13:07:35.264776 kubelet[4018]: E1216 13:07:35.264555 4018 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-ctchn_calico-system(940a093b-83dc-454c-8522-5e1b1f40521f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError"
Dec 16 13:07:35.264932 kubelet[4018]: E1216 13:07:35.264905 4018 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-ctchn" podUID="940a093b-83dc-454c-8522-5e1b1f40521f"
Dec 16 13:07:38.712466 containerd[2540]: time="2025-12-16T13:07:38.712411159Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Dec 16 13:07:38.981021 containerd[2540]: time="2025-12-16T13:07:38.980803580Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 16 13:07:38.985039 containerd[2540]: time="2025-12-16T13:07:38.984888077Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Dec 16 13:07:38.985039 containerd[2540]: time="2025-12-16T13:07:38.984919457Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0"
Dec 16 13:07:38.985405 kubelet[4018]: E1216 13:07:38.985356 4018 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Dec 16 13:07:38.985814 kubelet[4018]: E1216 13:07:38.985428 4018 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Dec 16 13:07:38.985814 kubelet[4018]: E1216 13:07:38.985537 4018 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-7798f6444b-p9dhf_calico-apiserver(d35c67aa-255b-42a2-83b2-79e30256e265): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Dec 16 13:07:38.985814 kubelet[4018]: E1216 13:07:38.985579 4018 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7798f6444b-p9dhf" podUID="d35c67aa-255b-42a2-83b2-79e30256e265"
Dec 16 13:07:39.132684 systemd[1]: Started sshd@14-10.200.4.43:22-10.200.16.10:60950.service - OpenSSH per-connection server daemon (10.200.16.10:60950).
Dec 16 13:07:39.140776 kernel: kauditd_printk_skb: 23 callbacks suppressed
Dec 16 13:07:39.140816 kernel: audit: type=1130 audit(1765890459.131:841): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.200.4.43:22-10.200.16.10:60950 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:07:39.131000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.200.4.43:22-10.200.16.10:60950 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:07:39.649383 sshd[6352]: Accepted publickey for core from 10.200.16.10 port 60950 ssh2: RSA SHA256:XB2dojyG3S3qIBG43pJfS4qRaZI6Gzjw37XrOBzISwI
Dec 16 13:07:39.647000 audit[6352]: USER_ACCT pid=6352 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 13:07:39.655487 kernel: audit: type=1101 audit(1765890459.647:842): pid=6352 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 13:07:39.656713 sshd-session[6352]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:07:39.654000 audit[6352]: CRED_ACQ pid=6352 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 13:07:39.667416 kernel: audit: type=1103 audit(1765890459.654:843): pid=6352 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 13:07:39.673365 kernel: audit: type=1006 audit(1765890459.654:844): pid=6352 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=17 res=1
Dec 16 13:07:39.673685 systemd-logind[2506]: New session 17 of user core.
Dec 16 13:07:39.654000 audit[6352]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff489841b0 a2=3 a3=0 items=0 ppid=1 pid=6352 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 13:07:39.680368 kernel: audit: type=1300 audit(1765890459.654:844): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff489841b0 a2=3 a3=0 items=0 ppid=1 pid=6352 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 13:07:39.680583 systemd[1]: Started session-17.scope - Session 17 of User core.
Dec 16 13:07:39.654000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Dec 16 13:07:39.685369 kernel: audit: type=1327 audit(1765890459.654:844): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Dec 16 13:07:39.685000 audit[6352]: USER_START pid=6352 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 13:07:39.692500 kernel: audit: type=1105 audit(1765890459.685:845): pid=6352 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 13:07:39.693000 audit[6355]: CRED_ACQ pid=6355 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 13:07:39.698364 kernel: audit: type=1103 audit(1765890459.693:846): pid=6355 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 13:07:40.041394 sshd[6355]: Connection closed by 10.200.16.10 port 60950
Dec 16 13:07:40.039867 sshd-session[6352]: pam_unix(sshd:session): session closed for user core
Dec 16 13:07:40.041000 audit[6352]: USER_END pid=6352 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 13:07:40.049368 kernel: audit: type=1106 audit(1765890460.041:847): pid=6352 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 13:07:40.048319 systemd[1]: sshd@14-10.200.4.43:22-10.200.16.10:60950.service: Deactivated successfully.
Dec 16 13:07:40.041000 audit[6352]: CRED_DISP pid=6352 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 13:07:40.051600 systemd[1]: session-17.scope: Deactivated successfully.
Dec 16 13:07:40.056376 kernel: audit: type=1104 audit(1765890460.041:848): pid=6352 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 13:07:40.056489 systemd-logind[2506]: Session 17 logged out. Waiting for processes to exit.
Dec 16 13:07:40.047000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.200.4.43:22-10.200.16.10:60950 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:07:40.057974 systemd-logind[2506]: Removed session 17.
Dec 16 13:07:40.710085 containerd[2540]: time="2025-12-16T13:07:40.710018769Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\""
Dec 16 13:07:40.980768 containerd[2540]: time="2025-12-16T13:07:40.980582337Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 16 13:07:40.985573 containerd[2540]: time="2025-12-16T13:07:40.985439953Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found"
Dec 16 13:07:40.985573 containerd[2540]: time="2025-12-16T13:07:40.985462065Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0"
Dec 16 13:07:40.985939 kubelet[4018]: E1216 13:07:40.985848 4018 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Dec 16 13:07:40.986307 kubelet[4018]: E1216 13:07:40.985960 4018 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Dec 16 13:07:40.986334 kubelet[4018]: E1216 13:07:40.986161 4018 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-kffxh_calico-system(02113441-a531-45ff-9a40-51f9ff37eeb2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError"
Dec 16 13:07:40.986415 kubelet[4018]: E1216 13:07:40.986378 4018 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-kffxh" podUID="02113441-a531-45ff-9a40-51f9ff37eeb2"
Dec 16 13:07:43.712256 containerd[2540]: time="2025-12-16T13:07:43.711896731Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Dec 16 13:07:43.971387 containerd[2540]: time="2025-12-16T13:07:43.971234991Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 16 13:07:43.976398 containerd[2540]: time="2025-12-16T13:07:43.976318109Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Dec 16 13:07:43.976529 containerd[2540]: time="2025-12-16T13:07:43.976359021Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0"
Dec 16 13:07:43.976590 kubelet[4018]: E1216 13:07:43.976552 4018 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Dec 16 13:07:43.976941 kubelet[4018]: E1216 13:07:43.976603 4018 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Dec 16 13:07:43.976941 kubelet[4018]: E1216 13:07:43.976690 4018 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-7798f6444b-zjrsf_calico-apiserver(fa544c8c-af21-41f2-8ffb-1fe7c36b0bfb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Dec 16 13:07:43.976941 kubelet[4018]: E1216 13:07:43.976727 4018 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7798f6444b-zjrsf" podUID="fa544c8c-af21-41f2-8ffb-1fe7c36b0bfb"
Dec 16 13:07:45.150502 systemd[1]: Started sshd@15-10.200.4.43:22-10.200.16.10:46978.service - OpenSSH per-connection server daemon (10.200.16.10:46978).
Dec 16 13:07:45.158545 kernel: kauditd_printk_skb: 1 callbacks suppressed
Dec 16 13:07:45.158661 kernel: audit: type=1130 audit(1765890465.149:850): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.200.4.43:22-10.200.16.10:46978 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:07:45.149000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.200.4.43:22-10.200.16.10:46978 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:07:45.661000 audit[6374]: USER_ACCT pid=6374 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 13:07:45.663361 sshd[6374]: Accepted publickey for core from 10.200.16.10 port 46978 ssh2: RSA SHA256:XB2dojyG3S3qIBG43pJfS4qRaZI6Gzjw37XrOBzISwI
Dec 16 13:07:45.665405 sshd-session[6374]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:07:45.670679 kernel: audit: type=1101 audit(1765890465.661:851): pid=6374 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 13:07:45.670785 kernel: audit: type=1103 audit(1765890465.663:852): pid=6374 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 13:07:45.663000 audit[6374]: CRED_ACQ pid=6374 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 13:07:45.674444 systemd-logind[2506]: New session 18 of user core.
Dec 16 13:07:45.678657 kernel: audit: type=1006 audit(1765890465.663:853): pid=6374 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=18 res=1
Dec 16 13:07:45.663000 audit[6374]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff5b4d9e50 a2=3 a3=0 items=0 ppid=1 pid=6374 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 13:07:45.682925 kernel: audit: type=1300 audit(1765890465.663:853): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff5b4d9e50 a2=3 a3=0 items=0 ppid=1 pid=6374 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 13:07:45.663000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Dec 16 13:07:45.685333 kernel: audit: type=1327 audit(1765890465.663:853): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Dec 16 13:07:45.687591 systemd[1]: Started session-18.scope - Session 18 of User core.
Dec 16 13:07:45.691000 audit[6374]: USER_START pid=6374 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:45.693000 audit[6377]: CRED_ACQ pid=6377 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:45.703283 kernel: audit: type=1105 audit(1765890465.691:854): pid=6374 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:45.703327 kernel: audit: type=1103 audit(1765890465.693:855): pid=6377 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:46.000098 sshd[6377]: Connection closed by 10.200.16.10 port 46978 Dec 16 13:07:46.000860 sshd-session[6374]: pam_unix(sshd:session): session closed for user core Dec 16 13:07:46.002000 audit[6374]: USER_END pid=6374 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:46.007702 systemd[1]: sshd@15-10.200.4.43:22-10.200.16.10:46978.service: Deactivated successfully. Dec 16 13:07:46.011140 systemd[1]: session-18.scope: Deactivated successfully. Dec 16 13:07:46.012390 kernel: audit: type=1106 audit(1765890466.002:856): pid=6374 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:46.013074 systemd-logind[2506]: Session 18 logged out. Waiting for processes to exit. Dec 16 13:07:46.015188 systemd-logind[2506]: Removed session 18. Dec 16 13:07:46.002000 audit[6374]: CRED_DISP pid=6374 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:46.002000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.200.4.43:22-10.200.16.10:46978 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 13:07:46.021355 kernel: audit: type=1104 audit(1765890466.002:857): pid=6374 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:48.713945 kubelet[4018]: E1216 13:07:48.713884 4018 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-66f67cd584-8rhwp" podUID="7c146d92-4a81-4948-9e2f-1093c61dcd5c" Dec 16 13:07:48.714579 kubelet[4018]: E1216 13:07:48.713998 4018 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-ctchn" podUID="940a093b-83dc-454c-8522-5e1b1f40521f" Dec 16 13:07:48.715058 containerd[2540]: time="2025-12-16T13:07:48.715017862Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 16 13:07:48.993879 containerd[2540]: time="2025-12-16T13:07:48.993718110Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:07:48.997236 containerd[2540]: time="2025-12-16T13:07:48.997185709Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 16 13:07:48.997390 containerd[2540]: time="2025-12-16T13:07:48.997207843Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 16 13:07:48.997557 kubelet[4018]: E1216 13:07:48.997519 4018 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 13:07:48.997618 kubelet[4018]: E1216 13:07:48.997569 4018 
kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 13:07:48.997677 kubelet[4018]: E1216 13:07:48.997661 4018 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-8c4454f6d-fzx24_calico-system(021bd40b-8387-4f81-8ec5-64b895deb3c2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 16 13:07:48.997737 kubelet[4018]: E1216 13:07:48.997711 4018 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-8c4454f6d-fzx24" podUID="021bd40b-8387-4f81-8ec5-64b895deb3c2" Dec 16 13:07:51.110713 systemd[1]: Started sshd@16-10.200.4.43:22-10.200.16.10:38988.service - OpenSSH per-connection server daemon (10.200.16.10:38988). Dec 16 13:07:51.110000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.200.4.43:22-10.200.16.10:38988 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:07:51.112948 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 13:07:51.113017 kernel: audit: type=1130 audit(1765890471.110:859): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.200.4.43:22-10.200.16.10:38988 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 13:07:51.623000 audit[6389]: USER_ACCT pid=6389 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:51.631358 kernel: audit: type=1101 audit(1765890471.623:860): pid=6389 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:51.631458 sshd[6389]: Accepted publickey for core from 10.200.16.10 port 38988 ssh2: RSA SHA256:XB2dojyG3S3qIBG43pJfS4qRaZI6Gzjw37XrOBzISwI Dec 16 13:07:51.632279 sshd-session[6389]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:07:51.631000 audit[6389]: CRED_ACQ pid=6389 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:51.648356 kernel: audit: type=1103 audit(1765890471.631:861): pid=6389 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:51.652645 systemd-logind[2506]: New session 19 of user core. Dec 16 13:07:51.631000 audit[6389]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffde8cbb1e0 a2=3 a3=0 items=0 ppid=1 pid=6389 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:51.658505 kernel: audit: type=1006 audit(1765890471.631:862): pid=6389 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=19 res=1 Dec 16 13:07:51.658566 kernel: audit: type=1300 audit(1765890471.631:862): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffde8cbb1e0 a2=3 a3=0 items=0 ppid=1 pid=6389 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:51.631000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 13:07:51.662749 kernel: audit: type=1327 audit(1765890471.631:862): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 13:07:51.663574 systemd[1]: Started session-19.scope - Session 19 of User core. 
Dec 16 13:07:51.667000 audit[6389]: USER_START pid=6389 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:51.675356 kernel: audit: type=1105 audit(1765890471.667:863): pid=6389 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:51.676000 audit[6392]: CRED_ACQ pid=6392 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:51.683354 kernel: audit: type=1103 audit(1765890471.676:864): pid=6392 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:51.711639 kubelet[4018]: E1216 13:07:51.711588 4018 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-kffxh" podUID="02113441-a531-45ff-9a40-51f9ff37eeb2" Dec 16 13:07:51.971046 sshd[6392]: Connection closed by 10.200.16.10 port 38988 Dec 16 13:07:51.971631 sshd-session[6389]: pam_unix(sshd:session): session closed for user core Dec 16 13:07:51.972000 audit[6389]: USER_END pid=6389 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:51.976160 systemd[1]: sshd@16-10.200.4.43:22-10.200.16.10:38988.service: Deactivated successfully. Dec 16 13:07:51.978627 systemd[1]: session-19.scope: Deactivated successfully. 
Dec 16 13:07:51.972000 audit[6389]: CRED_DISP pid=6389 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:51.981421 kernel: audit: type=1106 audit(1765890471.972:865): pid=6389 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:51.981575 kernel: audit: type=1104 audit(1765890471.972:866): pid=6389 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:51.984084 systemd-logind[2506]: Session 19 logged out. Waiting for processes to exit. Dec 16 13:07:51.976000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.200.4.43:22-10.200.16.10:38988 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:07:51.985303 systemd-logind[2506]: Removed session 19. Dec 16 13:07:52.079057 systemd[1]: Started sshd@17-10.200.4.43:22-10.200.16.10:38992.service - OpenSSH per-connection server daemon (10.200.16.10:38992). Dec 16 13:07:52.078000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.200.4.43:22-10.200.16.10:38992 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:07:52.586507 sshd[6405]: Accepted publickey for core from 10.200.16.10 port 38992 ssh2: RSA SHA256:XB2dojyG3S3qIBG43pJfS4qRaZI6Gzjw37XrOBzISwI Dec 16 13:07:52.586000 audit[6405]: USER_ACCT pid=6405 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:52.591316 sshd-session[6405]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:07:52.590000 audit[6405]: CRED_ACQ pid=6405 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:52.590000 audit[6405]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff3ed81e00 a2=3 a3=0 items=0 ppid=1 pid=6405 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:52.590000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 13:07:52.599249 systemd-logind[2506]: New session 20 of user core. Dec 16 13:07:52.606122 systemd[1]: Started session-20.scope - Session 20 of User core. 
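[Annotation] The PROCTITLE fields in the audit records above are hex-encoded, NUL-separated argument vectors (the audit userspace tools render them with ausearch -i). A small sketch that decodes them directly; the sample value is copied from the record above, and the same decoding applies to the iptables-restore PROCTITLE records further down:

    # Sketch: decode a hex-encoded, NUL-separated audit PROCTITLE value
    # into a readable command line.
    def decode_proctitle(hex_value: str) -> str:
        return bytes.fromhex(hex_value).decode("utf-8", errors="replace").replace("\x00", " ")

    # Value copied from the audit record above.
    print(decode_proctitle("737368642D73657373696F6E3A20636F7265205B707269765D"))
    # -> sshd-session: core [priv]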
Dec 16 13:07:52.611000 audit[6405]: USER_START pid=6405 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:52.614000 audit[6408]: CRED_ACQ pid=6408 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:52.987366 sshd[6408]: Connection closed by 10.200.16.10 port 38992 Dec 16 13:07:52.988017 sshd-session[6405]: pam_unix(sshd:session): session closed for user core Dec 16 13:07:52.990000 audit[6405]: USER_END pid=6405 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:52.990000 audit[6405]: CRED_DISP pid=6405 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:52.993212 systemd-logind[2506]: Session 20 logged out. Waiting for processes to exit. Dec 16 13:07:52.995003 systemd[1]: sshd@17-10.200.4.43:22-10.200.16.10:38992.service: Deactivated successfully. Dec 16 13:07:52.995000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.200.4.43:22-10.200.16.10:38992 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:07:53.000735 systemd[1]: session-20.scope: Deactivated successfully. Dec 16 13:07:53.005399 systemd-logind[2506]: Removed session 20. Dec 16 13:07:53.094000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.200.4.43:22-10.200.16.10:39004 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:07:53.094623 systemd[1]: Started sshd@18-10.200.4.43:22-10.200.16.10:39004.service - OpenSSH per-connection server daemon (10.200.16.10:39004). 
Dec 16 13:07:53.610000 audit[6418]: USER_ACCT pid=6418 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:53.611067 sshd[6418]: Accepted publickey for core from 10.200.16.10 port 39004 ssh2: RSA SHA256:XB2dojyG3S3qIBG43pJfS4qRaZI6Gzjw37XrOBzISwI Dec 16 13:07:53.612000 audit[6418]: CRED_ACQ pid=6418 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:53.612000 audit[6418]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcd49ceda0 a2=3 a3=0 items=0 ppid=1 pid=6418 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:53.612000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 13:07:53.612982 sshd-session[6418]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:07:53.619290 systemd-logind[2506]: New session 21 of user core. Dec 16 13:07:53.626585 systemd[1]: Started session-21.scope - Session 21 of User core. Dec 16 13:07:53.629000 audit[6418]: USER_START pid=6418 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:53.630000 audit[6421]: CRED_ACQ pid=6421 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:53.711302 kubelet[4018]: E1216 13:07:53.711263 4018 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7798f6444b-p9dhf" podUID="d35c67aa-255b-42a2-83b2-79e30256e265" Dec 16 13:07:54.349000 audit[6431]: NETFILTER_CFG table=filter:139 family=2 entries=26 op=nft_register_rule pid=6431 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:07:54.349000 audit[6431]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffcdaf98180 a2=0 a3=7ffcdaf9816c items=0 ppid=4127 pid=6431 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:54.349000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:07:54.354000 audit[6431]: NETFILTER_CFG table=nat:140 family=2 entries=20 op=nft_register_rule pid=6431 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 
13:07:54.354000 audit[6431]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffcdaf98180 a2=0 a3=0 items=0 ppid=4127 pid=6431 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:54.354000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:07:54.409364 sshd[6421]: Connection closed by 10.200.16.10 port 39004 Dec 16 13:07:54.410156 sshd-session[6418]: pam_unix(sshd:session): session closed for user core Dec 16 13:07:54.411000 audit[6418]: USER_END pid=6418 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:54.411000 audit[6418]: CRED_DISP pid=6418 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:54.414736 systemd[1]: sshd@18-10.200.4.43:22-10.200.16.10:39004.service: Deactivated successfully. Dec 16 13:07:54.415000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.200.4.43:22-10.200.16.10:39004 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:07:54.418463 systemd[1]: session-21.scope: Deactivated successfully. Dec 16 13:07:54.420351 systemd-logind[2506]: Session 21 logged out. Waiting for processes to exit. Dec 16 13:07:54.423272 systemd-logind[2506]: Removed session 21. Dec 16 13:07:54.518647 systemd[1]: Started sshd@19-10.200.4.43:22-10.200.16.10:39016.service - OpenSSH per-connection server daemon (10.200.16.10:39016). Dec 16 13:07:54.518000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.200.4.43:22-10.200.16.10:39016 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 13:07:55.038000 audit[6436]: USER_ACCT pid=6436 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:55.040263 sshd[6436]: Accepted publickey for core from 10.200.16.10 port 39016 ssh2: RSA SHA256:XB2dojyG3S3qIBG43pJfS4qRaZI6Gzjw37XrOBzISwI Dec 16 13:07:55.041924 sshd-session[6436]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:07:55.041000 audit[6436]: CRED_ACQ pid=6436 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:55.041000 audit[6436]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd880b7db0 a2=3 a3=0 items=0 ppid=1 pid=6436 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:55.041000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 13:07:55.049010 systemd-logind[2506]: New session 22 of user core. Dec 16 13:07:55.056688 systemd[1]: Started session-22.scope - Session 22 of User core. Dec 16 13:07:55.060000 audit[6436]: USER_START pid=6436 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:55.062000 audit[6439]: CRED_ACQ pid=6439 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:55.382000 audit[6446]: NETFILTER_CFG table=filter:141 family=2 entries=38 op=nft_register_rule pid=6446 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:07:55.382000 audit[6446]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7fff145d9eb0 a2=0 a3=7fff145d9e9c items=0 ppid=4127 pid=6446 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:55.382000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:07:55.392000 audit[6446]: NETFILTER_CFG table=nat:142 family=2 entries=20 op=nft_register_rule pid=6446 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:07:55.392000 audit[6446]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7fff145d9eb0 a2=0 a3=0 items=0 ppid=4127 pid=6446 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:55.392000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:07:55.499286 sshd[6439]: Connection closed by 10.200.16.10 port 39016 Dec 16 
13:07:55.499958 sshd-session[6436]: pam_unix(sshd:session): session closed for user core Dec 16 13:07:55.501000 audit[6436]: USER_END pid=6436 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:55.501000 audit[6436]: CRED_DISP pid=6436 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:55.504447 systemd[1]: sshd@19-10.200.4.43:22-10.200.16.10:39016.service: Deactivated successfully. Dec 16 13:07:55.504000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.200.4.43:22-10.200.16.10:39016 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:07:55.507266 systemd[1]: session-22.scope: Deactivated successfully. Dec 16 13:07:55.508105 systemd-logind[2506]: Session 22 logged out. Waiting for processes to exit. Dec 16 13:07:55.509491 systemd-logind[2506]: Removed session 22. Dec 16 13:07:55.612000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.200.4.43:22-10.200.16.10:39018 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:07:55.612501 systemd[1]: Started sshd@20-10.200.4.43:22-10.200.16.10:39018.service - OpenSSH per-connection server daemon (10.200.16.10:39018). Dec 16 13:07:56.125000 audit[6451]: USER_ACCT pid=6451 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:56.132385 kernel: kauditd_printk_skb: 47 callbacks suppressed Dec 16 13:07:56.132519 kernel: audit: type=1101 audit(1765890476.125:900): pid=6451 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:56.130713 sshd-session[6451]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:07:56.132881 sshd[6451]: Accepted publickey for core from 10.200.16.10 port 39018 ssh2: RSA SHA256:XB2dojyG3S3qIBG43pJfS4qRaZI6Gzjw37XrOBzISwI Dec 16 13:07:56.129000 audit[6451]: CRED_ACQ pid=6451 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:56.142489 kernel: audit: type=1103 audit(1765890476.129:901): pid=6451 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:56.147285 kernel: audit: type=1006 audit(1765890476.129:902): pid=6451 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 
tty=(none) old-ses=4294967295 ses=23 res=1 Dec 16 13:07:56.142560 systemd-logind[2506]: New session 23 of user core. Dec 16 13:07:56.129000 audit[6451]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc1ca333d0 a2=3 a3=0 items=0 ppid=1 pid=6451 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:56.153409 kernel: audit: type=1300 audit(1765890476.129:902): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc1ca333d0 a2=3 a3=0 items=0 ppid=1 pid=6451 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:56.157489 kernel: audit: type=1327 audit(1765890476.129:902): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 13:07:56.129000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 13:07:56.154552 systemd[1]: Started session-23.scope - Session 23 of User core. Dec 16 13:07:56.160000 audit[6451]: USER_START pid=6451 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:56.165710 kernel: audit: type=1105 audit(1765890476.160:903): pid=6451 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:56.165000 audit[6454]: CRED_ACQ pid=6454 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:56.172359 kernel: audit: type=1103 audit(1765890476.165:904): pid=6454 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:56.488364 sshd[6454]: Connection closed by 10.200.16.10 port 39018 Dec 16 13:07:56.488303 sshd-session[6451]: pam_unix(sshd:session): session closed for user core Dec 16 13:07:56.492000 audit[6451]: USER_END pid=6451 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:56.498415 kernel: audit: type=1106 audit(1765890476.492:905): pid=6451 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:56.498783 systemd-logind[2506]: Session 23 logged out. Waiting for processes to exit. 
Dec 16 13:07:56.499633 systemd[1]: sshd@20-10.200.4.43:22-10.200.16.10:39018.service: Deactivated successfully. Dec 16 13:07:56.492000 audit[6451]: CRED_DISP pid=6451 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:56.506271 systemd[1]: session-23.scope: Deactivated successfully. Dec 16 13:07:56.508360 kernel: audit: type=1104 audit(1765890476.492:906): pid=6451 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:56.500000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.200.4.43:22-10.200.16.10:39018 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:07:56.510285 systemd-logind[2506]: Removed session 23. Dec 16 13:07:56.515406 kernel: audit: type=1131 audit(1765890476.500:907): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.200.4.43:22-10.200.16.10:39018 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:07:58.710779 kubelet[4018]: E1216 13:07:58.710642 4018 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7798f6444b-zjrsf" podUID="fa544c8c-af21-41f2-8ffb-1fe7c36b0bfb" Dec 16 13:07:59.714138 kubelet[4018]: E1216 13:07:59.713996 4018 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-66f67cd584-8rhwp" podUID="7c146d92-4a81-4948-9e2f-1093c61dcd5c" Dec 16 13:08:00.712233 kubelet[4018]: E1216 13:08:00.712132 4018 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-8c4454f6d-fzx24" podUID="021bd40b-8387-4f81-8ec5-64b895deb3c2" 
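[Annotation] By this point the kubelet has moved from ErrImagePull to ImagePullBackOff for each failing container, which is why the same "Back-off pulling image" messages recur every few minutes rather than on every pod sync. A small sketch of the retry spacing under commonly cited kubelet defaults (10 s initial delay, doubling, capped at 5 minutes); those values are an assumption for illustration, not something recorded in this log:

    # Sketch: image-pull back-off schedule under assumed kubelet defaults
    # (initial 10s, factor 2, cap 300s). Real values are configuration- and
    # version-dependent; this only illustrates the recurring pattern.
    def backoff_schedule(initial=10.0, factor=2.0, cap=300.0, steps=8):
        delay, total = initial, 0.0
        for _ in range(steps):
            total += delay
            yield delay, total
            delay = min(delay * factor, cap)

    for delay, total in backoff_schedule():
        print(f"retry after {delay:5.0f}s  (~{total:6.0f}s since first failure)")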
Dec 16 13:08:01.597466 systemd[1]: Started sshd@21-10.200.4.43:22-10.200.16.10:46106.service - OpenSSH per-connection server daemon (10.200.16.10:46106). Dec 16 13:08:01.596000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.200.4.43:22-10.200.16.10:46106 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:08:01.603447 kernel: audit: type=1130 audit(1765890481.596:908): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.200.4.43:22-10.200.16.10:46106 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:08:02.119000 audit[6468]: USER_ACCT pid=6468 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:08:02.127773 kernel: audit: type=1101 audit(1765890482.119:909): pid=6468 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:08:02.127941 sshd[6468]: Accepted publickey for core from 10.200.16.10 port 46106 ssh2: RSA SHA256:XB2dojyG3S3qIBG43pJfS4qRaZI6Gzjw37XrOBzISwI Dec 16 13:08:02.129591 sshd-session[6468]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:08:02.127000 audit[6468]: CRED_ACQ pid=6468 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:08:02.138827 kernel: audit: type=1103 audit(1765890482.127:910): pid=6468 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:08:02.144395 kernel: audit: type=1006 audit(1765890482.127:911): pid=6468 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1 Dec 16 13:08:02.127000 audit[6468]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcc7d17170 a2=3 a3=0 items=0 ppid=1 pid=6468 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:08:02.146522 systemd-logind[2506]: New session 24 of user core. Dec 16 13:08:02.151386 kernel: audit: type=1300 audit(1765890482.127:911): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcc7d17170 a2=3 a3=0 items=0 ppid=1 pid=6468 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:08:02.127000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 13:08:02.154989 systemd[1]: Started session-24.scope - Session 24 of User core. 
Dec 16 13:08:02.156397 kernel: audit: type=1327 audit(1765890482.127:911): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 13:08:02.166367 kernel: audit: type=1105 audit(1765890482.159:912): pid=6468 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:08:02.159000 audit[6468]: USER_START pid=6468 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:08:02.165000 audit[6471]: CRED_ACQ pid=6471 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:08:02.173368 kernel: audit: type=1103 audit(1765890482.165:913): pid=6471 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:08:02.485863 sshd[6471]: Connection closed by 10.200.16.10 port 46106 Dec 16 13:08:02.488502 sshd-session[6468]: pam_unix(sshd:session): session closed for user core Dec 16 13:08:02.490000 audit[6468]: USER_END pid=6468 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:08:02.498363 kernel: audit: type=1106 audit(1765890482.490:914): pid=6468 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:08:02.497320 systemd[1]: sshd@21-10.200.4.43:22-10.200.16.10:46106.service: Deactivated successfully. Dec 16 13:08:02.490000 audit[6468]: CRED_DISP pid=6468 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:08:02.497000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.200.4.43:22-10.200.16.10:46106 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:08:02.504364 kernel: audit: type=1104 audit(1765890482.490:915): pid=6468 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:08:02.505068 systemd[1]: session-24.scope: Deactivated successfully. Dec 16 13:08:02.506419 systemd-logind[2506]: Session 24 logged out. Waiting for processes to exit. 
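[Annotation] Each accepted connection above shows up as a transient per-connection unit of the form sshd@N-LOCAL:PORT-PEER:PORT.service plus a matching session-N.scope, which is why every short-lived login produces a SERVICE_START/SERVICE_STOP pair in the audit trail. A small sketch that splits such a unit name into its parts; the sample name is taken from the log, and the parsing is plain string handling, not a systemd API:

    # Sketch: split a per-connection sshd unit name of the form
    #   sshd@<instance>-<local-ip>:<local-port>-<peer-ip>:<peer-port>.service
    # into its components, based purely on the names seen in this log.
    import re

    UNIT_RE = re.compile(
        r"^sshd@(?P<instance>\d+)-"
        r"(?P<local_ip>[\d.]+):(?P<local_port>\d+)-"
        r"(?P<peer_ip>[\d.]+):(?P<peer_port>\d+)\.service$"
    )

    unit = "sshd@21-10.200.4.43:22-10.200.16.10:46106.service"   # from the log
    m = UNIT_RE.match(unit)
    print(m.groupdict() if m else "no match")
    # -> {'instance': '21', 'local_ip': '10.200.4.43', 'local_port': '22',
    #     'peer_ip': '10.200.16.10', 'peer_port': '46106'}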
Dec 16 13:08:02.508995 systemd-logind[2506]: Removed session 24. Dec 16 13:08:03.716361 kubelet[4018]: E1216 13:08:03.716295 4018 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-ctchn" podUID="940a093b-83dc-454c-8522-5e1b1f40521f" Dec 16 13:08:05.713749 kubelet[4018]: E1216 13:08:05.713689 4018 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7798f6444b-p9dhf" podUID="d35c67aa-255b-42a2-83b2-79e30256e265" Dec 16 13:08:06.006000 audit[6483]: NETFILTER_CFG table=filter:143 family=2 entries=26 op=nft_register_rule pid=6483 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:08:06.006000 audit[6483]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffe2a917d70 a2=0 a3=7ffe2a917d5c items=0 ppid=4127 pid=6483 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:08:06.006000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:08:06.012000 audit[6483]: NETFILTER_CFG table=nat:144 family=2 entries=104 op=nft_register_chain pid=6483 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:08:06.012000 audit[6483]: SYSCALL arch=c000003e syscall=46 success=yes exit=48684 a0=3 a1=7ffe2a917d70 a2=0 a3=7ffe2a917d5c items=0 ppid=4127 pid=6483 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:08:06.012000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:08:06.711017 kubelet[4018]: E1216 13:08:06.710951 4018 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-kffxh" podUID="02113441-a531-45ff-9a40-51f9ff37eeb2" Dec 16 13:08:07.593000 audit[1]: 
SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.200.4.43:22-10.200.16.10:46116 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:08:07.595002 systemd[1]: Started sshd@22-10.200.4.43:22-10.200.16.10:46116.service - OpenSSH per-connection server daemon (10.200.16.10:46116). Dec 16 13:08:07.596313 kernel: kauditd_printk_skb: 7 callbacks suppressed Dec 16 13:08:07.597422 kernel: audit: type=1130 audit(1765890487.593:919): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.200.4.43:22-10.200.16.10:46116 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:08:08.113000 audit[6485]: USER_ACCT pid=6485 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:08:08.119298 sshd-session[6485]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:08:08.120151 sshd[6485]: Accepted publickey for core from 10.200.16.10 port 46116 ssh2: RSA SHA256:XB2dojyG3S3qIBG43pJfS4qRaZI6Gzjw37XrOBzISwI Dec 16 13:08:08.120358 kernel: audit: type=1101 audit(1765890488.113:920): pid=6485 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:08:08.117000 audit[6485]: CRED_ACQ pid=6485 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:08:08.126573 kernel: audit: type=1103 audit(1765890488.117:921): pid=6485 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:08:08.126633 kernel: audit: type=1006 audit(1765890488.117:922): pid=6485 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=25 res=1 Dec 16 13:08:08.129568 kernel: audit: type=1300 audit(1765890488.117:922): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffebeb8cb10 a2=3 a3=0 items=0 ppid=1 pid=6485 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:08:08.117000 audit[6485]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffebeb8cb10 a2=3 a3=0 items=0 ppid=1 pid=6485 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:08:08.135293 kernel: audit: type=1327 audit(1765890488.117:922): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 13:08:08.117000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 13:08:08.136723 systemd-logind[2506]: New session 25 of user core. 
Dec 16 13:08:08.142554 systemd[1]: Started session-25.scope - Session 25 of User core. Dec 16 13:08:08.152715 kernel: audit: type=1105 audit(1765890488.144:923): pid=6485 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:08:08.144000 audit[6485]: USER_START pid=6485 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:08:08.153000 audit[6488]: CRED_ACQ pid=6488 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:08:08.160452 kernel: audit: type=1103 audit(1765890488.153:924): pid=6488 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:08:08.466901 sshd[6488]: Connection closed by 10.200.16.10 port 46116 Dec 16 13:08:08.468534 sshd-session[6485]: pam_unix(sshd:session): session closed for user core Dec 16 13:08:08.469000 audit[6485]: USER_END pid=6485 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:08:08.476616 systemd-logind[2506]: Session 25 logged out. Waiting for processes to exit. Dec 16 13:08:08.477557 systemd[1]: sshd@22-10.200.4.43:22-10.200.16.10:46116.service: Deactivated successfully. Dec 16 13:08:08.478367 kernel: audit: type=1106 audit(1765890488.469:925): pid=6485 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:08:08.469000 audit[6485]: CRED_DISP pid=6485 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:08:08.485088 systemd[1]: session-25.scope: Deactivated successfully. Dec 16 13:08:08.487388 kernel: audit: type=1104 audit(1765890488.469:926): pid=6485 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:08:08.476000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.200.4.43:22-10.200.16.10:46116 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:08:08.490146 systemd-logind[2506]: Removed session 25. 
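[Annotation] The audit timestamps show how short these sessions are: session 25 above runs from audit(1765890488.144) to audit(1765890488.469), roughly a third of a second of interactive time. A minimal sketch that pairs USER_START and USER_END records by their ses= field and reports the duration; the two sample lines are condensed into raw auditd form from the records above, and the parsing is illustrative only:

    # Sketch: pair USER_START / USER_END audit records by their ses= field
    # and report session duration from the epoch timestamp inside "audit(...)".
    import re

    RECORD_RE = re.compile(r"type=(\w+) audit\((\d+\.\d+):\d+\).*?\bses=(\d+)")

    # Condensed sample records matching the session-25 entries above.
    lines = [
        'type=USER_START audit(1765890488.144:923): pid=6485 uid=0 auid=500 ses=25 ...',
        'type=USER_END audit(1765890488.469:925): pid=6485 uid=0 auid=500 ses=25 ...',
    ]

    starts = {}
    for line in lines:
        m = RECORD_RE.search(line)
        if not m:
            continue
        rec_type, ts, ses = m.group(1), float(m.group(2)), m.group(3)
        if rec_type == "USER_START":
            starts[ses] = ts
        elif rec_type == "USER_END" and ses in starts:
            print(f"ses={ses} lasted {ts - starts[ses]:.3f}s")
    # -> ses=25 lasted 0.325s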
Dec 16 13:08:13.581446 kernel: kauditd_printk_skb: 1 callbacks suppressed
Dec 16 13:08:13.581649 kernel: audit: type=1130 audit(1765890493.579:928): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.200.4.43:22-10.200.16.10:52472 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:08:13.579000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.200.4.43:22-10.200.16.10:52472 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:08:13.579731 systemd[1]: Started sshd@23-10.200.4.43:22-10.200.16.10:52472.service - OpenSSH per-connection server daemon (10.200.16.10:52472).
Dec 16 13:08:13.712352 kubelet[4018]: E1216 13:08:13.712303 4018 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7798f6444b-zjrsf" podUID="fa544c8c-af21-41f2-8ffb-1fe7c36b0bfb"
Dec 16 13:08:13.715355 kubelet[4018]: E1216 13:08:13.713284 4018 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-8c4454f6d-fzx24" podUID="021bd40b-8387-4f81-8ec5-64b895deb3c2"
Dec 16 13:08:13.715748 kubelet[4018]: E1216 13:08:13.715684 4018 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-66f67cd584-8rhwp" podUID="7c146d92-4a81-4948-9e2f-1093c61dcd5c"
Dec 16 13:08:14.106000 audit[6525]: USER_ACCT pid=6525 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 13:08:14.108949 sshd-session[6525]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:08:14.109707 sshd[6525]: Accepted publickey for core from 10.200.16.10 port 52472 ssh2: RSA SHA256:XB2dojyG3S3qIBG43pJfS4qRaZI6Gzjw37XrOBzISwI
Dec 16 13:08:14.108000 audit[6525]: CRED_ACQ pid=6525 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 13:08:14.117201 systemd-logind[2506]: New session 26 of user core.
Dec 16 13:08:14.120158 kernel: audit: type=1101 audit(1765890494.106:929): pid=6525 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 13:08:14.120230 kernel: audit: type=1103 audit(1765890494.108:930): pid=6525 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 13:08:14.123251 kernel: audit: type=1006 audit(1765890494.108:931): pid=6525 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=26 res=1
Dec 16 13:08:14.108000 audit[6525]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffddf6a4bc0 a2=3 a3=0 items=0 ppid=1 pid=6525 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 13:08:14.127931 kernel: audit: type=1300 audit(1765890494.108:931): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffddf6a4bc0 a2=3 a3=0 items=0 ppid=1 pid=6525 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 13:08:14.128462 systemd[1]: Started session-26.scope - Session 26 of User core.
Dec 16 13:08:14.130696 kernel: audit: type=1327 audit(1765890494.108:931): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Dec 16 13:08:14.108000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Dec 16 13:08:14.132000 audit[6525]: USER_START pid=6525 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 13:08:14.134000 audit[6528]: CRED_ACQ pid=6528 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 13:08:14.144127 kernel: audit: type=1105 audit(1765890494.132:932): pid=6525 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 13:08:14.144188 kernel: audit: type=1103 audit(1765890494.134:933): pid=6528 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 13:08:14.445346 sshd[6528]: Connection closed by 10.200.16.10 port 52472
Dec 16 13:08:14.447428 sshd-session[6525]: pam_unix(sshd:session): session closed for user core
Dec 16 13:08:14.448000 audit[6525]: USER_END pid=6525 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 13:08:14.455038 systemd[1]: sshd@23-10.200.4.43:22-10.200.16.10:52472.service: Deactivated successfully.
Dec 16 13:08:14.455361 kernel: audit: type=1106 audit(1765890494.448:934): pid=6525 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 13:08:14.457667 systemd[1]: session-26.scope: Deactivated successfully.
Dec 16 13:08:14.449000 audit[6525]: CRED_DISP pid=6525 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 13:08:14.463366 kernel: audit: type=1104 audit(1765890494.449:935): pid=6525 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 13:08:14.467627 systemd-logind[2506]: Session 26 logged out. Waiting for processes to exit.
Dec 16 13:08:14.455000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.200.4.43:22-10.200.16.10:52472 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:08:14.469506 systemd-logind[2506]: Removed session 26.
Dec 16 13:08:15.712089 kubelet[4018]: E1216 13:08:15.711923 4018 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-ctchn" podUID="940a093b-83dc-454c-8522-5e1b1f40521f"
Dec 16 13:08:19.564989 kernel: kauditd_printk_skb: 1 callbacks suppressed
Dec 16 13:08:19.565142 kernel: audit: type=1130 audit(1765890499.554:937): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.200.4.43:22-10.200.16.10:52474 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:08:19.554000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.200.4.43:22-10.200.16.10:52474 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:08:19.554692 systemd[1]: Started sshd@24-10.200.4.43:22-10.200.16.10:52474.service - OpenSSH per-connection server daemon (10.200.16.10:52474).
Dec 16 13:08:19.714575 kubelet[4018]: E1216 13:08:19.714530 4018 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7798f6444b-p9dhf" podUID="d35c67aa-255b-42a2-83b2-79e30256e265"
Dec 16 13:08:20.078000 audit[6540]: USER_ACCT pid=6540 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 13:08:20.081103 sshd[6540]: Accepted publickey for core from 10.200.16.10 port 52474 ssh2: RSA SHA256:XB2dojyG3S3qIBG43pJfS4qRaZI6Gzjw37XrOBzISwI
Dec 16 13:08:20.085365 kernel: audit: type=1101 audit(1765890500.078:938): pid=6540 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 13:08:20.086576 sshd-session[6540]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:08:20.084000 audit[6540]: CRED_ACQ pid=6540 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 13:08:20.095357 kernel: audit: type=1103 audit(1765890500.084:939): pid=6540 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 13:08:20.101363 kernel: audit: type=1006 audit(1765890500.084:940): pid=6540 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=27 res=1
Dec 16 13:08:20.084000 audit[6540]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffea5cae3d0 a2=3 a3=0 items=0 ppid=1 pid=6540 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 13:08:20.108370 kernel: audit: type=1300 audit(1765890500.084:940): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffea5cae3d0 a2=3 a3=0 items=0 ppid=1 pid=6540 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 13:08:20.084000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Dec 16 13:08:20.109487 systemd-logind[2506]: New session 27 of user core.
Dec 16 13:08:20.113353 kernel: audit: type=1327 audit(1765890500.084:940): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Dec 16 13:08:20.115550 systemd[1]: Started session-27.scope - Session 27 of User core.
Dec 16 13:08:20.117000 audit[6540]: USER_START pid=6540 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 13:08:20.124374 kernel: audit: type=1105 audit(1765890500.117:941): pid=6540 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 13:08:20.123000 audit[6543]: CRED_ACQ pid=6543 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 13:08:20.131381 kernel: audit: type=1103 audit(1765890500.123:942): pid=6543 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 13:08:20.445998 sshd[6543]: Connection closed by 10.200.16.10 port 52474
Dec 16 13:08:20.446411 sshd-session[6540]: pam_unix(sshd:session): session closed for user core
Dec 16 13:08:20.447000 audit[6540]: USER_END pid=6540 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 13:08:20.451000 audit[6540]: CRED_DISP pid=6540 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 13:08:20.457978 kernel: audit: type=1106 audit(1765890500.447:943): pid=6540 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 13:08:20.458029 kernel: audit: type=1104 audit(1765890500.451:944): pid=6540 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 13:08:20.459069 systemd-logind[2506]: Session 27 logged out. Waiting for processes to exit.
Dec 16 13:08:20.461632 systemd[1]: sshd@24-10.200.4.43:22-10.200.16.10:52474.service: Deactivated successfully.
Dec 16 13:08:20.461000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.200.4.43:22-10.200.16.10:52474 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:08:20.465140 systemd[1]: session-27.scope: Deactivated successfully.
Dec 16 13:08:20.469310 systemd-logind[2506]: Removed session 27.
Dec 16 13:08:21.715652 kubelet[4018]: E1216 13:08:21.715602 4018 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-kffxh" podUID="02113441-a531-45ff-9a40-51f9ff37eeb2"
Dec 16 13:08:24.712648 kubelet[4018]: E1216 13:08:24.712574 4018 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7798f6444b-zjrsf" podUID="fa544c8c-af21-41f2-8ffb-1fe7c36b0bfb"
Dec 16 13:08:25.556651 systemd[1]: Started sshd@25-10.200.4.43:22-10.200.16.10:56800.service - OpenSSH per-connection server daemon (10.200.16.10:56800).
Dec 16 13:08:25.562558 kernel: kauditd_printk_skb: 1 callbacks suppressed
Dec 16 13:08:25.562659 kernel: audit: type=1130 audit(1765890505.556:946): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.200.4.43:22-10.200.16.10:56800 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:08:25.556000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.200.4.43:22-10.200.16.10:56800 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:08:25.710572 kubelet[4018]: E1216 13:08:25.710532 4018 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-8c4454f6d-fzx24" podUID="021bd40b-8387-4f81-8ec5-64b895deb3c2"
Dec 16 13:08:26.085374 kernel: audit: type=1101 audit(1765890506.075:947): pid=6557 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 13:08:26.075000 audit[6557]: USER_ACCT pid=6557 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 13:08:26.078595 sshd-session[6557]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:08:26.085996 sshd[6557]: Accepted publickey for core from 10.200.16.10 port 56800 ssh2: RSA SHA256:XB2dojyG3S3qIBG43pJfS4qRaZI6Gzjw37XrOBzISwI
Dec 16 13:08:26.092725 systemd-logind[2506]: New session 28 of user core.
Dec 16 13:08:26.076000 audit[6557]: CRED_ACQ pid=6557 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 13:08:26.103358 kernel: audit: type=1103 audit(1765890506.076:948): pid=6557 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 13:08:26.106592 systemd[1]: Started session-28.scope - Session 28 of User core.
Dec 16 13:08:26.110750 kernel: audit: type=1006 audit(1765890506.076:949): pid=6557 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=28 res=1
Dec 16 13:08:26.076000 audit[6557]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe9a9e79c0 a2=3 a3=0 items=0 ppid=1 pid=6557 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=28 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 13:08:26.120588 kernel: audit: type=1300 audit(1765890506.076:949): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe9a9e79c0 a2=3 a3=0 items=0 ppid=1 pid=6557 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=28 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 13:08:26.076000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Dec 16 13:08:26.126356 kernel: audit: type=1327 audit(1765890506.076:949): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Dec 16 13:08:26.110000 audit[6557]: USER_START pid=6557 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 13:08:26.134358 kernel: audit: type=1105 audit(1765890506.110:950): pid=6557 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 13:08:26.119000 audit[6560]: CRED_ACQ pid=6560 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 13:08:26.143463 kernel: audit: type=1103 audit(1765890506.119:951): pid=6560 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 13:08:26.445366 sshd[6560]: Connection closed by 10.200.16.10 port 56800
Dec 16 13:08:26.446017 sshd-session[6557]: pam_unix(sshd:session): session closed for user core
Dec 16 13:08:26.446000 audit[6557]: USER_END pid=6557 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 13:08:26.451872 systemd[1]: sshd@25-10.200.4.43:22-10.200.16.10:56800.service: Deactivated successfully.
Dec 16 13:08:26.455219 systemd[1]: session-28.scope: Deactivated successfully.
Dec 16 13:08:26.456386 kernel: audit: type=1106 audit(1765890506.446:952): pid=6557 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 13:08:26.457202 systemd-logind[2506]: Session 28 logged out. Waiting for processes to exit.
Dec 16 13:08:26.460267 systemd-logind[2506]: Removed session 28.
Dec 16 13:08:26.446000 audit[6557]: CRED_DISP pid=6557 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 13:08:26.446000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.200.4.43:22-10.200.16.10:56800 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:08:26.467412 kernel: audit: type=1104 audit(1765890506.446:953): pid=6557 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 13:08:27.714018 kubelet[4018]: E1216 13:08:27.713961 4018 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-ctchn" podUID="940a093b-83dc-454c-8522-5e1b1f40521f"
Dec 16 13:08:28.710785 kubelet[4018]: E1216 13:08:28.710720 4018 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-66f67cd584-8rhwp" podUID="7c146d92-4a81-4948-9e2f-1093c61dcd5c"
Dec 16 13:08:31.551717 systemd[1]: Started sshd@26-10.200.4.43:22-10.200.16.10:53416.service - OpenSSH per-connection server daemon (10.200.16.10:53416).
Dec 16 13:08:31.560379 kernel: kauditd_printk_skb: 1 callbacks suppressed
Dec 16 13:08:31.560481 kernel: audit: type=1130 audit(1765890511.550:955): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.200.4.43:22-10.200.16.10:53416 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:08:31.550000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.200.4.43:22-10.200.16.10:53416 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:08:32.061000 audit[6574]: USER_ACCT pid=6574 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 13:08:32.063021 sshd[6574]: Accepted publickey for core from 10.200.16.10 port 53416 ssh2: RSA SHA256:XB2dojyG3S3qIBG43pJfS4qRaZI6Gzjw37XrOBzISwI
Dec 16 13:08:32.065245 sshd-session[6574]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:08:32.063000 audit[6574]: CRED_ACQ pid=6574 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 13:08:32.071771 systemd-logind[2506]: New session 29 of user core.
Dec 16 13:08:32.074933 kernel: audit: type=1101 audit(1765890512.061:956): pid=6574 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 13:08:32.075006 kernel: audit: type=1103 audit(1765890512.063:957): pid=6574 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 13:08:32.078731 kernel: audit: type=1006 audit(1765890512.063:958): pid=6574 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=29 res=1
Dec 16 13:08:32.063000 audit[6574]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffeb262c980 a2=3 a3=0 items=0 ppid=1 pid=6574 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=29 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 13:08:32.083045 kernel: audit: type=1300 audit(1765890512.063:958): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffeb262c980 a2=3 a3=0 items=0 ppid=1 pid=6574 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=29 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 13:08:32.083555 systemd[1]: Started session-29.scope - Session 29 of User core.
Dec 16 13:08:32.063000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Dec 16 13:08:32.086518 kernel: audit: type=1327 audit(1765890512.063:958): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Dec 16 13:08:32.086000 audit[6574]: USER_START pid=6574 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 13:08:32.095802 kernel: audit: type=1105 audit(1765890512.086:959): pid=6574 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 13:08:32.093000 audit[6577]: CRED_ACQ pid=6577 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 13:08:32.103360 kernel: audit: type=1103 audit(1765890512.093:960): pid=6577 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 13:08:32.407272 sshd[6577]: Connection closed by 10.200.16.10 port 53416
Dec 16 13:08:32.407952 sshd-session[6574]: pam_unix(sshd:session): session closed for user core
Dec 16 13:08:32.409000 audit[6574]: USER_END pid=6574 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 13:08:32.417019 systemd[1]: sshd@26-10.200.4.43:22-10.200.16.10:53416.service: Deactivated successfully.
Dec 16 13:08:32.418355 kernel: audit: type=1106 audit(1765890512.409:961): pid=6574 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 13:08:32.420734 systemd[1]: session-29.scope: Deactivated successfully.
Dec 16 13:08:32.409000 audit[6574]: CRED_DISP pid=6574 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 13:08:32.425379 kernel: audit: type=1104 audit(1765890512.409:962): pid=6574 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 13:08:32.426418 systemd-logind[2506]: Session 29 logged out. Waiting for processes to exit.
Dec 16 13:08:32.429218 systemd-logind[2506]: Removed session 29.
Dec 16 13:08:32.415000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.200.4.43:22-10.200.16.10:53416 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:08:33.711100 kubelet[4018]: E1216 13:08:33.710659 4018 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7798f6444b-p9dhf" podUID="d35c67aa-255b-42a2-83b2-79e30256e265"
Dec 16 13:08:34.711625 kubelet[4018]: E1216 13:08:34.711560 4018 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-kffxh" podUID="02113441-a531-45ff-9a40-51f9ff37eeb2"